I’ve started this topic because PyTorch 0.4 is coming and some of us are now experimenting with it and fastai. I’ve run into some issues with it, and since most of us are not yet on 0.4, it doesn’t seem like a good idea to post about them in the other forums. So please post your questions, comments, etc. about PyTorch 0.4 here.
IMDB hangs on Tokenizer().proc_all_mp(partition_by_cores(texts)). In side-by-side tests using the IMDB notebook (PyTorch 0.3.1 vs. 0.4) on my Linux box, proc_all_mp hangs and never completes on 0.4, while it works fine on 0.3.1. I know this is native Python multiprocessing and not related to PyTorch itself, but it happens, and it can be “fixed” by reducing the number of cores passed into the function.
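For anyone who wants to try the workaround, here is a minimal sketch of the idea: cap the number of worker processes instead of using every core. The function and variable names below (tokenize_chunk, proc_all_mp, n_workers) are my own illustration, not fastai’s actual implementation.

```python
from multiprocessing import Pool

def tokenize_chunk(texts):
    # stand-in for the real per-chunk tokenizer work
    return [t.lower().split() for t in texts]

def proc_all_mp(chunks, n_workers=2):
    # capping n_workers below the full core count is what avoided the hang for me
    with Pool(processes=n_workers) as pool:
        results = pool.map(tokenize_chunk, chunks)
    # flatten the per-chunk results back into one list
    return [tok for chunk in results for tok in chunk]
```

The same capping idea applies to the real call: partition the texts into fewer chunks than you have cores, so fewer worker processes are spawned.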
Running the imdb notebook, I ran into an error when the forward method of RNN_Encoder tried to run EmbeddingDropout.forward(). At the line:
X = self.embed._backend.Embedding.apply(words,
masked_embed_weight, padding_idx, self.embed.max_norm,
self.embed.norm_type, self.embed.scale_grad_by_freq, self.embed.sparse)
it crashes with a NotImplementedError. Setting a breakpoint shows:
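For what it’s worth, the crash seems to come from the private self.embed._backend path, which 0.4 no longer supports. A hedged sketch of the replacement I tried is below: call torch.nn.functional.embedding directly with the masked weight instead of going through the backend. The masked_embed_weight here is just the raw embedding weight for illustration; in EmbeddingDropout it would be the dropout-masked copy.

```python
import torch
import torch.nn.functional as F

embed = torch.nn.Embedding(10, 3, padding_idx=0)
words = torch.tensor([[1, 2, 0]])

# in EmbeddingDropout this would be the dropout-masked weight, not embed.weight itself
masked_embed_weight = embed.weight

# direct replacement for self.embed._backend.Embedding.apply(...)
X = F.embedding(words, masked_embed_weight, embed.padding_idx, embed.max_norm,
                embed.norm_type, embed.scale_grad_by_freq, embed.sparse)
```

This keeps the same arguments the old backend call passed, so it should drop into EmbeddingDropout.forward() with minimal changes, but I’d treat it as a sketch until someone confirms it against the 0.4 release notes.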