Major new changes and features


This topic is for announcements of all breaking changes in the API and new features you can use. Subscribe to it to receive notifications about them.

It’s locked and closed so that only the admins can post in it. The developer chat is the place to discuss development of the library.

The full list of changes is always available in the changelog.



Breaking change v1.0.48: Learner.distributed became Learner.to_distributed.

PS: In the previous version, v1.0.47:

  • create_cnn was deprecated to become cnn_learner
  • no_split was deprecated to become split_none
  • random_split_by_pct was deprecated to become split_by_rand_pct
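For readers updating old code, these renames can be captured in a small lookup table. This is a hypothetical helper of my own, not part of fastai; the name pairs come from the announcements above:

```python
# Map deprecated fastai v1 names to their replacements
# (pairs taken from the v1.0.47/v1.0.48 announcements).
RENAMES = {
    "Learner.distributed": "Learner.to_distributed",  # v1.0.48
    "create_cnn": "cnn_learner",                      # v1.0.47
    "no_split": "split_none",                         # v1.0.47
    "random_split_by_pct": "split_by_rand_pct",       # v1.0.47
}

def current_name(name: str) -> str:
    """Return the up-to-date name for a possibly deprecated one."""
    return RENAMES.get(name, name)
```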


v1.0.49 is out. The major change is a workaround for a bug in PyTorch 1.0.1 on Windows (see create_cnn hangs forever). It will now work properly.



v1.0.50 is live. The main new features are a bidirectional QRNN and a backward QRNNLayer.



v1.0.51 is live. The main change is a bug fix in the MixUp callback and the ability to pass streams (buffers or file pointers) in the save/load/export methods (like …)
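The path-or-buffer pattern those methods now accept can be sketched like this. This is an illustrative sketch using pickle, not fastai's actual save/load/export implementation:

```python
import io
import pickle

def save_to(obj, file):
    """Write obj to a path (str) or a file-like object (buffer, file pointer)."""
    if isinstance(file, str):
        with open(file, "wb") as f:
            pickle.dump(obj, f)
    else:
        pickle.dump(obj, file)

def load_from(file):
    """Read an object back from a path or a file-like object."""
    if isinstance(file, str):
        with open(file, "rb") as f:
            return pickle.load(f)
    return pickle.load(file)

# Round-trip through an in-memory buffer instead of a file on disk:
buf = io.BytesIO()
save_to({"epoch": 3}, buf)
buf.seek(0)
restored = load_from(buf)
```

Accepting either a path or a stream is handy for saving to databases, cloud storage, or tests without touching the filesystem.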



v1.0.53 is live

Breaking change: the default embedding size in the AWD LSTM has changed from 1150 to 1152. Why? Because 8 is the magic number: we need multiples of eight to take full advantage of mixed precision training. With just this change and making sure the vocab size is a multiple of 8, pretraining a model on Wikitext-103 takes 6 hours instead of 14 to 20. Fine-tuning on IMDB takes one hour instead of three (as long as you have a modern GPU).
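To get that benefit in your own models, round sizes up to the next multiple of 8. A minimal helper (my own sketch, not a fastai function):

```python
def round_to_multiple(n: int, base: int = 8) -> int:
    """Round n up to the nearest multiple of base (8 for mixed precision)."""
    return ((n + base - 1) // base) * base

# The AWD LSTM change above is exactly this rounding:
# round_to_multiple(1150) -> 1152
```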

New exciting things: a backward pretrained model (demonstrated in this example reproducing the 95.4% accuracy on IMDB from ULMFiT) and an experimental SentencePiece tokenizer.



v1.0.56 is live. Apart from a few bug fixes, the main addition is that QRNNs now support mixed-precision training (thanks to a suggestion from @TomB).

As with other models, if you’re using a modern GPU and make sure all your tensor dimensions are multiples of 8, you can hope for a 2x speed-up in training.