Major new changes and features

#1

This topic is for announcements of all breaking changes in the API and new features you can use. Subscribe to it to be notified of them.

It’s locked and closed so that only the admins can post in it. The developer chat is the place to discuss development of the library.

The full list of changes is always available in the changelog.


#4

Breaking change in v1.0.48: Learner.distributed became Learner.to_distributed.

PS: in the previous version, v1.0.47:

  • create_cnn was deprecated to become cnn_learner
  • no_split was deprecated to become split_none
  • random_split_by_pct was deprecated to become split_by_rand_pct
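
For reference, here is a minimal sketch of the new names in a typical vision script (the data setup, metrics, and local_rank are illustrative, not from this announcement):

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)
data = (ImageList.from_folder(path)
        .split_by_rand_pct(0.2)   # was random_split_by_pct (split_none replaces no_split)
        .label_from_folder()
        .databunch())

learn = cnn_learner(data, models.resnet18, metrics=accuracy)  # was create_cnn

# In a script launched via torch.distributed, after torch.cuda.set_device(local_rank):
# learn = learn.to_distributed(local_rank)   # was learn.distributed
```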

#5

v1.0.49 is out. The major change is a workaround for a bug in PyTorch 1.0.1 on Windows (see create_cnn hangs forever); create_cnn will now work properly.

#6

v1.0.50 is live. The main new features are a bidirectional QRNN and a backward QRNNLayer.
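
A minimal sketch of what that enables (the sizes are arbitrary, and the keyword names are assumed to mirror nn.LSTM conventions; check the fastai docs for the exact signature):

```python
import torch
from fastai.text.models.qrnn import QRNN

# A 2-layer bidirectional QRNN: it stacks forward and backward QRNNLayers
# and concatenates their outputs along the feature dimension.
qrnn = QRNN(input_size=300, hidden_size=512, n_layers=2, bidirectional=True)
x = torch.randn(8, 25, 300)  # batch x seq_len x features (batch_first)
out, hid = qrnn(x)           # out: batch x seq_len x (2 * hidden_size)
```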


#7

v1.0.51 is live. The main changes are a bug fix in the MixUp callback and the ability to pass streams (buffers or file pointers) to the save/load/export methods (like Learner.save …).
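
For example, saving and reloading through an in-memory buffer instead of a file path (a sketch; learn stands for any existing Learner):

```python
import io

buf = io.BytesIO()
learn.save(buf)   # serialize the weights into the buffer instead of a .pth file
buf.seek(0)       # rewind before reading
learn.load(buf)   # restore the weights from the buffer
```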


#8

v1.0.53 is live.

Breaking change: the default embedding size in the AWD LSTM has changed from 1150 to 1152. Why? Because 8 is the magic number: we need multiples of eight to take full advantage of mixed-precision training. With just this change, and making sure the vocab size is a multiple of 8, pretraining a model on Wikitext-103 takes 6 hours instead of 14 to 20. Fine-tuning on IMDB takes one hour instead of three (as long as you have a modern GPU).
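
A minimal sketch of the mixed-precision setup those timings assume (data_lm stands for a pre-built TextLMDataBunch and is illustrative, not from this post):

```python
from fastai.text import *

# to_fp16 turns on mixed-precision training; tensor cores want dimensions
# that are multiples of 8, hence the 1152 default and the padded vocab.
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5).to_fp16()
learn.fit_one_cycle(1, 1e-2)
```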

New exciting things: a backward pretrained model (demonstrated in this example reproducing the 95.4% on IMDB from ULMFiT) and an experimental SentencePiece tokenizer.
