Major new changes and features

This topic is for announcements of breaking changes in the API and of new features you can use. Subscribe to it to receive notifications about them.

It’s locked and closed so that only the admins can post in it. The developer chat is the place to discuss development of the library.

The full list of changes is always available in the changelog.

Breaking change in v1.0.48: Learner.distributed became Learner.to_distributed.
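
A minimal sketch of the rename in a distributed training script (learn and args.local_rank are assumed to come from your existing script):

learn = learn.to_distributed(args.local_rank)  # was learn.distributed(args.local_rank)
learn.fit_one_cycle(1)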

PS: In the previous version, v1.0.47 (see the sketch after this list):

  • create_cnn was deprecated to become cnn_learner
  • no_split was deprecated to become split_none
  • random_split_by_pct was deprecated to become split_by_rand_pct
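
A minimal sketch of the new names in a typical vision pipeline (assuming the standard v1 data block API; path is assumed to point at your images):

from fastai.vision import *

data = (ImageList.from_folder(path)
        .split_by_rand_pct(0.2)              # was random_split_by_pct
        .label_from_folder()
        .databunch())
learn = cnn_learner(data, models.resnet34)   # was create_cnn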

v1.0.49 is out. The major change is a workaround for a bug in PyTorch 1.0.1 on Windows (see “create_cnn hangs forever”); create_cnn will now work properly.

v1.0.50 is live. The main new features are a bidirectional QRNN and a backward QRNNLayer.

v1.0.51 is live. The main changes are a bug fix in the MixUp callback and the ability to pass streams (buffers or file pointers) to the save/load/export methods (like Learner.save …).
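
For example, you can now save to an in-memory buffer instead of a file path (a minimal sketch; learn is assumed to be an existing Learner):

from io import BytesIO

buf = BytesIO()
learn.save(buf)    # write the model weights to the buffer
buf.seek(0)
learn.load(buf)    # read them back from the buffer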

v1.0.53 is live.

Breaking change: the default hidden size in the AWD LSTM has changed from 1150 to 1152. Why? Because 8 is the magic number: we need multiples of eight to take full advantage of mixed-precision training. With just this change, and making sure the vocab size is a multiple of 8, pretraining a model on Wikitext-103 takes 6 hours instead of 14 to 20. Fine-tuning on IMDB takes one hour instead of three (as long as you have a modern GPU).

New exciting things: a backward pretrained model (demonstrated in this example reproducing the 95.4% accuracy on IMDB from ULMFiT) and an experimental SentencePiece tokenizer.

v1.0.56 is live. Apart from a few bug fixes, the main addition is that QRNNs now support mixed-precision training (thanks to a suggestion from @TomB).

As with other models, if you’re using a modern GPU and make sure all your tensors have dimensions that are multiples of 8, you can hope for a 2x speed-up in training.
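
A minimal sketch of turning this on (assuming the v1 text API; enabling QRNNs via the config dict is an assumption based on the standard AWD_LSTM configs):

from fastai.text import *

config = awd_lstm_clas_config.copy()
config['qrnn'] = True    # use QRNN layers instead of LSTMs (no pretrained QRNN weights, hence pretrained=False)
learn = text_classifier_learner(data, AWD_LSTM, config=config, pretrained=False).to_fp16()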

@trust_level_0 There are major changes coming, so I figured I should send you all a message to let you know to keep an eye out for them. I’ll post updates in this thread, so please watch it by clicking the “Normal” button at the bottom of the thread and selecting “Watch”:

fastai v2 will be released in the next few weeks, and it is not API-compatible with fastai v1 (it’s a from-scratch rewrite). It’s much easier to use, more powerful, and better documented than v1, and there’s even a book (624 pages!) about it. The book is also available for free as Jupyter notebooks. If you’re interested in experimenting with the pre-release version of fastai v2, you can get it here: http://dev.fast.ai/.

fastai v2 only works with the upcoming 2020 version of the course. It won’t work with any previous version. The 2020 version of the course will be available at the same time that fastai v2 is officially released. If you’re currently working through one of the existing courses, keep going! 🙂 The basic concepts you’re learning will be just as useful for fastai v2. There is no 2020 version of part 2 of the course recorded yet, and we don’t have a date for when that might happen.

The 2020 version of the course includes material covering both machine learning and deep learning. So there won’t be a separate “Introduction to Machine Learning” (although the old one will still be available).

fastai v1 will continue to be available, and we’ll continue to provide bug fixes (and accept pull requests for it). To pin your fastai version to v1 (i.e., to avoid it upgrading automatically to v2), run the following command (assuming you use conda):

 echo 'fastai 1.*' >> $CONDA_PREFIX/conda-meta/pinned

Then, when you’re ready to upgrade to v2, remove the $CONDA_PREFIX/conda-meta/pinned file.

The GitHub repo for fastai v1 will shortly be renamed to fastai/fastai1, and the repo for v2 will shortly be renamed from fastai/fastai2 to fastai/fastai.

If you’re interested in getting involved in fastai2 development, or just watching my live coding sessions (which I do most days), connect to our Discord server, where I stream my live coding and where there’s real-time fastai2 development discussion.

I’ll be moving all fastai development forum discussion into #fastai-users:fastai-dev. The forums are the best place to ask questions.

Note that the latest fastai2 requires PyTorch 1.6. If you need to use an earlier version of PyTorch (e.g., I think Kaggle and Colab haven’t upgraded yet), use fastai2 0.0.20 or earlier.
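
For example, to stay on the last compatible release (a minimal sketch, assuming you install with pip):

pip install "fastai2==0.0.20"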

Alternatively, you can install PyTorch 1.6 in Colab (and maybe Kaggle too) using the instructions in this link.

There’s a change in PyTorch 1.6 to the file format used for saving models (see the Deprecations section of the release notes). In fastai v1, as of v1.0.63, we set a flag telling PyTorch to continue saving models in the previous format, to avoid compatibility problems.

In fastai v2, however, models will be saved in the new PyTorch 1.6 format, since fastai v2 requires PyTorch 1.6 anyway.
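
For reference, this is the kind of flag involved (a minimal sketch; where exactly fastai v1 applies it is an implementation detail, and model is assumed to be an existing nn.Module):

import torch

# PyTorch 1.6 switched torch.save to a zip-based format by default;
# this flag keeps the old format for compatibility with earlier versions
torch.save(model.state_dict(), 'model.pth', _use_new_zipfile_serialization=False)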

fastai2 and the new course will be released on Aug 21st.

The new course includes both machine learning and deep learning - we don’t have separate courses for these two topics any more.

Change in fastai2: all the callbacks that previously started with begin_ now start with before_.

If you have written your own callbacks, here’s how to change them all to use the new event names (before_*):

cd nbs
shopt -s globstar   # enable recursive ** globbing in bash
perl -pi -e 's/begin_fit/before_fit/g' **/*.ipynb
perl -pi -e 's/begin_validate/before_validate/g' **/*.ipynb
perl -pi -e 's/begin_epoch/before_epoch/g' **/*.ipynb
perl -pi -e 's/begin_train/before_train/g' **/*.ipynb
perl -pi -e 's/begin_batch/before_batch/g' **/*.ipynb
nbdev_build_lib     # regenerate the library from the edited notebooks

The fastai v1 code has now been moved to the fastai1 repo, and fastai2 has been moved to the fastai repo. No changes yet to PyPI or conda versions. All commit history etc. has been maintained.

This is to get ready for the v2 release on Friday.

fastai2 has been renamed to fastai in master on every fast.ai repo. PyPI and conda won’t be changed until tomorrow morning, however.

To change fastai2 to fastai in your code, run the following in a directory containing files you’d like to modify:

shopt -s globstar   # enable recursive ** globbing in bash
perl -pi -e 's/fastai2/fastai/g' **/*

In fastcore, store_attr’s API has changed: self is now the second parameter and is optional, and the list of parameter names is optional too.
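
A minimal sketch of the new calling convention (assuming current fastcore):

from fastcore.utils import store_attr

class Point:
    def __init__(self, x, y):
        store_attr()   # no explicit names or self needed; stores self.x and self.y

p = Point(1, 2)
print(p.x, p.y)   # 1 2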

I just pushed a significant refactor of fastcore to master, which moves metaclasses and delegates to a new meta module, moves log_args to its own module, and removes inspect from all of foundation, utils, and dispatch. These changes will roughly double the speed of importing those modules.
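
If you imported these symbols directly, the change looks something like this (module name per the post; the exact symbols shown are assumptions):

# metaclasses and delegates now live in fastcore.meta
from fastcore.meta import delegates, PrePostInitMeta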
