Now that the second part of the course v3 is over, we will port many of the changes introduced in it into the library. Below is the roadmap for v1.2, which will contain a few breaking changes. If you see anything from the lessons that you thought would be nice to have in fastai v1.2 that’s not mentioned here and isn’t in 1.0, let us know! And if there are things in 1.0 that you think might be incompatible with any of the ideas below, please tell us that too.
Since this information is based on part2-v3 we’re putting it in this forum. We’ll provide information in #fastai-users:fastai-dev closer to the release.
Important
There will be one last release of v1.0, which will then live in a separate branch for bug fixes. Once we’re ready, the preparation of v1.2 will be done on master, so master will be very unstable for a while until we’ve stopped breaking everything. We expect to begin in a week or two. During initial development we’ll be doing releases as v1.1, and once it seems to be working OK we’ll do a release as v1.2. So please avoid installing any 1.1 release unless you’re interested in contributing to 1.1 development and testing (i.e. don’t expect it to work!), and stick with 1.0 until you’re ready to port your code to the new version.
New callback system
One of the highlights of v1.2.0 will be the new callback system developed in notebook 04 and then refined in 9b. This will be a breaking change for anyone with custom callbacks (they’ll need to be rewritten against the new API), but it shouldn’t impact most users.
We will use something similar to the namespaces in 11a to get tab-completion and typo-proof event names.
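To illustrate the idea (the names and the exact mechanism here are assumptions, not the final API): putting event names in a namespace means your editor can tab-complete them, and a typo fails loudly instead of registering a handler for an event that never fires.

```python
from enum import Enum

class Event(Enum):
    """Illustrative event namespace: members tab-complete, and a typo
    raises AttributeError instead of silently doing nothing."""
    begin_fit = "begin_fit"
    begin_epoch = "begin_epoch"
    after_batch = "after_batch"
    after_fit = "after_fit"

# Correct names are discoverable and safe to reference:
print(Event.after_batch.value)  # after_batch

# A typo fails at lookup time rather than being silently ignored:
try:
    Event.after_bacth
except AttributeError as e:
    print("caught typo:", e)
```

With plain strings, `"after_bacth"` would just be an event nobody ever triggers; with a namespace, the mistake surfaces immediately.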
As a consequence, `DeviceDataLoader` will disappear and be replaced by the cuda callback we have in 04. `DataBunch.normalize` will also change to use a callback behind the scenes. All callbacks will have to be rewritten, in particular `MixedPrecision`, which will use Apex utilities (we will either build a conda package for the ones we need or use the code from NVIDIA).
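To make the mechanism concrete, here is a toy version of such a callback system — a sketch only; the method names, the dispatch logic, and the `CudaCallback` behavior are all stand-ins, not the final v1.2 API:

```python
class Callback:
    """Base class: a callback overrides any subset of the event methods."""
    def begin_fit(self, learn): pass
    def begin_batch(self, learn): pass
    def after_batch(self, learn): pass
    def after_fit(self, learn): pass

class CudaCallback(Callback):
    """Toy stand-in for the cuda callback: prepares each batch before the
    model sees it (here we just tag the batch instead of moving it to GPU)."""
    def begin_batch(self, learn):
        learn.batch = ("on_device", learn.batch)

class Learner:
    """Minimal runner that fires events on its callbacks at each stage."""
    def __init__(self, data, cbs):
        self.data, self.cbs = data, cbs
        self.seen = []

    def _event(self, name):
        for cb in self.cbs:
            getattr(cb, name)(self)

    def fit(self):
        self._event("begin_fit")
        for b in self.data:
            self.batch = b
            self._event("begin_batch")
            self.seen.append(self.batch)   # the "training step"
            self._event("after_batch")
        self._event("after_fit")

learn = Learner(data=[1, 2], cbs=[CudaCallback()])
learn.fit()
print(learn.seen)  # [('on_device', 1), ('on_device', 2)]
```

The point is that device placement becomes just another callback hooked into `begin_batch`, rather than a dedicated `DeviceDataLoader` wrapper class.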
Data block API supercharged
The final design is still being discussed, but the idea is to move toward a `DataBlock` class where the user provides the four essential functions needed to assemble the data (download/get the source, fetch the items, split the items, label the items), a bit like what is in the 08c data block swift notebook (which was inspired by a refactoring from @AlexisGallagher). This will be the most significant breaking change, but we think it will make the use of the data block API smoother for everyone and its customization even easier.
For instance, this is how it could look on the pets dataset:
```python
class PetsData(DataBlock):
    x_cls,y_cls = ImageGetter,CategoryGetter
    def get_source(self):        return untar_data(URLs.PETS)
    def get_items(self, source): return get_image_files(source/'images')
    def split(self, items):      return random_splitter(items)
    def label(self, items):      return label_from_re(items, pat=r'/([^/]+)_\d+.jpg$')
```
Then you can initialize with the transforms you want and directly get a `DataBunch`:

```python
data = PetsData(tfms=tfms).databunch()
```
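To show how the four pieces could fit together, here is a toy base class driving the same four steps on fake data. The real design is still under discussion, so everything here — the orchestration in `databunch`, the fake filenames, the split and labeling helpers — is illustrative only:

```python
class DataBlock:
    """Toy orchestration of the four steps: source -> items -> split -> label."""
    def get_source(self):        raise NotImplementedError
    def get_items(self, source): raise NotImplementedError
    def split(self, items):      raise NotImplementedError
    def label(self, items):      raise NotImplementedError

    def databunch(self):
        source = self.get_source()
        items = self.get_items(source)
        train_idx, valid_idx = self.split(items)
        labeled = self.label(items)
        train = [labeled[i] for i in train_idx]
        valid = [labeled[i] for i in valid_idx]
        return train, valid

class FakePetsData(DataBlock):
    """Stand-in for the pets example, with filenames instead of real images."""
    def get_source(self):
        return ["beagle_1.jpg", "siamese_2.jpg", "beagle_3.jpg", "persian_4.jpg"]
    def get_items(self, source):
        return source
    def split(self, items):
        idx = list(range(len(items)))
        return idx[:-1], idx[-1:]                   # hold out the last item
    def label(self, items):
        return [(f, f.rsplit("_", 1)[0]) for f in items]  # label from filename

train, valid = FakePetsData().databunch()
print(train)  # [('beagle_1.jpg', 'beagle'), ('siamese_2.jpg', 'siamese'), ('beagle_3.jpg', 'beagle')]
```

The appeal of this shape is that each of the four concerns is a single overridable method, so customizing one step doesn’t require understanding the whole pipeline.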
Speaking of transforms…
Data augmentation
We won’t keep the current pipeline; instead we propose a mix of:
- transforms of single image byte tensors on the CPU with PIL
- transforms of single image float tensors on the CPU with pytorch (i.e. similar to v1.0)
- transforms of batches of images on the GPU once the resize is done (like in notebook 10)
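Schematically, the three stages compose like this. NumPy arrays stand in for PIL images and torch tensors, and the specific transforms (flip, scaling, normalization) are just placeholders for whatever each stage would actually do:

```python
import numpy as np

def cpu_byte_tfm(img):
    """Stage 1: per-image transform on uint8 data (PIL territory), e.g. a flip."""
    return img[:, ::-1]

def cpu_float_tfm(img):
    """Stage 2: per-image transform on float data, e.g. scaling to [0, 1]."""
    return img.astype(np.float32) / 255.0

def gpu_batch_tfm(batch):
    """Stage 3: one transform applied to the whole (resized) batch at once,
    e.g. normalization. On the GPU this would act on a single batch tensor."""
    return (batch - batch.mean()) / (batch.std() + 1e-6)

# Two tiny fake 2x2 "images"
imgs = [np.array([[0, 255], [128, 64]], dtype=np.uint8) for _ in range(2)]

# Per-image CPU stages run item by item; the batch stage runs once per batch.
batch = np.stack([cpu_float_tfm(cpu_byte_tfm(im)) for im in imgs])
out = gpu_batch_tfm(batch)
print(out.shape)  # (2, 2, 2)
```

Running the batch stage once per batch instead of once per image is what makes the GPU step worthwhile.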
Stateful Optimizer
The Stateful Optimizer of notebook 09 will be added to make custom optimizers easy. The default in fastai will be to use it for AdamW, probably with a larger epsilon than PyTorch’s default, since we’ve found this helps in a lot of situations.
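The rough idea is an optimizer that carries per-parameter state updated by composable "stat" functions, with separate "stepper" functions applying the update rule. Here is a toy version with SGD momentum as the state — pure Python, and the names and signatures are assumptions rather than the notebook 09 API:

```python
class StatefulOptimizer:
    """Toy optimizer: stats maintain per-parameter state, steppers use it
    to update the parameters."""
    def __init__(self, params, steppers, stats, lr=0.1):
        self.params, self.steppers, self.stats, self.lr = params, steppers, stats, lr
        self.state = {i: {} for i in range(len(params))}

    def step(self, grads):
        for i, g in enumerate(grads):
            state = self.state[i]
            for stat in self.stats:           # update running state (e.g. momentum)
                stat(state, g)
            for stepper in self.steppers:     # then apply the update rule
                self.params[i] = stepper(self.params[i], g, state, self.lr)

def momentum_stat(state, g, mom=0.9):
    """Keep an exponentially-weighted running gradient in the state dict."""
    state["avg"] = mom * state.get("avg", 0.0) + g

def momentum_step(p, g, state, lr):
    """SGD-with-momentum update using the stored running gradient."""
    return p - lr * state["avg"]

opt = StatefulOptimizer([1.0], steppers=[momentum_step], stats=[momentum_stat], lr=0.1)
opt.step([1.0])   # avg = 1.0,  p = 1.0 - 0.1*1.0  = 0.9
opt.step([1.0])   # avg = 1.9,  p = 0.9 - 0.1*1.9  = 0.71
print(opt.params)
```

Because state updates and parameter updates are separate lists of functions, variants like Adam fall out of swapping in different stats and steppers rather than writing a whole new optimizer class.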
NLP preprocessing
We will port the fast tokenizer from notebook 11 and try to address the RAM issue a lot of you have experienced, probably with a version of `TokenizeProcessor` that uses temporary files.
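One way temporary files keep RAM flat is to tokenize in chunks and spill each chunk to disk, so only one chunk of tokens is ever in memory. A rough sketch of that idea — the real `TokenizeProcessor` doesn’t exist yet, and the whitespace tokenizer here is just a stand-in:

```python
import os
import tempfile

def tokenize(text):
    """Stand-in tokenizer: lowercase + whitespace split."""
    return text.lower().split()

def tokenize_to_tempfile(texts, chunk_size=2):
    """Tokenize texts chunk by chunk, writing each chunk to a temp file so
    only `chunk_size` texts' worth of tokens is held in RAM at once.
    Returns the path of the file, one space-joined token list per line."""
    fd, path = tempfile.mkstemp(suffix=".tok")
    with os.fdopen(fd, "w") as f:
        for i in range(0, len(texts), chunk_size):
            for t in texts[i:i + chunk_size]:
                f.write(" ".join(tokenize(t)) + "\n")
    return path

texts = ["Hello World", "FastAI is fun", "Temp files save RAM"]
path = tokenize_to_tempfile(texts)
with open(path) as f:
    lines = f.read().splitlines()
os.remove(path)
print(lines)  # ['hello world', 'fastai is fun', 'temp files save ram']
```

Downstream steps (numericalization, batching) would then stream from the file rather than from an in-memory list of token lists.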
There will also be a `SentencepieceTokenizer` if you have sentencepiece installed (it won’t be a dependency of the library).