Summary of what’s mentioned in this post
- Book release expected Aug 4; library and course release a couple of weeks before that.
- In the next month or two I’ll be wrapping up my work with Masks4All, and the book and course, and will be working full time after that on fastai. @sgugger will not be full-time with fast.ai any more, but will be continuing to contribute.
- I plan to focus on developing the fastai library until the problem of AI accessibility is solved, however long that takes.
- fastai is unique in providing a complete end-to-end solution, and we want to focus on the benefits this provides.
- We’ll be working on improving some things in fastai:
- fastai makes it easy to use whatever bits you need on their own, such as just using the training loop, but the documentation needs to be improved showing better examples of this.
- The error messages are sometimes too long and confusing, and this needs to be fixed.
- The community contribution process needs work, to take best advantage of the amazing community we have here.
With the fastai2, course, and book releases coming up soon, it seems like a good time to provide an update on progress and plans.
In the last few months, we’ve finished writing a 600-page book, and recording an 8-lesson course based on the book. We have tried to avoid making significant changes to the fastai2 library during this time, since we wanted to ensure that all the features that were already there were stable and well tested. We’ve been adding a lot of tutorials and improving the documentation of fastai2 during this time, as well as squashing bugs.
The planned release date for the book is August 4th. We’ll probably release the new course (v4) and fastai2 at the same time, a couple of weeks before the book.
As you probably know, @sgugger has recently moved to Hugging Face. He has been an amazing contributor to everything that fast.ai has worked on during his time here, and is widely admired in the community. Whilst he has been working full-time on fast.ai, for the last few months I’ve been spending much of my time on Masks4All - this has meant much less time on fast.ai for me recently, but I did feel that I should prioritize something that had such immediate and significant impacts on global health.
The good news is that the Masks4All project has been successful, and >95% of the world’s population now lives in regions that recommend or require masks. With Masks4All, the book, and the course all wrapping up over the next couple of months, I’ll be able to move my focus back to the fastai2 library, which I’ll be maintaining with the help of the wonderful community here, and @sgugger will continue to be involved too.
fast.ai, and particularly the fastai library, is my passion. It’s what I want to keep doing for the rest of my life - or until we totally solve the problem of making AI accessible to everyone! Whilst other smaller projects will come and go, fastai is the thing that will remain my primary focus.
Since I started the first version of fastai back when PyTorch was first released, quite a few other libraries have popped up that handle various pieces of what fastai handles, such as training loop libraries and data augmentation libraries. This is a good thing! More options mean that more parts of the innovation space are being explored, and open source libraries can take advantage of ideas that work out well in other projects.
Having said that, I don’t think that projects that only tackle subsets of the functionality needed to prepare data and train and host a model can ever achieve what we’re aiming for with fastai. For instance, how do you save a model in a way that includes the data processing it needs to work, if data processing isn’t part of the library you used to build it? How do you show predictions and model errors if your library doesn’t know how to decode the tensors for display purposes (which means knowing the data processing used to create them)? How do you create an application that needs little or no code to solve real practical problems if your library only handles a subset of the functionality required?
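To make the first of those questions concrete, here’s a minimal sketch in plain Python of what “saving a model together with its data processing” means. This is an illustrative toy, not fastai2’s actual export API: every class and name here (`Normalizer`, `MeanModel`, `Bundle`) is hypothetical, and it just shows why a single saved artifact needs both pieces.

```python
import pickle

class Normalizer:
    """Records a training-set statistic; the same statistic is needed again at inference time."""
    def __init__(self, values):
        self.mean = sum(values) / len(values)
    def __call__(self, x):
        return x - self.mean

class MeanModel:
    """A trivial stand-in 'model': adds a learned bias to its (normalized) input."""
    def __init__(self, bias):
        self.bias = bias
    def predict(self, x):
        return x + self.bias

class Bundle:
    """Saves preprocessing and model together, so raw inputs work after loading."""
    def __init__(self, proc, model):
        self.proc, self.model = proc, model
    def predict_raw(self, raw):
        return self.model.predict(self.proc(raw))

train = [1.0, 2.0, 3.0]
proc = Normalizer(train)                     # mean = 2.0
bundle = Bundle(proc, MeanModel(bias=10.0))

blob = pickle.dumps(bundle)                  # one artifact containing both pieces
restored = pickle.loads(blob)
print(restored.predict_raw(5.0))             # (5.0 - 2.0) + 10.0 = 13.0
```

If only `MeanModel` had been saved, the loaded model would silently produce wrong answers on raw inputs, because the normalization statistics would be lost. A library that doesn’t own the data processing step can’t bundle it like this.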
I hope that one day we’ll be able to build “no code” applications for practical deep learning with fastai2, and this isn’t possible if the end-to-end process isn’t handled by the underlying library.
One of the benefits of building fastai2 as a complete solution using a layered API is that all the pieces work in a decoupled way, yet remain very extensible and flexible. This matters in practice for both practitioners and researchers. For instance, a training loop without a complete callback system doesn’t let users assemble a solution by plugging together existing callbacks that combine to do whatever they need. Instead, each piece of functionality has to be added to the underlying library, or added through some limited set of hook points. It’s important to us that users can grab whatever functionality they need from callbacks, and expect any combination of them to “just work”.
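The idea of callbacks composing with no extra wiring can be sketched in a few lines. This is generic Python rather than fastai2’s real API (the event names and the `Trainer` class are assumptions for illustration): the loop fires named events, and any number of independent callbacks can respond to them.

```python
class Callback:
    """Base class: subclasses override only the events they care about."""
    def before_epoch(self, state): pass
    def after_batch(self, state): pass
    def after_epoch(self, state): pass

class Trainer:
    """A toy training loop that exposes its steps as callback events."""
    def __init__(self, callbacks):
        self.cbs = callbacks
    def _run(self, event, state):
        for cb in self.cbs:
            getattr(cb, event)(state)
    def fit(self, data, epochs=1):
        state = {"losses": [], "log": []}
        for epoch in range(epochs):
            self._run("before_epoch", state)
            for batch in data:
                state["losses"].append(float(batch))  # stand-in for a real loss
                self._run("after_batch", state)
            self._run("after_epoch", state)
        return state

class AvgLoss(Callback):
    """Reports the average loss at the end of each epoch."""
    def after_epoch(self, state):
        state["log"].append(sum(state["losses"]) / len(state["losses"]))

class BatchCounter(Callback):
    """An unrelated callback that composes with AvgLoss with no extra wiring."""
    def before_epoch(self, state):
        state["batches"] = 0
    def after_batch(self, state):
        state["batches"] += 1

state = Trainer([AvgLoss(), BatchCounter()]).fit([1.0, 2.0, 3.0])
print(state["log"], state["batches"])  # [2.0] 3
```

Neither callback knows the other exists, yet any combination of them works in the same loop. Without hooks like these, each new behavior would need to be patched into the loop itself.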
Having said that, often people really do just want a training loop, or some data processing, or to serve a pretrained model, and so forth. Currently, our documentation doesn’t do a great job showing how to do these things on their own, which can give the impression that, for instance, people need to understand Transforms and Pipelines and type dispatch and so forth before they can implement a training loop. That’s certainly not the case, and we need to do a better job of demonstrating that. I’ll be building more complete examples based on Kaggle competitions and other datasets, focusing on simplicity and demonstrating how to use subsets of fastai2 functionality both on their own, and in conjunction with other libraries, such as DALI and albumentations.
One area where fastai2 is currently lacking is its error messages when things go wrong. The stack traces are sometimes just too long, which makes debugging hard. That’s something I’ll be focusing on in the coming months, because clear and understandable errors and stack traces are important for the developer experience.
Another area we need to improve is explaining more clearly how to contribute to the library, and which contributions would be most helpful. Unlike some open source projects, fastai doesn’t have an explicit goal of maximizing the number of contributors. I’ve noticed that projects with many contributors can over time become burdened by technical debt and community management overhead. Instead, our goal is to maximize the quality of contributions. We could certainly be doing a much better job of explaining how to contribute most effectively, and of working with the many brilliant community members here to develop a clearer and more scalable approach to community contributions.
Thanks to you all for everything you’ve done for fast.ai and the community. Every time you answer a fellow community member’s question, share your project or writing, and everything else you do to contribute, you help to make this the most helpful, kind, practical, and inspiring place for everyone involved in deep learning!
If you have any questions, please ask!