Update on fastai2 progress and next steps

Summary of what’s mentioned in this post

  • Book release expected Aug 4; library and course release a couple of weeks before that.
  • In the next month or two I’ll be wrapping up my work with Masks4All, and the book and course, and will be working full time after that on fastai. @sgugger will not be full-time with fast.ai any more, but will be continuing to contribute.
  • I plan to focus on developing the fastai library until the problem of AI accessibility is solved, however long that takes.
  • fastai is unique in providing a complete end-to-end solution, and we want to focus on the unique benefits this provides.
  • We’ll be working on improving some things in fastai
    • fastai makes it easy to use whatever bits you need on their own, such as just using the training loop, but the documentation needs better examples showing how to do this.
    • The error messages are sometimes too long and confusing, and this needs to be fixed.
  • The community contribution process needs work, to take best advantage of the amazing community we have here.


With the fastai2, course, and book releases coming up soon, it seems like a good time to provide an update on progress and plans.

In the last few months, we’ve finished writing a 600-page book, and recording an 8-lesson course based on the book. We have tried to avoid making significant changes to the fastai2 library during this time, since we wanted to ensure that all the features that were already there were stable and well tested. We’ve been adding a lot of tutorials and improving the documentation of fastai2 during this time, as well as squashing bugs.

The planned release date for the book is August 4th. We’ll probably release the new course (v4) and fastai2 at the same time, a couple of weeks before the book.

As you probably know, @sgugger has recently moved to Hugging Face. He has been an amazing contributor to everything that fast.ai has worked on during his time here, and is widely admired in the community. Whilst he has been working full-time on fast.ai, for the last few months I’ve been spending much of my time on Masks4All - this has meant much less time on fast.ai for me recently, but I did feel that I should prioritize something that had such immediate and significant impacts on global health.

The good news is that the Masks4All project has been successful, and >95% of the world’s population now lives in regions that recommend or require masks. With the completion of Masks4All, the book, and the course over the next couple of months, I’ll be able to move my focus back to the fastai2 library, which I’ll be maintaining with the help of the wonderful community here, and @sgugger will continue to be involved too.

fast.ai, and particularly the fastai library, is my passion. It’s what I want to keep doing for the rest of my life - or until we totally solve the problem of making AI accessible to everyone! Whilst other smaller projects will come and go, fastai is the thing that will remain my primary focus.

Since I started the first version of fastai back when PyTorch was first released, quite a few other libraries have popped up that handle various pieces of what fastai handles, such as training loop libraries and data augmentation libraries. This is a good thing! More options mean that more parts of the innovation space are being explored, and open source libraries can take advantage of ideas that work out well in other projects.

Having said that, I don’t think that projects that only tackle subsets of the functionality needed to prepare data and train and host a model can ever achieve what we’re aiming for with fastai. For instance, how do you save a model in a way that includes the data processing the model needs in order to work, if data processing isn’t part of the library you used to build the model? How do you show predictions and model errors if your library doesn’t know how to decode the tensors for display purposes (which means needing to know the data processing used to create them)? How do you create an application that needs little or no code to solve real practical problems if your library only handles a subset of the functionality required?

I hope that one day we’ll be able to build “no code” applications for practical deep learning with fastai2, and this isn’t possible if the end-to-end process isn’t handled by the underlying library.
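To make the end-to-end argument concrete, here is a minimal pure-Python sketch (not fastai2’s actual export mechanism; all names below are invented for illustration) of saving a model together with the preprocessing it depends on, so the loaded artifact can predict directly from raw inputs:

```python
import pickle

class Normalize:
    """Preprocessing whose statistics were learned from the training set."""
    def __init__(self, mean, std):
        self.mean, self.std = mean, std
    def __call__(self, x):
        return (x - self.mean) / self.std

def sign_model(x):
    """Stand-in for a trained model: classifies the normalized value."""
    return "pos" if x > 0 else "neg"

class ExportedModel:
    """Bundle model and preprocessing into one artifact, so inference
    reproduces the training-time data processing automatically."""
    def __init__(self, preprocess, model):
        self.preprocess, self.model = preprocess, model
    def predict(self, raw_x):
        return self.model(self.preprocess(raw_x))

bundle = ExportedModel(Normalize(mean=5.0, std=2.0), sign_model)
# One saved artifact carries both pieces; the inference side needs no
# extra configuration to reproduce the training-time processing.
restored = pickle.loads(pickle.dumps(bundle))
print(restored.predict(7.0))  # → pos (7.0 normalizes to +1.0)
```

If preprocessing lived in a separate library, nothing would guarantee the inference side applies the same `mean` and `std` that training used, which is exactly the failure mode described above.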

One of the benefits of building fastai2 as a complete solution using a layered API is that all the pieces work in a decoupled way, yet remain very extensible and flexible. This is important in practice for both practitioners and researchers. For instance, a training loop without a complete callback system doesn’t let users assemble a solution by plugging together existing callbacks that combine to do whatever they need. Instead, each piece of functionality has to be added to the underlying library, or bolted on through some limited set of hook points. It’s important to us that users can grab whatever functionality they need from callbacks, and expect any combination of them to “just work”.
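To illustrate the design idea (this is a toy sketch in plain Python, not fastai2’s actual Callback API), a loop that invokes every registered callback at each hook point lets independent callbacks be combined without touching the loop itself:

```python
class Callback:
    # No-op hooks; subclasses override only what they need.
    def before_fit(self, loop): pass
    def after_batch(self, loop): pass
    def after_fit(self, loop): pass

class TrainLoop:
    """Toy training loop that calls every callback at each hook point,
    so independent callbacks can be freely combined."""
    def __init__(self, batches, callbacks):
        self.batches, self.callbacks = batches, callbacks
        self.loss = None
    def _run(self, hook):
        for cb in self.callbacks:
            getattr(cb, hook)(self)
    def fit(self):
        self._run("before_fit")
        for b in self.batches:
            self.loss = b * 0.5  # stand-in for a real training step
            self._run("after_batch")
        self._run("after_fit")

class LossLogger(Callback):
    """One self-contained piece of functionality, added without
    modifying TrainLoop or any other callback."""
    def before_fit(self, loop): self.losses = []
    def after_batch(self, loop): self.losses.append(loop.loss)

logger = LossLogger()
TrainLoop([2, 4], [logger]).fit()
print(logger.losses)  # → [1.0, 2.0]
```

fastai2’s real callbacks have many more hook points and can read and modify the training state, but the composition principle is the same: any combination of callbacks should “just work”.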

Having said that, often people really do just want a training loop, or some data processing, or to serve a pretrained model, and so forth. Currently, our documentation doesn’t do a great job showing how to do these things on their own, which can give the impression that, for instance, people need to understand Transforms and Pipelines and type dispatch and so forth before they can implement a training loop. That’s certainly not the case, and we need to do a better job of demonstrating that. I’ll be building more complete examples based on Kaggle competitions and other datasets, focusing on simplicity and demonstrating how to use subsets of fastai2 functionality both on their own, and in conjunction with other libraries, such as DALI and albumentations.

One area where fastai2 is currently wanting is the error messages when things don’t work right. Sometimes the stack traces are just too long, and that makes it hard to debug. That’s something that I’ll be focusing on in the coming months, because clear and understandable errors and stack traces are important for the developer experience.

Another area that we need to improve is more clearly explaining how to contribute to the library, and what contributions would be most helpful. Unlike some open source projects, fastai doesn’t have an explicit goal of maximizing the number of contributors. I’ve noticed that projects with many contributors can over time become burdened by technical debt and community management overhead. Instead, our goal is to maximize quality of contributions. We could certainly be doing a much better job of explaining how to contribute most effectively, and better working with the many brilliant community members here to develop a more clear and scalable approach to community contributions.

Thanks to you all for everything you’ve done for fast.ai and the community. Every time you answer a fellow community member’s question, share your project or writing, and everything else you do to contribute, you help to make this the most helpful, kind, practical, and inspiring place for everyone involved in deep learning! :smiley:

If you have any questions, please ask!


Thanks very much for this update Jeremy!

I’m one of the folks who is deeply invested in the library and community, and also a bit nervous about its future (you can read about both my concerns and those of other fastai devs here).

I don’t so much have a bunch of questions as recommendations, based on my 20+ years in the software development game and 3 years (I think) with fastai. Would love to get your thoughts.

1. Establish a set of core maintainers for the library who define the vision and the API, and work together to approve and merge PRs into the fold.

Having you off working on masks and then Sylvain leaving for Hugging Face is reason enough to see the benefits of having more core contributors, but there are others. For instance, I know it’s gotta be stressful as hell trying to build this thing with just yourself and one other person … and it may even slow down its progress. And while I know that this is your baby, there are a bunch of us who share your vision and would love to be a part of this. I think this is a net positive for the library, for you, and for the entire community.

Imagine instead of having to bear the burden of this yourself, there was a team of folks who could shoulder this burden alongside you and work with you to shape the library into something durable. A team of folks that are opinionated, willing to argue, willing to debate, willing to work with one another to define, build, and support the library. It’s as much a win-win as I think you can get.

2. Adopt a more traditional coding standard.

I hope you’re not rolling your eyes on this one :slight_smile: I have to mention it because it’s probably the one thing that bothers me most about the framework. The dense, pack-as-much-as-you-can-onto-a-single-line coding style doesn’t follow any recommended practices I’ve ever been exposed to … and I’ve coded in quite a few languages. Thus, I would recommend moving towards a more readable, standard (while not too restrictive) coding style. I think this would go a long way in gathering adoption from the software development community you want to help grow the library.

3. Reduce the amount of indirection in the library and the resulting cryptic error messages that usually only Sylvain can explain/resolve.

When Sylvain left my first thought was, “Oh f**k, who’s going to interpret the error messages now?” (that is when folks include them). You mentioned this above so I won’t belabor the point.

4. More integration with SOTA libraries like Hugging Face, Ross Wightman’s plethora of pretrained models, and GBDTs like xgboost/catboost/etc.

I like ULMFiT and am using it professionally, but w/r/t NLP, there are so many things you can do with Hugging Face transformers that I know myself, and others here, have started creating our own libraries to bridge the gap (e.g., blurr is a library I created for this exact purpose). I think these things need to be included in the library so folks don’t have to go searching for (and maybe not finding) them, when they are such common integration points in their respective domains.

Which libraries should fastai integrate with? I think that would be the purview of the core contributor team (see #1 above). Hugging Face is just an example of one I think must be part of that integration package.

In summary, almost everything I’ve learned about deep learning, I’ve learned here from you. This is your baby and ultimately your call on what you want to do going forward. I hear your vision … I share it … and I think there is a lot to gain by building a team around it: for the library, for you, for your family, and for your ability to keep doing the things you love and from which many of us here have benefited.



To add onto this, I know an extension guide was in the works, which may be a good idea, but I worry about having too many extensions. If I understand your thoughts behind the library here (you can disagree if wrong :slight_smile: ), fastai is kind of like fastcore, or at least that’s personally how I see it: a library built on top of another library base. That’s not exactly what I think I’d like to aim for, because having things centralized would be better. Should there be a good vetting process first, through a few senior members who all agree something is valuable? Yes. But there have already been so many developments just from people’s extensions, such as the blurr library, and I (and a few others) have made integrations with Ross’ libraries a few different ways, one of which requires zero major code changes and instead just one function to call the actual pretrained model. I feel both of those extensions should be part of a main library focused on being SOTA but also approachable.

I can also understand why you’ve wanted to keep it simple for right now, due to the paper and the book. However, long term I think it would make the library so much more valuable to bring them in-house. One option I see is to have people develop these sub-libraries, and then, if this set of devs (or the public) agrees it’s a valuable integration, a merge could be made. That’s my $.02 on the idea and how it could work. Others can absolutely weigh in on how they see it too; I am still in school and have plenty to learn about how open source works in reality outside this library :slight_smile:


I was thinking that, if possible, fastai could try something like Rust does: the compiler shows the diagnostic and an error code, and if you pass an extra flag it prints a full description of that error code, with examples of both correct code and the common mistake.

It would be nice to have something similar for common errors.

Example: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=c73e247e2be4b50c4267a54c1975e905 shows the error you get on the console, and `rustc --explain E0277` gives the full description: https://doc.rust-lang.org/stable/error-index.html#E0277
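As a rough sketch of how an opt-in “explain” mechanism could look in Python (the error code, registry, and messages below are all invented for illustration, not anything fastai provides):

```python
# Hypothetical registry mapping short error codes to longer explanations,
# in the spirit of `rustc --explain`.
EXPLANATIONS = {
    "F0001": (
        "Shape mismatch between model output and targets.\n"
        "Common cause: the final layer size does not match the number "
        "of classes in your labels."
    ),
}

class ExplainableError(RuntimeError):
    """Error carrying a short one-line message plus a lookup code."""
    def __init__(self, code, short_msg):
        super().__init__(f"{short_msg} [{code}] "
                         f"(run explain('{code}') for details)")
        self.code = code

def explain(code):
    """Opt-in long-form explanation, kept out of the stack trace."""
    return EXPLANATIONS.get(code, f"No explanation registered for {code}")

try:
    raise ExplainableError("F0001", "Output/target shapes differ")
except ExplainableError as e:
    print(e)                # short diagnostic with a code
    print(explain(e.code))  # longer explanation only when asked for
```

The point of the split is that the default stack trace stays short, while the detailed guidance is one explicit call away.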


:clap: I have been exploring the fastai (v1.0) library for the last 3 months, and I wanted to let you know how much I appreciate your work. I’m very curious about the improvements fastai2 will bring; I could not follow the development of the new library, so I am not aware of the changes that have been introduced.
Do you think migrating old code written with v1 to fastai2 will involve considerable work?


Thank you for the update Jeremy! I was really concerned about using fastai2, so thanks for the clarification. The fastai course is literally the only reason I got into deep learning; I owe it to this amazing course and to you for publishing it for free.

As you said, the documentation needs to be improved. Currently I don’t know much about how fastai works internally; once I figure that out I will try to contribute to the repo. Meanwhile, I will also try to answer people’s questions on the forum when I know the answer.

My suggestions:
As @wgpubs said, I also think integration with Hugging Face is a must. To add to it, I also think fastai needs native object detection support.

I really enjoyed reading the fastbook. It explained a lot of things like Transforms, Pipeline, and TfmdLists clearly. We need chapters like those as documentation for everything happening inside fastai2.


This is what I’m saying as well.

In order to democratize deep learning, the core deep learning tasks should be included in the library. Insofar as NLP is concerned, there is no question that Hugging Face provides tremendous value in terms of using SOTA transformer models for the myriad of tasks folks doing NLP want to do. Thus, it’s a prime candidate for inclusion into the library imo.

That’s why you need a team of core contributors to debate and determine what should be included. More fringe functionality is probably better relegated to referenced 3rd party libraries built on top of fastai.


Yes, we’ll be doing more integration with 3rd party libs like Ross’ and Hugging Face. And reducing indirection, or finding other ways to improve error messages.

I won’t be changing the coding style I use, or having a committee set the direction or make decisions around the library - it’s an opinionated project and opinionated projects work best with a BDFL, in my opinion. Software managed by committee tends to stultify. But that still leaves lots of room to better harness this great community.


The library really needs more core developers. Not only to implement new features, but also because, with a deep understanding of the internals, they can help users when they face problems. Right now we have an issue with exporting a Learner in the audio extension that has been open for over a month, because we just hit a wall with fastai2 and no one knows what is happening, even after posting on the forums for help.

Also, the PyTorch ecosystem is huge, and I would love to see the library trying to embrace it more instead of replicating everything in its own way. For example, if I have a vanilla PyTorch DataLoader with a custom collate_fn and batch_sampler, and want to use it with the fastai training loop, I have no option but to rewrite it, because they’re not compatible. I consider this kind of incompatibility with the library you build on top of to be a huge design flaw that is holding fastai2 back.
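For readers unfamiliar with the terms above: a `collate_fn` decides how a list of individual samples becomes one batch. Here is a minimal pure-Python illustration of a custom one (using lists in place of tensors; the function name is invented), padding variable-length sequences to a common length:

```python
def pad_collate(samples, pad_value=0):
    """Custom collate: pad variable-length sequences to the longest
    one in the batch, then stack inputs and labels separately."""
    seqs, labels = zip(*samples)
    max_len = max(len(s) for s in seqs)
    padded = [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]
    return padded, list(labels)

# Two (sequence, label) samples of different lengths become one batch.
batch = pad_collate([([1, 2, 3], 0), ([4], 1)])
print(batch)  # → ([[1, 2, 3], [4, 0, 0]], [0, 1])
```

In real PyTorch, `collate_fn` is passed to `torch.utils.data.DataLoader` and returns tensors; the complaint above is that a DataLoader configured this way can’t be dropped into fastai2’s training loop unchanged.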

I have lots of small problems with the library (coding style included), but the foundations are solid and not present anywhere else on the python ecosystem, so I’ll keep working on fastai2_audio as my way to help the community.


I would argue that having you off working on Masks4All, and Sylvain’s departure, have also had a stultifying effect … both on the library and the community.

I hate this question higher-ups have posed to me, but it’s a good question: what happens to your project if something happens to you? What happens to your efforts to democratize AI if the only person framing and maintaining the library disappears?

I think there is room to have a set of core contributors where you still maintain control, so I don’t think it’s an either/or request, nor something that would stall progress. If anything, it would mean there would be people who’d respond to PRs, respond to the forum, and people you could bounce ideas off who are experienced, informed, and invested.


This was kind of the role Sylvain had when he was around. So is the core question whether or not there will be a replacement for Sylvain’s position?


If another Sylvain comes around, absolutely! :slight_smile:


Yes totally agree, and that’s something I plan to work on - it’s the kind of thing I had in mind when I said “better working with the many brilliant community members here to develop a more clear and scalable approach to community contributions”.

It’s a fair question! :slight_smile: I am not and won’t be the only person maintaining the library, but I will be continuing to set its overall direction. If I disappear, it’s the same thing that would happen in the case of Linux or Python (until recently), where there is one person setting direction. In Python’s case, in fact, that happened, when Guido stepped down as BDFL. The community came together and figured out a new process and structure.

I’m not the only admin of the fastai repos, so if I disappear, there’s no logistical problem there. And even if there were, it’s simple enough to fork, of course.


@lgvaz and I had a hard time trying to fit the DataBlock API to instance segmentation. In addition, Learner doesn’t work for this problem either; however, it can be solved via inheritance.

I think it would be nice to have a set of learners for different tasks like semantic segmentation, instance segmentation, object detection, salient object detection…

You could pass these learners whatever model architecture you want, so adding more models would be easy.

It would be nice to see PyTorch additions like TorchScript, quantization, and TPU support via torch_xla supported in fastai2.


Would it be possible to get small, contained beginner issues to work on? I have tried to contribute in the past, but mostly failed by questioning the value I was providing.

Actually, if I could get one now, I’ll just start working on it this week.

I have been using fastai2 since October (I also watched the code-alongs/part 1). I’m unfamiliar with the contribution process, and that’s the part I’m most apprehensive about.


@WaterKnight What would you envision TorchScript/fastai2 integration to be used for?

It seems that, with the models being direct PyTorch models, post-training quantization shouldn’t be too hard to use through the PyTorch interface. Quantization-aware training would likely need an interface in fastai2, and would probably be harder to implement as well. However, I am not too familiar with the interface.

PyTorch TPU/XLA is a little more difficult. I attempted fastai v1 integration in Nov-Dec but haven’t had much time for fastai2 integration (I did some preliminary experiments), and I fully intend to investigate this during the summer. I will say this is much harder, and some careful decisions about a separate TPU training loop might have to be made.


Feeling positive about the discussion here so far; it’s reassuring to hear the plans being discussed :hugs:

Adding tags to issues (like “good first issue”, “help requested”, “nlp”, “learner”, “dataloader”, etc.) would be a good start to encourage people to browse issues and jump in. Here on the forums I’m more likely to answer NLP-related threads, as that’s what I’m most comfortable answering; the same would apply to GitHub issues, where I’m (slightly) more likely to be able to contribute.

If there was one feature I could add tomorrow to the library, it would be TPU support, mostly because I think the lack of it is holding back adoption by the Kaggle community, where it seems like more and more competitions now come with TPUs. It would also build on the amazing high-performance work that has been done with fastai in the past (DAWNBench and ImageNet in 18 minutes).

“BDFL”, the world needs a few of these right now :sweat_smile:


I don’t think it is backward compatible (I don’t know whether the names of transforms and things like that are kept); also, I could not pinpoint which concepts were reused from one version to the other.

So yeah, it would be nice to also have a migration guide from fastai v1 to fastai2, either as part of the docs or as a wiki thread.


I think the main difficulty is how large a scope that would encompass, which is why Jeremy and Sylvain redid the part 1 notebooks in the fastai2 repo (see under course/), and why I made my WWF2 (it was meant to be that). I’d be open to ideas on how to frame that and will happily help out :slight_smile:

(since they’re not backwards compatible)


In the past I’ve generally thought that the forum is a better place to discuss issues, but perhaps now that so many people are familiar with GitHub, we should consider taking more advantage of it. I’m certainly open to that.