Update on fastai2 progress and next steps

Thanks very much for this update Jeremy!

I’m one of the folks who is both deeply invested in the library and community, and also a bit nervous about its future (you can read about both my concerns and those of other fastai devs here).

I don’t have a bunch of questions so much as recommendations, based on my 20+ years in the software development game and 3 years (I think) with fastai. Would love to get your thoughts.

1. Establish a set of core maintainers for the library who define the vision and the API, and work together to approve and merge PRs into the fold.

Having you tied up with the masks work and then Sylvain leaving for huggingface is reason enough to see the benefits of having more core contributors, but there are others. For instance, I know it’s gotta be stressful as hell trying to build this thing with just you and one other person … and it may even slow down its progress. And while I know that this is your baby, there are a bunch of us who share your vision and would love to be a part of this. I think this is a net positive for the library, for you, and for the entire community.

Imagine if, instead of having to bear this burden yourself, there were a team of folks who could shoulder it alongside you and work with you to shape the library into something durable. A team of folks who are opinionated, willing to argue, willing to debate, and willing to work with one another to define, build, and support the library. It’s as much a win-win as I think you can get.

2. Adopt a more traditional coding standard.

I hope you’re not rolling your eyes on this one :slight_smile: I have to mention it because it’s probably the one thing that bothers me most about the framework. The dense, pack-as-much-as-you-can-onto-a-single-line coding style doesn’t follow any recommended practices I’ve ever been exposed to … and I’ve coded in quite a few languages. Thus, I would recommend moving toward a more readable, standard (while not overly restrictive) coding style. I think this would go a long way toward gaining adoption from the software development community you want to help grow the library.
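To make the style point concrete, here’s a hypothetical illustration (a made-up helper, not actual fastai source) of the same function written in a dense one-liner style versus a more conventional layout:

```python
# Dense style: defaults, unpacking, and the return all packed onto one line.
def scale_dense(xs, lo=None, hi=None): lo,hi = (lo if lo is not None else min(xs)),(hi if hi is not None else max(xs)); return [(x-lo)/(hi-lo) for x in xs]

# Conventional style: one idea per line, with a docstring.
def scale_readable(xs, lo=None, hi=None):
    """Scale the values in xs to [0, 1] using the given or observed bounds."""
    if lo is None:
        lo = min(xs)
    if hi is None:
        hi = max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```

Both behave identically (`scale_readable([0, 5, 10])` gives `[0.0, 0.5, 1.0]`), but the second version is far easier to scan, diff, and step through in a debugger.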

3. Reduce the amount of indirection in the library and the resulting cryptic error messages that usually only Sylvain can explain/resolve.

When Sylvain left my first thought was, “Oh f**k, who’s going to interpret the error messages now?” (that is, when folks include them at all). You mentioned this above so I won’t belabor the point.

4. More integration with SOTA libraries like huggingface, Ross Wightman’s plethora of pre-trained models, GBDTs like xgboost/catboost, etc.

I like ULMFiT and am using it professionally, but w/r/t NLP, there are so many things you can do with huggingface transformers that I, and others here, have started creating our own libraries to bridge the gap (e.g., blurr is a library I created for this exact purpose). I think these things need to be included in the library so folks don’t have to go searching for them (and maybe not find them) when they are such common integration points in their respective domains.

Which libraries would fastai integrate with? I think that would be the purview of the core contributor team (see #1 above). Huggingface is just an example of one I think must be part of that integration package.

In summary, almost everything I’ve learned about deep learning, I’ve learned here from you. This is your baby and ultimately your call on what you want to do going forward. I hear your vision … I share it … and I think there is a lot to gain by building a team around it: for “it”, for you, for your family, and for your ability to keep doing the things you love and from which many of us here have benefited.

-Wayde
