The vision of this study group is to start with the top-level DataBlocks API and unravel each of the layers one by one until we hit the lowest layers.

So far we’ve looked at TfmdLists, Datasets, DataLoaders, TfmdDL and Pipelines, and we have also implemented a new Callback. The videos are shared below.

Next upcoming Zoom call: Saturday, 7:30 AM IST. Link: to be added.

This venture will serve three main purposes:

  1. We experiment with values other than the defaults, including different schedulers, optimisers and model architectures, and get an intuition about what works and what doesn’t
  2. Get the capability to implement new research papers and ideas using the fastai library
  3. Learn some excellent software engineering tips and tricks from the fastai source code and take those ideas back to our workplaces/Kaggle etc

We will also be working for the community and releasing all videos as a group project, so that those who join in the future and want to learn about the code and its underlying layers can do so simply by reading the blog posts/project summary or watching the videos.

Please join us in our journey into the fastai lower-level layers :slight_smile:

You don’t need to be an expert in Python or deep learning; we definitely aren’t, but we wish to be.

Code Walkthrus:

  1. Data.External: looks at untar_data, download_url, download_data, URLs and the Config class

  2. LESSON-1: Walkthrough of fastai V2 lesson-1 by @muellerzr

  3. Five ways to debug fastai Different approaches to explore and step through the code by @slawekbiel

  4. The DataLoader, commented: Link to forum post by @akashpalrecha

  5. DataBlock Overview, Categorize block and CategoryMap

  6. Datasets, Dataloaders and TfmdDL source code walkthru

  7. Complete Pipelines and detailed Datasets

  8. TfmdLists complete walkthrough

Schedule for SG:

Week 1: Study the PETS tutorial and write a blog post about it with personal touches and twists. Also, participate in FastGarden using Fastai V2.

This is the central blog, based on fastpages, where you can contribute: Github Repo, Blog
Make sure to upload your post to your own blog first, and then to the above repo. You are the author and you deserve credit first!


@muellerzr: fastblog

@arora_aman Making commits each day, towards a better future



@arora_aman this is a great coincidence (sort-of)
I’m very much familiar with FastAI V1’s source code. I also took a peek at all the notebooks for this year’s course and felt that the best use of my time would be to take every notebook and, for each “abstract” block of code (where you go from downloading a dataset to having trained the first model in just a few lines), do the following:

  1. Follow each function call through the source code
  2. Understand the control-flow, and write comments where needed to make things explicit
  3. Hopefully, write a blog post explaining this (for advanced users, of course)

I thought that this would be a good way to “soft-document” the code, build a practical understanding of the most immediately useful parts of the source-code, and in general just get more comfortable with the library to be able to customize it to our own needs.

Anyways, this was what I was thinking this morning.
What exactly do you have in mind? When you say you want to beat the defaults, do you mean that you want to beat the metrics of the models we train using fastai2 or do you also want to beat the speed at which things happen? (data loading, transformations, etc.)


Wow, looks like we’re headed in the same direction and I think we should definitely collaborate. I am of the same opinion that looking into each of the functions would be a great way to learn about the v2 source code, and it will definitely help us in the future should we need to customise the code.

In fact, I started with untar_data and you’d be surprised to know the amount of classes and function calls that are made to make everything work for us so easily.
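For anyone curious before watching the walkthrough, the core pattern behind untar_data (cache the extraction, skip the work on a second call) can be sketched in a few lines of plain Python. This is only a toy illustration, and `untar_data_sketch` is a made-up name, not fastai’s actual implementation:

```python
import tarfile
from pathlib import Path

def untar_data_sketch(archive: Path, dest: Path) -> Path:
    """Toy version of the cache-then-extract pattern:
    extract `archive` into `dest` only if that hasn't been done yet."""
    name = archive.name
    for suf in (".tar.gz", ".tgz"):
        if name.endswith(suf):
            name = name[:-len(suf)]
    out = dest / name
    if not out.exists():               # already extracted -> reuse the cache
        with tarfile.open(archive) as tf:
            tf.extractall(dest)
    return out
```

The real untar_data additionally resolves URLs, downloads, verifies sizes, and stores everything under a Config-managed cache directory, which is exactly where the surprising number of classes and function calls comes from.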

I think we should start by looking at different sections of the code, write about it and join it together as one massive blog post.

I’m sure in the journey of doing this, we’ll actually be soft documenting the whole library and find bugs and contribute to the library and the community in a very healthy manner.

As for “beating the defaults”, initially I want to start with beating the accuracy metric on the pets dataset. Tbh, this is more about being able to start experimenting with different things to develop a more thorough understanding than about just beating the metric. I really want to understand deep learning this year, and part of that is being able to reimplement research papers and experiment with different combinations of schedulers/optimisers etc. I don’t know much about speed optimisation but I’m very willing to join hands with you and learn more about it.


I think that eventually, by the time we finish, we could also take inspiration from Kaggle and implement and try out different research papers. Maybe, just maybe, there might be something out there that works better. Even if not, we would then know why the defaults are what they are and would have developed a very thorough understanding of “what works” and “why”.


Wow! This is something I have been looking for personally in order to get a better understanding of the code behind the higher-level APIs and of the nuances of training in general. The idea of falling in love with software engineering, as mentioned by Jeremy, has rung very true with me since last year’s course, and I would love to hone my SE skills as well as gain a better understanding of the architectures and training processes along the way.

I still consider myself very much a beginner in the practical aspects of DL, but I hope to change that this time by experimenting more with the notebooks provided and peeking under the hood to gain more intuition! Hopefully, this study group can help guide people like me in the right direction! :upside_down_face:


Wonderful idea! My own $0.02 on how to go about this: I’d just walk through the source code notebooks, not the .py files. It’s more efficient, and it helps you learn how to take the location from ?? and point to the exact notebook it’s running off of :slight_smile:
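As a side note, the location that `??` reports comes from Python’s standard inspect module, so the same lookup can be done in plain code. Here `json.dumps` stands in as an arbitrary example target, not a fastai function:

```python
import inspect
import json

# IPython's `??` is essentially inspect.getsource plus the file path;
# with fastai v2, that path is what you map back to the nbdev notebook.
print(inspect.getsourcefile(json.dumps))              # where the function lives
print(inspect.getsource(json.dumps).splitlines()[0])  # first line of its source
```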


Okay, this is a great idea! I’m very curious to see how Fastai V2 was implemented and how things are the way they are.
I think @muellerzr’s idea is great. Fastai V2 was developed with nbdev, so we should work our way from the source notebooks; the .py files are just automatic exports.
When do you guys plan to start and at what time?


Doing this is sometimes a lot of fun. For me, going down the source code path for fastai’s unet_learner (V1) proved to be quite interesting.
I guess we should probably see how to setup a fork of the library where we do not change the source code, but keep collaborating and adding comments on new lines that explain the functionality and any insights discovered. We should do this in a way so that it becomes easy to keep the fork in sync with the main repo and add our own comments at the same time.
I’m thinking it might just be easier to edit the source code and export those changes back to the notebooks to avoid merge conflicts, etc.

In the end, maybe we can write posts explaining in detail how a lot of the inner functionality can be used to rebuild things.
I wrote a blog post for matplotlib that attempted something similar with the Figures, Axes and GridSpec objects. We could either go for something like that, or maybe something else if others have recommendations.

EDIT: This is the blog post:


Sounds like a great idea!
Indeed, we have to find a way to easily comment the code while still being able to update the library.


The heading “Source code study” makes me think about the 10 code walkthru videos.

Is the “how” of this exact control flow more important than the “why”, the rationale behind the design? Do you intend to write a user manual, a “design and implementation of fastai v2”, or both?

How is one gigantic document of implementations going to inspire others productively? Why be so ambitious about cramming everything into a single document, I can’t help but wonder?

In one of Jeremy’s interviews, he said he hoped that in 5 years all of this would be thrown away and training an AI model could be done much more easily. Not to mention people’s short attention spans these days: searching through a long post can be quite a turn-off.

Besides, v2 is still evolving; documentation outside of the code base risks becoming obsolete before the monumental single volume is complete and up to date.

So what could we do? I have a couple of thoughts.

Isn’t the goal of nbdev to have integrated documentation close to the code? IMHO, documenting an implementation detail or choice like “why 0.4 is used here”, if it ever needs explaining, should go hand in hand with the functions, in the library’s notebook source code.

My second thought is more about the true power of fastai v2, as I have glimpsed it so far. It deserves as much learning and appreciation, if not more. Consider this statement:

fastai v2 is a flexible, highly customizable data processing pipeline,
and many stages of which involve matrix calculations on tensors.

— agree? I don’t see the words “artificial intelligence” in it, so its true potential goes beyond that. AI model training happens to be the original target application. Here are the other significant dimensions:

  • its software architecture and design rationale, the result of a heavily iterative refactoring approach (and a labor of love).

  • functional programming styles and idioms

  • the use of Python’s dynamic nature, e.g. dunder hacks and the use of type annotations to achieve multiple dispatch.
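To make that last point concrete, here is a toy dispatcher that reads a function’s type annotations to pick an implementation by argument type. It is a deliberately minimal sketch of the idea, with made-up names, not fastcore’s actual typedispatch:

```python
class TypeDispatch:
    """Toy multiple dispatch keyed on the first argument's annotated type."""
    def __init__(self):
        self.funcs = {}
    def register(self, f):
        # read the annotation of f's first parameter to key the lookup table
        t = next(iter(f.__annotations__.values()))
        self.funcs[t] = f
        return f
    def __call__(self, x):
        for t, f in self.funcs.items():
            if isinstance(x, t):
                return f(x)
        raise TypeError(f"no implementation for {type(x)}")

show = TypeDispatch()

@show.register
def _(x: int): return f"int: {x}"

@show.register
def _(x: str): return f"str: {x!r}"

print(show(3))      # dispatches to the int implementation
print(show("cat"))  # dispatches to the str implementation
```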

And to help others understand and learn about the “source code of fastai v2”, how about a top-down approach like the Part 1 course: build small, manageable examples, tools and applications on and with parts of the library, possibly outside of the AI domain.

Everyone gets to learn from the best practices and wisdom in AI model training and in software engineering, and can catapult their Python skills to an uncommonly advanced level, in a top-down approach, using examples rather than dry commentary on the implementation.
The community can contribute snippets of short and sweet examples that are easier to write, grasp and revisit. A single monolithic long blog post is likely to be far less digestible (if at all).

Consider the “Cookbook” series by O’Reilly.

A cookbook of fun recipes for the fastai v2 library at different levels, some maybe AI-related, some about functional programming + OO wizardry in Python, and some general-purpose data processing pipeline applications leveraging Pipeline, Transforms and CUDA, may draw more interest from audiences and contributors of different backgrounds. These may not be the original goal of fastai v2, but from the 10 walkthrough videos I feel they are incidental yet significant side benefits.
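As a taste of the “general purpose pipeline” recipe idea, composing transforms can be sketched in a few lines of plain Python. This is a toy with invented names, not fastai’s Pipeline class, which also handles setup, decoding and type retention:

```python
class ToyPipeline:
    """Apply a sequence of transforms in order, like a tiny pipeline."""
    def __init__(self, *tfms):
        self.tfms = tfms
    def __call__(self, x):
        for t in self.tfms:   # each transform's output feeds the next
            x = t(x)
        return x

# A non-AI recipe: normalise messy label strings
normalize = ToyPipeline(str.strip, str.lower, lambda s: s.replace(" ", "_"))
print(normalize("  Great Pyrenees "))  # -> "great_pyrenees"
```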

Maybe I’m thinking too wide and wild? LOL :sweat_smile:


Agree. nbdev will make software engineering uncool but great again.

I’ll actually respectfully disagree with you :slight_smile: While yes, it’s not exactly the AI portion specifically, understanding how the source code works underneath has many benefits that directly affect such ideas:

  • Knowing how it works allows for an easier time with custom implementations
  • Understanding why it works and how it came to be the default reveals the authors’ train of thought when building the library
  • And if nothing else, it gets you comfortable when you’re going through and debugging your own issues in the library

True! I would add that it has a lot of software engineering practices that are missing from just about every other DL framework.
If you’ve only done python or a few languages, it’s a great way to expose yourself to different/better ways of thinking about data, objects, and their relationships.
Plus the mix of flexibility and expressibility means that, once you get the basics of it, it’s like the ideas just start flowing. Feels like a domino effect.

This is an excellent idea! I’m watching this for updates.

On the benefits you mentioned: pursue them by all means, we are in violent agreement on their value. I myself, like many others, had to dive deep into v1 to understand the flow.

That means I might have grossly miscommunicated.

I’m concerned about the goal of “a single massively long post” on just the current implementation. I’m more a proponent of learning via short and sweet examples in notebooks, letting people tweak and break things in manageable chunks, than of reading dry, long commentaries of the form “this is how these 10 things work, in 10 paragraphs”. One cannot learn to fly a plane just by reading the thick flight manual.


I understand your worry; however, the library will not be changing much from here on out (I’ve been following it for ~6 months now), so the main ideas will absolutely still be applicable. What changes would be small syntax bits (and even then it’s highly unlikely). Also, upon rereading, we are in agreement; I believe I may have misread too. My apologies :slight_smile:


A massive blog post is probably not a good idea simply because it’ll be way too long. We can instead have detailed, “comfortably long” blog posts on specific modules of the library.


Honestly, since we’re exploring each notebook, why not one on each notebook? (Or family of notebooks) The authorship would be split up but it seems doable. Also if you can’t tell, I love this idea and want to join :smiley:


We’re in the same boat friend :slight_smile:

Thanks @muellerzr! Yes, it’s definitely easier to look at the source code through the notebooks. But sometimes I also find vim easier when searching for a function across different applications, for example when looking for the data_augmentation and untar_data functions: because they live in different notebooks, I personally find vim a little better in that case.

I think 10am PT would be a good start for the general community. It would be 3am AEST, but I am happy to wake up early if that suits the majority. I think we should all jump on a call to decide the specifics.

Excellent idea!

Yes, the source code videos were the inspiration and this study group will build on top of those to also be able to experiment.

Neither; the aim is simply to learn good software engineering practices and get a better intuition of “what works” and what doesn’t in deep learning.

I am sorry if my post made it look like the idea was to write one gigantic document that explains everything. I merely want to start with lesson 1 (6 lines) and write a piece that explains each of those 6 lines, to begin with.

Then we could write follow-up blogs experimenting with different parameters or layers of the fastai API. This is more about asking “why does this really work?” Why is 1e-3 a good learning rate, and why didn’t 1e-2 work as well?
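That learning-rate question can even be felt in a tiny, framework-free example: gradient descent on f(x) = x², where too large a step makes every update overshoot the minimum. This is only an illustrative sketch, not a fastai experiment:

```python
def descend(lr, steps=50, x=1.0):
    """1-D gradient descent on f(x) = x**2; the gradient is 2*x."""
    for _ in range(steps):
        x -= lr * 2 * x   # each update multiplies x by (1 - 2*lr)
    return x

print(abs(descend(0.1)))   # tiny: each step shrinks x, so we converge
print(abs(descend(1.1)))   # huge: each step flips and grows x, so we diverge
```

The same intuition applies in training: a rate that halves the error each step converges fast, while one past the stability threshold makes the loss blow up, and tools like the learning rate finder exist to locate that sweet spot empirically.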


So if I’m understanding this right, we go through each lesson notebook and look at each line and pin it back to explain the source code?
