Plain PyTorch implementation of notebooks

I am thinking of creating a plain PyTorch implementation of the notebooks.


  • fastai is the best library out there right now
  • It decides a lot of good defaults for us. I am sure Jeremy will introduce us to the internals of the library as we progress in the course; still, implementing them from scratch will improve our understanding
  • It will help us dig deeper inside the library, and we can probably take chunks out and explain them to others in the forum
  • There are times when we want to take a specific feature of fastai and use it in some other code, or port some code to fastai; this will help in that process as well

What next?

  • I am not an expert in PyTorch or differentiable programming
  • I will try and implement the notebooks in plain PyTorch as much as possible
  • I would need help from others in the forum who understand programming better than me

Is anyone interested in taking this up? Maybe we could form a group and work together.


I am interested in the aspect of documenting the internals of the v1 library. However, I cannot commit. Our virtual study group for Asia (Singapore + China + Malaysia + India) is planning to port the ML course to the new version of the fastai library. It’s a good exercise that I think will bring benefits when they start the porting work. Maybe they are interested. I can help you ask around.

I think this will be very useful if we can turn it into simple fastai library documentation geared towards the advanced group (i.e. developers). I am thinking of a proper ‘mini docs’ or cheat sheet that documents the internals of fastai’s programming interfaces, core modules, classes, functions, etc.


I am interested. I am also not an expert in PyTorch, but I have a basic understanding of it. I agree it will help us gain a deeper understanding of the library, and I’ll be happy to help implement the library in plain PyTorch.
Thanks for taking this initiative!

As the course progresses, I think you will be given a lot of the tools needed to override and extend the fastai library, including writing CNNs and RNNs at a fairly low “mathematical” level. PyTorch is a very flexible library that is easily extendable on the fly, unlike the “compiled” graphs of other deep learning libraries. That flexibility is one of the strengths of both fastai and PyTorch, so I am sure we will be going over how to implement modules in PyTorch, and I wouldn’t be surprised if this included implementing things at the CUDA level.
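A tiny illustration of that on-the-fly extensibility (the layer here is my own toy example, not something from the course or the fastai library): subclassing `nn.Module` is all it takes to drop a custom layer into any model.

```python
import torch
import torch.nn as nn

class ScaledReLU(nn.Module):
    """A toy custom layer: ReLU with a learnable output scale."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))  # learned during training

    def forward(self, x):
        return self.scale * torch.relu(x)

# Drop it into an ordinary model like any built-in layer.
model = nn.Sequential(nn.Linear(4, 8), ScaledReLU(), nn.Linear(8, 2))
out = model(torch.randn(3, 4))
```

Because the graph is built dynamically at every forward pass, nothing special is needed to register or “compile” the new layer.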

The course is also already packed enough, and implementing “everything” in PyTorch would take a significant amount of work, so I would suggest going through the lessons first.

Also, I would definitely treat implementing things in pure PyTorch as a “stretch” goal. So only implement “important” things, such as, for CNNs: convolution layers, ReLU, max pooling, backpropagation. Though I am pretty sure these will all be covered in the lessons near the end of the course anyway.
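For anyone who does attempt that stretch goal, here is roughly what the forward passes of those pieces reduce to (a NumPy sketch of my own, ignoring channels, padding, stride, and the backward pass):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kernel):
    """Naive valid convolution (really cross-correlation, as in deep learning)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def maxpool2d(x, size=2):
    """Non-overlapping max pooling; assumes dimensions divide evenly."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x.reshape(h, size, w, size).max(axis=(1, 3))
```

Real implementations vectorize the convolution (im2col) instead of looping, but the arithmetic is exactly this.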

I feel that implementing all of this would get in the way of simply running your own experiments at this point. Near the end of this course and the beginning of part 2 is where understanding all of this becomes more important, which is when I think we will be extending fastai to solve our own problems.


I am also interested in doing this. It would be a great learning experience. Count me in.

As a programmer, I’m really interested in understanding the internals of FastAI too. While I don’t have the bandwidth to help with recreating the course notebooks in PyTorch, I am currently doing a weekly live webinar where I walk through the Jupyter notebooks used during the implementation of FastAI V1.

You can find it here (new video added every Saturday):

I would be happy to help with code review, idea discussion and clarification of doubts regarding FastAI internals.


@srmsoumya I am also a part of the team with Cedric, and we are looking at porting the ML course to the new v1 library. However, I agree with @marii that it is a very time-consuming activity. I have the same desire from the point of view of understanding fastai, and I have been following @aakashns webinars on FastAI internals. I suggest that you look at those webinars and then start with a small scope. The scope could be proper documentation, as @cedric suggested. Start with something small and then expand; that will be fulfilling as well as rewarding.


That’s a great idea, and I have this on my TODO list.

Though I’m interested in this, I cannot commit to it full time, but I will chip in intermittently.

@cedric It will help if you check with the folks in the group who would be interested in this as well.

Agreed on creating the mini-docs thing during the process as well.

Thanks @marii

I am basically looking to implement the stuff that is hidden beneath the library. To give an example:

  • 1 Cycle Policy
  • Image Augmentation

There are clever little hacks inside the library which make it perform slightly better than any other library out there. This might not be useful for beginners, but for folks who took this course last year, I think it will help improve their understanding of fastai, PyTorch, and DL in general.
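To make the 1cycle policy concrete: at its core it is just a scheduling function. Here is a hedged sketch of the linear-ramp variant (fastai’s actual implementation interpolates with cosines and also anneals momentum in the opposite direction; the parameter names below are my own):

```python
def one_cycle_lr(step, total_steps, max_lr, div_factor=25.0, pct_start=0.3):
    """Linear warm-up from max_lr/div_factor to max_lr, then linear decay back."""
    warmup_steps = int(total_steps * pct_start)
    min_lr = max_lr / div_factor
    if step < warmup_steps:
        frac = step / max(warmup_steps, 1)       # 0 -> 1 over the warm-up
        return min_lr + frac * (max_lr - min_lr)
    frac = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return max_lr - frac * (max_lr - min_lr)     # 1 -> 0 over the decay
```

Feeding this value into the optimizer’s learning rate each batch is all the “policy” is; the cleverness is in the chosen shape and the momentum coupling.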

Looks great!

I was thinking of doing the same, but probably after this course ends, since it’s already a handful to follow the course, work full time, and try out something with my own datasets.
If you’ll start this project sooner, I’d like to follow and maybe join closer to the end of the live course.

That sounds interesting from an educational point of view.
I’ve started something similar in this repo:

It is just a bunch of PyTorch scripts and a couple of classes. I wanted to implement something simple, like a fastai_core that would include the training loop and callbacks only, so we could use an arbitrary iterable, dataset, augmentation library, etc.

However, it is quite a time-consuming task :smile: The fastai library includes a lot of very advanced stuff, especially when it comes to RNNs and YOLO-like detectors, which are not simple to replicate. It is not too difficult to implement a basic training loop and callbacks (there are a lot of examples in the PyTorch docs), but getting the same loss/accuracy values as the fastai code is probably not that simple.
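For what it’s worth, the callback mechanism itself is framework-agnostic. A minimal sketch (my own naming, not fastai’s actual API) is just a loop that fires hooks and shares mutable state with them:

```python
class Callback:
    """Base class: subclasses override only the hooks they care about."""
    def on_epoch_begin(self, state): pass
    def on_batch_end(self, state): pass
    def on_epoch_end(self, state): pass

class LossHistory(Callback):
    """Example callback: records the loss after every batch."""
    def __init__(self):
        self.losses = []
    def on_batch_end(self, state):
        self.losses.append(state["loss"])

def fit(epochs, batches, train_step, callbacks=()):
    """train_step takes a batch and returns its loss; callbacks observe state."""
    state = {"epoch": 0, "loss": None}
    for epoch in range(epochs):
        state["epoch"] = epoch
        for cb in callbacks: cb.on_epoch_begin(state)
        for batch in batches:
            state["loss"] = train_step(batch)
            for cb in callbacks: cb.on_batch_end(state)
        for cb in callbacks: cb.on_epoch_end(state)
    return state
```

Everything else (schedulers, metrics, early stopping) can then be expressed as one more `Callback` subclass, which is the design idea fastai’s training loop is built around.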

I think that a “bi-directional” approach to learning is a good thing: when you do real data science (competitions, data analysis, building apps and solutions), use something proven and tested, extend the library, etc. But when you feel ready to dive deeper, start implementing things from scratch, or at least with fewer levels of abstraction.

Anyway, I would be glad to help with contributing to fastai itself, or any other interesting initiative within the PyTorch ecosystem :wink:


@pnvijay is part of our Asia study group. I think Vijay will be the best person to check with for the porting ML course project. A few of us are also studying the ML course right now. We have @PegasusWithoutWinds and @Taka who I think will be interested in this as well.

IMO, a smarter way to spend time is to pick a small piece (like the basic training loop) from the full fastai library, give your best attempt at implementing it from scratch using Python and PyTorch, and write proper documentation along the way. I think this should be more manageable. It is even useful for beginners: it will help them internalize their learning.

Update 1:

To give an example of the kind of ‘mini-docs’ that I find useful:


I think this is a nice idea and something I’m keen to do. I definitely agree on starting simple though, for example just understanding what is imported and what depends on what. Then as you build from a simple resnet34 you can compare impact on performance of each part as you add it.

Very interesting topic! Count me in. I tried to replicate the tabular module in pure Python. At this time, I think this module is less challenging than the others (vision, text). Now I am trying to dig deeper into vision and the optimizers.
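The tabular module is a good starting point because the model itself is small. A rough sketch of the idea (my own simplified version, not fastai’s actual `TabularModel`, which also adds batch norm and dropout): one embedding per categorical column, concatenated with the continuous features and fed to a small MLP.

```python
import torch
import torch.nn as nn

class TabularNet(nn.Module):
    """Simplified fastai-style tabular model."""
    def __init__(self, emb_sizes, n_cont, out_sz, hidden=64):
        super().__init__()
        # one embedding table per categorical column: (cardinality, emb dim)
        self.embeds = nn.ModuleList(nn.Embedding(c, d) for c, d in emb_sizes)
        n_emb = sum(d for _, d in emb_sizes)
        self.layers = nn.Sequential(
            nn.Linear(n_emb + n_cont, hidden), nn.ReLU(),
            nn.Linear(hidden, out_sz),
        )

    def forward(self, x_cat, x_cont):
        # look up each categorical column, concat with continuous features
        x = torch.cat([e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        return self.layers(torch.cat([x, x_cont], dim=1))

# e.g. two categorical columns (cardinalities 10 and 5), three continuous ones
model = TabularNet([(10, 4), (5, 3)], n_cont=3, out_sz=2)
out = model(torch.randint(0, 5, (8, 2)), torch.randn(8, 3))
```

Most of the real work in the tabular module is in the preprocessing (categorifying, filling missing values, normalizing), not in the network.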

@srmsoumya The fastai library is pretty modular, and you can take any piece of code and use it with your custom models. Jeremy will definitely explain some of the internals and encourage us to understand the structure, choose a problem (classification, recommendation, LM, sentiment analysis, etc.), and solve it with the library.

I like this idea. It’s already been done by Jeremy. Basically, the whole documentation is generated from notebooks (Link).
Every function and method is there. Have a look.

Glad you found it useful. When I initially wrote it, I really didn’t know where I was going, but I definitely learned a lot about where the loss function is determined in the new fastai library. I usually start a new post whenever I find something I don’t have a good grasp on, and if I am able to solve it or ask a well-thought-out question, I will post it.

I usually learn more when I am explaining what I have learned. I have actually searched for errors in the forums and ended up on posts that I had started and forgotten about, so they can save your future self some headache time too.


I implemented SGDR and Snapshot Ensembling in PyTorch while working on one of my personal projects.
You can find the code here:
The code is documented and easy to understand.
Also, if anyone is interested in the project and wants to know more, I have written a blog post. You can find it here:
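For context, the heart of SGDR is a cosine-annealed learning rate that jumps back up at each restart. A sketch of just the schedule math (parameter names are mine; `t_mult=2` gives the common variant where each cycle is twice as long as the last):

```python
import math

def sgdr_lr(step, lr_max, lr_min=0.0, cycle_len=10, t_mult=2):
    """Cosine annealing with warm restarts: lr falls from lr_max to lr_min
    over each cycle, then resets; cycle length grows by t_mult each time."""
    length = cycle_len
    while step >= length:           # find which cycle this step falls in
        step -= length
        length *= t_mult
    frac = step / length            # position within the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * frac))
```

Snapshot Ensembling then amounts to saving a model checkpoint at the end of each cycle (just before the restart, when the loss is in a local minimum) and averaging the snapshots’ predictions at inference time.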


I’m doing the same thing myself on different parts of the library. Like @noskill said, the official docs already have all the notebooks created, and there are over 30 notebooks there. I think one way to be helpful to the docs at this point is to go through the notebooks and see if there are any errors (I’m sure there will be).

I think at this point re-implementing in PyTorch is mostly for our own understanding, and it’s an absolutely necessary step if we want to get a solid grasp of these tools. One can always dig deeper than PyTorch if re-writing in PyTorch is not enough :wink: If we find something that’s missing from the library, we’ll do pull requests to add/update.

What do you think?
