Relationship between torchvision and fastai library

Hi all,

I’ve been trying to reproduce the dog-breeds competition code without using the fastai library (I couldn’t find the notebook, and I hate not understanding the internals of a library, so I made it self-assigned homework). To do this, I’ve been working through the pytorch tutorials, and I’ve found that fastai has a lot of code that is similar in nature to torchvision code but seems to be independently written. So far I’ve seen this with torchvision.transforms, torchvision.datasets.ImageFolder, and (in master) torch.optim.lr_scheduler.CosineAnnealingLR.

I know that fastai wrote a light critique of pytorch a while back, so I’m wondering which code came first, if there are subtle differences I should be careful of, and if there are any plans to submit code back to torchvision (for instance the above and ResNext models).

Thanks in advance!


Fast.ai’s library builds on PyTorch, including PyTorch’s torchvision library; you’ll see torchvision imports in fastai modules. For the features it doesn’t import, some are similar to or rewrites of torchvision code, but others, like the learning rate finder, are unique to fastai, at least for now.
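For anyone curious what the learning rate finder does conceptually: it runs a short training pass while exponentially increasing the learning rate, records the loss at each step, and you pick an LR somewhat below the point where the loss starts to blow up (Leslie Smith’s LR range test). This is just a pure-Python sketch of the idea on a toy quadratic loss with plain gradient descent, not fastai’s actual implementation; the function and parameter names here are illustrative.

```python
def lr_find(grad_fn, loss_fn, w0, lr_min=1e-5, lr_max=10.0, steps=100):
    """Sketch of an LR range test: take gradient-descent steps while
    exponentially growing the learning rate, recording (lr, loss) pairs.
    A reasonable LR is typically an order of magnitude below the point
    where the recorded loss starts diverging."""
    w = w0
    factor = (lr_max / lr_min) ** (1.0 / (steps - 1))  # per-step LR multiplier
    lr = lr_min
    history = []
    for _ in range(steps):
        history.append((lr, loss_fn(w)))  # record loss before the update
        w = w - lr * grad_fn(w)           # one plain gradient-descent step
        lr *= factor
    return history

# Toy example: loss(w) = (w - 3)^2, gradient = 2(w - 3).
history = lr_find(lambda w: 2 * (w - 3), lambda w: (w - 3) ** 2, w0=0.0)
```

On this toy problem the loss shrinks while the LR is in a stable range and then diverges once the LR grows past the stability threshold, which is exactly the curve the real tool plots for you.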

Here’s the lesson 1 notebook: https://github.com/fastai/fastai/blob/master/courses/dl1/lesson1.ipynb

Also, I’m doing the same exercise. It’s been helpful.


I think the only thing we use torchvision for is the pre-trained models. Our transforms are much faster. We wrote our cosine scheduler before pytorch did and ours has a lot more features (most importantly, it has restarts).

Replicating the fast.ai notebooks without using the fastai lib has been tried a few times before, and generally seems to have been effective as a learning exercise, but I don’t think anyone has managed to replicate the results in the notebooks yet. There are a lot of details to get right!
