I keep seeing pytorch-lightning coming up more and more in articles I read, demos, and tweets from folks I follow. I don’t know much about it yet or how it compares to fastai2 (the only thing I’ve been able to find comparing the frameworks is over a year old).
So I’m curious, for those experienced with the latter:
What are its pros and cons when compared to v2?
Has there been any integration between the two that has proved helpful? For example, can fastai2 be used to prepare DataLoaders objects that can be fed into the pytorch-lightning training pipeline?
What has your experience been in terms of training and deployment with the two?
As @sgugger mentioned, the linked comparison is biased. More importantly, the comparison is terrible and shows a lack of understanding of what fastai is. Even when it was published (Aug. 2019), there were already a lot of features fastai had that weren’t “checked”.
Under High-performance Computing:
Print param summary, inputs, outputs summary
Under Debugging tools:
Log GPU usage
Overfit on small amount of data
Validate on subset of data
Early stopping Callback
Honestly, the rest of the issues are either already solved in fastai2, or were already very easy to do with a little bit of extra code (except for multi-cluster training).
PyTorch Lightning looks like a great library and it’s very interesting, but I am disappointed by the very unfair comparison made in that post. In fact, if the comparison were done fairly today, there probably aren’t many features PyTorch Lightning has that fastai2 lacks. Maybe someone who has actually used PyTorch Lightning could share their thoughts as well.
For the summary, check learn.summary() and dblock.summary(). The rest are in callbacks (gradient accumulation, checkpointing, early stopping); the logger is in there too, in the other callback notebooks. And if you prefer HTML, look at https://dev.fast.ai (the docs) — each notebook has been generated into a documentation webpage.
It’s been interesting to see the push Lightning has made recently. I think it’s partly related to people’s reluctance to spend time learning a whole new API versus dabbling with a more lightweight one.
I haven’t used it, but from my (shallow) understanding, TPU support is the main thing it has that fastai v2 lacks (for now); for the rest, I believe fastai has it more or less covered.
Recently I have used fastai2 and pytorch-lightning in some small projects. I find that fastai2 is a useful toolset that you need to take time to learn, while pytorch-lightning is more a way to organize your PyTorch code. If you want to build a model from scratch, I prefer fastai2 because it offers great tools for dealing with datasets. If you want to modify someone else’s project for an experiment, I prefer pytorch-lightning, because I get a better understanding of the project after moving it to pytorch-lightning.
The only thing that makes intermediate users prefer pytorch-lightning over fastai2 is that pytorch-lightning is just plain Python + PyTorch, whereas fastai2 comes with its own API design, and even after looking at the source code, fastai2 still looks mysterious. fastbook has one chapter explaining the mid-level API; it would have been great if there were more chapters like it, or similar resources explaining the API in detail.
Lightning is built for professional researchers using PyTorch.
fastAI is built for students for the fastAI course (with PyTorch).
Seems to trivialize fastai as a library fine for learning if you are taking the fastai course … but that’s about it. If you want to do real research, I guess you use PyTorch Lightning or write everything out by hand??
I’d agree. I’ve done plenty of research with fastai (literally for the last year and a half), and I’ve also converted a few professors at my university to using fastai instead of Keras for their research with their students.
You can’t get an impartial comparison from the person who built one of those libraries, which is why I’m personally refraining from commenting.
I’d be curious to have the feedback of @lgvaz, since I think I saw on Twitter that he used PyTorch Lightning for a project on object detection.
I am a core developer of the unofficial fastai2 audio extension and have also recently experimented with pytorch-lightning in one project, so I’ll try to do an unbiased comparison of the two libraries.
First of all, note that they have different purposes. Lightning is a lightweight library focused on the training loop; it tries to make the engineering aspects (like logging or distributed/TPU training) trivial while giving full flexibility in the way you write your research code. On the other hand, fastai is a powerful framework that tries to integrate best practices into all aspects of deep learning training, from data loading to architectures to the training loop itself.
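To make the “Lightning organizes your training loop” point concrete, here is a library-free sketch of the pattern: model-specific logic lives in one class with well-defined hooks, and a generic loop drives it. The hook names mirror Lightning’s real LightningModule hooks (training_step, configure_optimizers), but this toy class, its scalar parameter, and the hand-computed gradient are all invented for illustration — no torch, no lightning.

```python
class ToyModule:
    """Toy stand-in for a LightningModule: fits w to minimize (w - target)^2."""

    def __init__(self, target=3.0):
        self.w = 0.0
        self.target = target

    def training_step(self, batch):
        # Return the loss for this "batch" and its gradient w.r.t. w
        # (computed by hand here; a real module would use autograd).
        loss = (self.w - self.target) ** 2
        grad = 2 * (self.w - self.target)
        return loss, grad

    def configure_optimizers(self):
        # In Lightning this returns real optimizers; here, just a learning rate.
        return {"lr": 0.1}


def fit(module, steps=100):
    # The generic trainer loop: it knows nothing about the model,
    # only the hook contract -- which is the organizational win.
    opt = module.configure_optimizers()
    loss = None
    for _ in range(steps):
        loss, grad = module.training_step(batch=None)
        module.w -= opt["lr"] * grad
    return loss


m = ToyModule()
final_loss = fit(m)
```

The point of the structure is that `fit` could train any class exposing the same hooks, which is roughly what Lightning’s Trainer does for you at scale.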
Pros about fastai2:
Modern best practices are constantly implemented
Datablock API is wonderful to load data
Huge variety of applications ready to be used
Cons about fastai2:
The library is strongly opinionated about how things should behave, down to the level of changing how Python works. This introduces a huge amount of friction when you want to do something new or different.
It’s hard to integrate with other libraries in the pytorch ecosystem. More than once I’ve seen people reimplement code in fastai2 because it’s easier.
Error messages in fastai2 have really degraded from fastai1. They are often a couple of pages long, and it’s hard to tell where the problem actually is.
Pros about pytorch-lightning:
No friction at all to use with other libraries. Just import them and do your thing, no need to use wrappers or rewrite code.
Automatic experiment management is great: you just run the code a bunch of times with different hyperparameters and fire up TensorBoard to easily compare results.
Larger community of core and active contributors
Plain pytorch, code is simple to understand and reason about
Cons about pytorch-lightning:
Mixed precision training is currently tied to NVIDIA Apex (waiting on the official torch.amp to stabilize)
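The experiment-management pro above boils down to a simple pattern: run the same training code over a grid of hyperparameters and collect the results in one place for comparison (Lightning would write per-run TensorBoard logs instead). A library-free sketch, where the gradient-descent “experiment” on a toy loss is entirely invented for illustration:

```python
def run_experiment(lr, steps=50):
    # Toy "training run": gradient descent on f(w) = (w - 2)^2.
    # Stands in for a real run; only the final loss gets "logged".
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 2)
    return (w - 2) ** 2

# The sweep: one run per hyperparameter setting, results collected
# so they can be compared side by side afterwards.
results = {lr: run_experiment(lr) for lr in (0.01, 0.1, 0.5, 1.1)}
best_lr = min(results, key=results.get)
```

Note how the sweep makes the bad setting obvious: lr=1.1 diverges (huge final loss) while lr=0.5 lands on the minimum, which is exactly the kind of comparison the TensorBoard dashboards give you across real runs.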
To add to this, I remember someone mentioning that his company was trying to decide between the PyTorch and fastai v1 routes. They ended up going with PyTorch for easier maintainability (converting to fastai v2 down the road was an unknown). As said, each has pros and cons, and users should figure out their own needs.