As @sgugger mentioned, the linked comparison is biased. More importantly, the comparison itself is terrible and reflects a poor understanding of what fastai is. Even when it was published (Aug. 2019), fastai already had a lot of features that weren't "checked".
Under High-performance Computing:
He missed (a quick fastai2 sketch follows this list):
- Gradient accumulation
- Multi-GPU training
- Gradient clipping
- TensorBoard logging
- Printing a summary of parameters, inputs, and outputs
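For reference, here's roughly how these look in fastai2. This is a minimal sketch, assuming `dls` (a `DataLoaders`) and `model` are already defined; the multi-GPU part additionally assumes the script is launched with `python -m fastai.launch`:

```python
from fastai.vision.all import *
from fastai.callback.tensorboard import TensorBoardCallback
from fastai.distributed import *

# Assumes `dls` (DataLoaders) and `model` are defined elsewhere.
learn = Learner(dls, model, cbs=[
    GradientAccumulation(n_acc=8),   # accumulate gradients over 8 batches
    GradientClip(max_norm=1.0),      # clip the gradient norm
    TensorBoardCallback(),           # log training metrics to TensorBoard
])

learn.summary()  # print a summary of parameters, inputs, and outputs

# Multi-GPU training (when the script is run via `python -m fastai.launch`)
with learn.distrib_ctx():
    learn.fit_one_cycle(5)
```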
Under Debugging tools:
He missed (again, see the sketch after the list):
- Built-in visualization
- Logging GPU usage
- Overfitting on a small amount of data
- Validating on a subset of data
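Several of these are one-liners in fastai2 as well. A minimal sketch covering the visualization and the overfit/subset checks (the `path`/`parent_label` dataset layout here is hypothetical, just for illustration):

```python
from fastai.vision.all import *

# Overfit check: build DataLoaders from only the first 64 items
block = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=lambda p: get_image_files(p)[:64],
                  get_y=parent_label)
tiny_dls = block.dataloaders(path)  # `path` is a hypothetical dataset root

tiny_dls.show_batch()  # built-in visualization of a training batch

learn = cnn_learner(tiny_dls, resnet18, metrics=error_rate)
learn.fine_tune(1)
learn.show_results()   # visualize predictions next to targets

# Validate on a subset: pass any DataLoader to `validate`
learn.validate(dl=tiny_dls.valid)
```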
Under Usability:
He missed (sketch below):
- Checkpoint callback
- Early stopping callback
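Both of those are stock fastai callbacks. A sketch with the fastai2 names, again assuming `dls` exists:

```python
from fastai.vision.all import *

learn = cnn_learner(dls, resnet34, metrics=error_rate, cbs=[
    SaveModelCallback(monitor='valid_loss', fname='best'),    # checkpoint the best model
    EarlyStoppingCallback(monitor='valid_loss', patience=3),  # stop after 3 epochs without improvement
])
learn.fine_tune(10)  # may stop early; `models/best.pth` holds the best weights
```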
Honestly, the remaining issues are either already solved in fastai2 or were always easy to handle with a little extra code (except for multi-cluster training).
PyTorch Lightning looks like a great library and it's very interesting, but I am disappointed by the very unfair comparison in that post. In fact, if the comparison were redone fairly today, PyTorch Lightning probably wouldn't offer many features beyond fastai2. Maybe someone who has actually used PyTorch Lightning could share their thoughts as well.