This topic is for discussing and implementing a fastai benchmark, so that we have a way to detect regressions in fastai's performance. This project is not about comparing fastai to other frameworks.
This dev project is currently at the discussion stage: we are working out ways the benchmark could be implemented.
Here are some useful materials on this topic:
- pytest-benchmark 3.4.1 documentation (a timing sketch based on it follows this list)
- https://pyperformance.readthedocs.io/ (compares different python versions/implementations)
- How to do performance micro benchmarks in Python - Peterbe.com
- GitHub - tsee/dumbbench: More reliable benchmarking without thinking
- Your benchmarks suck! | Steffen Mueller (blogs.perl.org)
- Victor Stinner's series: My journey to stable benchmark, part 1, part 2 (deadcode), and part 3 (average)
- tracemalloc — Trace memory allocations — Python 3.10.7 documentation (a memory-tracking sketch follows this list)
- Pyflame: A Ptracing Profiler For Python — Pyflame 1.4.0 documentation
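To make the pytest-benchmark option concrete, here is a minimal sketch of what a timing test could look like. The `train_tiny_model` workload below is a hypothetical placeholder, not an actual fastai training loop; a real regression test would substitute whatever fastai operation we decide to track.

```python
# test_speed.py -- a minimal pytest-benchmark sketch.
# `train_tiny_model` is a hypothetical stand-in workload; a real
# regression test would run a representative fastai operation instead.
import torch

def train_tiny_model():
    # A small training-like computation: forward pass plus backward pass.
    x = torch.randn(64, 128)
    w = torch.randn(128, 10, requires_grad=True)
    loss = (x @ w).sum()
    loss.backward()

def test_train_speed(benchmark):
    # The `benchmark` fixture (provided by the pytest-benchmark plugin)
    # calls the function repeatedly and records min/mean/stddev statistics.
    benchmark(train_tiny_model)
```

Runs can be saved with `pytest --benchmark-autosave` and compared across commits with the `pytest-benchmark compare` command, or a run can be failed automatically on a slowdown with `--benchmark-compare --benchmark-compare-fail=mean:5%`, which is essentially the regression detection this project is after.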
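Similarly, the stdlib tracemalloc module could cover the memory side of regression tracking. A minimal sketch, with a purely illustrative workload:

```python
# Minimal tracemalloc sketch for measuring the peak memory of a workload.
# The workload below is illustrative only.
import tracemalloc

def workload():
    # Allocate roughly 1 MB in small objects as a stand-in workload.
    return [bytes(1024) for _ in range(1000)]

tracemalloc.start()
data = workload()  # keep a reference so the allocations stay live
current, peak = tracemalloc.get_traced_memory()  # both values in bytes
tracemalloc.stop()
print(f"current: {current / 2**20:.2f} MiB, peak: {peak / 2**20:.2f} MiB")
```

One caveat: tracemalloc only sees allocations made through Python's own allocator, so GPU memory and allocations inside C extensions (e.g. much of PyTorch's tensor storage) would need a different tool.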
This is a wiki post, so please improve it if you have something to contribute.
Later on, this effort should be synced with the automated testing work; see this thread: Improving/Expanding Functional Tests