This topic is intended to surface gaps in our automated tests and to coordinate work around closing them.
The goal is to add functionality tests, using coverage as an indicator of which areas haven't been tested yet (and not to just drive the coverage number up).
That said, the goal is also that you find tests that make sense for you, as in "it's fun" or "hey, I'm going to learn something". So take the list below as a suggestion only and make sure you pick what works for you.
First, make sure you can run the fastai test suite and its individual tests; it's very easy to accomplish following the quick guide: https://docs.fast.ai/dev/test.html#quick-guide
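For example, these are typical pytest invocations from the root of the fastai repo (illustrative only; the quick guide above is the authoritative reference, and test_train.py is just one example file):

```
pytest tests                            # run the full test suite
pytest tests/test_train.py              # run a single test file
pytest -sv tests/test_train.py -k fit   # run tests matching a keyword, verbosely
```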
Important notes on tests:
- they must be smart: improving coverage just for the sake of coverage isn't the goal. We also run the full doc notebooks as a test before a release, which probably gives really good coverage, but that doesn't necessarily catch subtle bugs
- they must be stylish: tests should abide by the fastai style guide
- they must be fast: no downloading of torchvision models and no training, apart from the 4 current integration tests test_bla_train (one in vision, one in text, one in tabular and one in collab); anything slower must be marked as slow (but we'd prefer fast tests)
- most should be fake: other than the 4 integration tests in test_bla_train, we always use fake_learner from tests/fakes.py, where you will also find methods for creating fake data (see the sketch after this list)
- they must be registered: with this_tests; use pytest -sv --testapireg to check on this, and see examples in test_train.py. Find more details here: https://docs.fast.ai/dev/test.html#test-registry
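To make the last two points concrete, here is a minimal sketch of what such a test could look like. It assumes fake_learner can be called without arguments and that this_tests is importable from fastai.gen_doc.doctest; check tests/fakes.py and tests/test_train.py for the real signatures and patterns:

```python
from fastai.gen_doc.doctest import this_tests
from fakes import fake_learner  # fake model + fake data keep the test fast

def test_fit_runs():
    learn = fake_learner()   # assumed no-arg call; see tests/fakes.py for the real signature
    this_tests(learn.fit)    # register the API under test (checked by pytest -sv --testapireg)
    learn.fit(1)             # a tiny fit on fake data, not real training
    assert learn.recorder is not None  # illustrative assertion only
```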
The simplest way to help is to just submit PRs (fastai - git Notes) with new tests, following the guidelines above.
Note that by the time you read this, the specific coverage percentages in the table below will likely be outdated. The suggested process, therefore, is to re-run:
`make coverage` or `coverage report -m`
to create an HTML report / find the specific missing lines, and then update the numbers. Add your name and, if needed, update the column comments and the corresponding test classes you want to work on. With some coordination, it should be possible for several developers to work on one class.
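For reference, a typical coverage.py workflow looks like this (these are standard coverage.py commands, not fastai-specific ones; the repo's `make coverage` target likely wraps something similar):

```
coverage run -m pytest tests   # run the test suite under coverage
coverage report -m             # terminal report, with missing line numbers
coverage html                  # write an HTML report to htmlcov/
```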
Find additional information on testing here:
- https://docs.fast.ai/dev/test.html#writing-tests
- how to run coverage reports: https://coverage.readthedocs.io/en/v4.5.x/
Writing tests is an excellent way for coders to understand a code base.
Thanks a lot for helping!
Incoming tasks to be sorted
- we need some speed regression tests to make sure the core isn't getting slower (timeit-style tests); see the sketch below
Remark: there is an open thread for benchmarking: Benchmarking fastai - #2. At some later point, these two efforts should get synced, and the test effort enriched by benchmarking and performance tests.
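As a rough illustration only, a timeit-style speed regression test could look like the sketch below; the function under test, the use of fake_learner, and the 5-second budget are all placeholders, not agreed-upon values:

```python
import time
from fakes import fake_learner  # fake model + data from tests/fakes.py

def test_fit_speed_regression():
    learn = fake_learner()        # assumed no-arg call, as in the sketch above
    start = time.perf_counter()
    learn.fit(1)                  # tiny fit on fake data
    elapsed = time.perf_counter() - start
    # hypothetical time budget: fail if this suddenly gets much slower;
    # a real version would need a stable machine or a relative baseline
    assert elapsed < 5.0
```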
Sorted tasks
Note:
* tasks are listed in order of priority; please check with the coordinators when picking a task
* in case a task you would like to work on is already picked, please check with that person; help is welcome
* each module should be tested in a test file with a corresponding name, e.g. test_vision_transform for vision.transform
Module | Test Classes | Status | Comments | Change Required | Developers |
---|---|---|---|---|---|
basic_train | test_basic_train | ongoing | … | fit done | ? |
basic_data | test_basic_data | Done (mostly ;-)) | see some TODOs in code comments | follow example in test_callbacks_csv_logger | ? |
train | test_train.py | ongoing | … | fit_one_cycle done | ? |
tabular/data | test_tabular_data | Assigned | … | follow example in test_callbacks_csv_logger | ? |
layers | test_layers | ongoing | … | follow example in test_callbacks_csv_logger | ? |
callback | test_callback (add test_callbacks_xxx as appropriate) | Done (Stas Bekman) | … | follow example in test_callbacks_csv_logger | Stas / ? |
… | tests/test_metrics.py | Assigned | … | | Stas |
… | tests/test_lr_finder.py | Assigned | connected to test_train.py | | ? |
vision/gan.py | add | Assigned | … | add new test class | |
vision/models/unet.py | add | Assigned | … | | Young |
vision/learner.py | add? | Assigned | … | | Young |
/text/learner.py | test_text_data / test_text_train? | Unassigned | corresponding test class? | | |
/callbacks/loss_metrics.py | test_callbacks_hooks / test_callback_fp16? | Unassigned | corresponding test class? test_callbacks_hooks or test_callback_fp16? | | |
/callbacks/general_sched.py | | Unassigned | see fastai/callbacks/loss_metrics.py | | |
/text/transform.py | | Unassigned | corresponding test class? | | |
/collab.py | test_collab_train? | Unassigned | corresponding test class, test_collab_train? | | |
/tabular/transform.py | test_tabular_transform | Unassigned | corresponding test class: test_tabular_transform, but no class test_text_transform? | | |
/metrics.py | test_metrics | Unassigned | | | |
/vision/data.py | test_vision_data | Unassigned | corresponding test class? | | |
/data_block.py | test_vision_data_block | Unassigned | what is the corresponding test class: should there be a dedicated one, or is this tested indirectly via other classes? | | |
/vision/transform.py | test_vision_transform | Unassigned | | | |
/datasets.py | test_datasets | Unassigned | | | |
/vision/image.py | test_vision_image | Unassigned | | | |
/callbacks/hooks.py | test_callbacks_hooks | Unassigned | see fastai/callbacks/loss_metrics.py and fastai/callbacks/general_sched.py | | |
… | … | … | … | … | … |
vision/cyclegan.py | | not ready | | add new test class | |
/text/qrnn/forget_mult.py | | not ready | | | |
/text/qrnn/qrnn.py | | not ready | | | |
/text/qrnn/qrnn1.py | | not ready | | | |