I’m interested in writing more unit/integration tests for the V1 library and was asking @jeremy for some ideas and guidelines. Creating a thread here so that it can be shared among folks with similar interests and encourage more discussion.
I was thinking about how to improve the unit/integration tests as well. In particular, one thing that came to my mind is to
- inherit from unittest.TestCase and use the self.assert* methods instead of a plain vanilla assert, so that we benefit from the more verbose error messages
Not sure that is required, since the fastai team uses the pytest library, which deeply inspects the code and makes the default asserts very verbose. Also, from my point of view, it is much more “pythonic” and very flexible.
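To make the comparison concrete, here is a small sketch contrasting the two styles. The `ifnone` helper is a re-implementation for illustration (fastai’s core has a function of the same name, but treat this one as a stand-in): the `unittest.TestCase` version gets verbose failures from `self.assertEqual`, while the pytest version gets equally detailed failures from a plain `assert` thanks to pytest’s assertion rewriting.

```python
import unittest

def ifnone(a, b):
    # Illustrative helper: return `a` unless it is None, else `b`.
    return b if a is None else a

# unittest style: verbose failure messages come from self.assert* methods
class TestIfnone(unittest.TestCase):
    def test_returns_default_when_none(self):
        self.assertEqual(ifnone(None, 5), 5)
    def test_returns_value_when_set(self):
        self.assertEqual(ifnone(3, 5), 3)

# pytest style: a plain assert is rewritten by pytest to show the
# same level of detail (both operands, their values, the comparison)
def test_ifnone_plain_assert():
    assert ifnone(None, 5) == 5
    assert ifnone(3, 5) == 3
```

Both files run under `pytest`, so adopting `TestCase` would be additive rather than a migration.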
Thanks for the thread! Here are our docs on testing:
I’d suggest starting with something really small and simple. For instance, pick a little utility function in core.py or torch_core.py that doesn’t have tests, and see how that function is used in the library. Think about what edge cases might break it. Create a notebook for yourself to play around with the function - use different data types, etc. Maybe you’ll find some areas where it doesn’t work correctly.
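As a sketch of what that exploration might look like, here is a hypothetical listify-style helper (loosely modeled on the kind of small utility found in core.py; the implementation here is mine, not the library’s) being probed with different input types in a notebook cell:

```python
def listify(p=None, q=None):
    # Hypothetical sketch: make `p` a list, broadcast to the length of `q`.
    if p is None:
        p = []
    elif not isinstance(p, (list, tuple)):
        p = [p]
    # q can be an int (target length), a sized collection, or None
    n = q if isinstance(q, int) else 1 if q is None else len(q)
    if len(p) == 1:
        p = list(p) * n
    return list(p)

# Notebook-style probing of edge cases:
print(listify(1, 3))       # scalar broadcast to a length
print(listify([1, 2]))     # already a list - left alone?
print(listify(None))       # None edge case
print(listify("ab", 2))    # strings: treated as one item or two characters?
```

Each surprising answer here is a candidate for either a test that pins down the behavior or a bug report.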
When you’re ready, submit a PR with your tests - some of which might fail. For the failing ones, use pytest’s skip marker to indicate that they shouldn’t be run by CI yet (or fix the code that makes them fail!).
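A minimal sketch of that skip marker, using a hypothetical function under test (the `half` helper and the failure it exposes are invented for illustration):

```python
import pytest

def half(n):
    # Hypothetical function under test; integer division drops the
    # remainder, so odd inputs expose a known bug.
    return n // 2

def test_half_even():
    assert half(4) == 2

# Skipped so CI stays green until the underlying code is fixed;
# the reason string documents why it is parked.
@pytest.mark.skip(reason="known failure: odd inputs lose the remainder")
def test_half_odd():
    assert half(5) == 2.5
```

Running `pytest` on this file reports one passed and one skipped, and the skip reason shows up in the `-rs` summary.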
Also, if you see that the docs for the function are incomplete, submit a PR to the fastai_docs repo to update the appropriate notebook in docs_src/. Here is the documentation for contributing to the docs:
Please let me know if you have any questions. There’s a lot to digest here, so don’t worry if it takes a while and a few rounds of questions!
Thanks for the thread! I’m a newbie in AI and integration testing and am looking for any basic information about it (like this https://u-tor.com/topic/integration-testing). I started writing integration tests, but the documents in the previous post don’t open!