I’m curious whether anyone has experience using automated tests for their ML projects. During Jeremy’s lecture last night for Lesson 8 he mentioned how challenging it can be to work on a deep learning project because of the lack of feedback. It seems like this could be an opportunity for tests to provide at least some signal that you’re on the right path… or at least to double-check that you don’t have any obvious (or subtle) bugs in your implementation that are chipping away at your accuracy.
It’s probably unreasonable to have an all-encompassing test like “my model should find the couch and draw a square around it”, but it could be super helpful to double-check that a helper function you define in the notebook does the right thing for the expected input and returns an output of the correct shape. That said, Jupyter is pretty awesome and lessens the need for tests, since it makes it so easy to just poke a function and see what it does!
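For a concrete sense of what I mean, here’s a minimal sketch of the kind of shape/value test I’m imagining. The helper (`flatten_batch`) and its expected shapes are hypothetical, just stand-ins for whatever you define in your own notebook:

```python
import numpy as np

def flatten_batch(x):
    """Hypothetical helper: flatten a batch of images (n, h, w) to (n, h*w)."""
    return x.reshape(x.shape[0], -1)

def test_flatten_batch_shape():
    # Check the output shape for a typical input (e.g. a batch of 28x28 images).
    x = np.zeros((4, 28, 28))
    assert flatten_batch(x).shape == (4, 784)

def test_flatten_batch_values():
    # Check that the values survive the reshape in row-major order.
    x = np.arange(6).reshape(1, 2, 3)
    assert (flatten_batch(x) == np.arange(6)).all()

test_flatten_batch_shape()
test_flatten_batch_values()
print("all tests passed")
```

Tests like these could live in a plain `.py` file and run under pytest, so they keep guarding your helpers even after you’ve moved on from the cell where you originally poked at them.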
I’m happy to explore this idea myself (hopefully I can turn it into a blog article later!), but I’d also welcome anyone else’s experience or suggestions!