Improving/Expanding Functional Tests

To clarify, I wasn’t suggesting migrating tests to ipynb. I shared that for situations where the code needs to be tested in that environment. For example, https://github.com/stas00/ipyexperiments/ can only be run in the ipython/jupyter env, so there is no choice but to test in that environment.


understand, agree! Will focus on these low-level script asserts with the above list for now. But I could well imagine one could come up with different layers of tests, like notebook tests, memory tests or more functionally oriented tests, and so on …


Here is a nice tool I have been using. Not suitable for automatic testing, but if you are suspicious about a piece of code then it is really useful:

```python
import tracemalloc

tracemalloc.start()
snapshot1 = tracemalloc.take_snapshot()

# --------- your lines of code to be traced ---------

snapshot2 = tracemalloc.take_snapshot()
top_stats = snapshot2.compare_to(snapshot1, 'lineno')
print(f"Top 10 of {len(top_stats)}")
for stat in top_stats[:10]:
    print(stat)
```

@sgugger what would you think about creating 2 additional methods in fakes.py called fake_text_data and fake_image_data? Then fake_text_data, for example, would open and load a dummy file (like in test_text_data) and use that class to test one_item and other fastai functions that would not work with a TensorDataset. In other words, in test_basic_data I use fake_data, which uses TensorDataset, to test e.g. one_batch, but other functions like one_item we test from test_text_data.

Or we find a way to create fake_data not just with TensorDataset but rather with fastai functions - but I’m not sure how to do that.
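To give a rough idea of what I mean, a minimal sketch of such a fake_text_data helper could look like the following. The helper name, the dummy DataFrame and the TextClasDataBunch.from_df call are my assumptions, not actual fakes.py code:

```python
import pandas as pd
from fastai.text import TextClasDataBunch

def fake_text_data(path, n_items=20):
    "Tiny synthetic text DataBunch so one_item & co. can be exercised."
    df = pd.DataFrame({
        'label': [0, 1] * (n_items // 2),
        'text':  [f'fake sentence number {i}' for i in range(n_items)],
    })
    # building from a DataFrame keeps us independent of dummy files on disk;
    # loading the dummy file used in test_text_data would work just as well
    return TextClasDataBunch.from_df(path, train_df=df, valid_df=df,
                                     text_cols='text', label_cols='label')
```

An analogous fake_image_data could build on a few generated image tensors or on the tiny MNIST subset the library ships for its own tests.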

I’ve changed the way fake data is created. It’s still synthetic data, since it’s there to test functions in callbacks or basic train facility, but it’s fully compatible with the fastai library.


ok, thx! Will have a look after NYE at the latest, to use it for some test classes.

FYI, I moved tests/fakes.py to tests/utils/fakes.py - and please refactor any reusable functions into test util modules as you write tests. Thank you.


added new tests:

I’m not quite sure how to update the listing as it doesn’t indicate parts that need to be covered.


great, thx! It’s enough if you update here; I have updated the list above and added your name (not an ideal format for tracking, but good enough I guess).

I am working on more simple tests for the learner and hope to extend my PR soon … bloody day job eating up my time :wink:


@sgugger do you think there is a meaningful, or at least useful, test case for testing the split function on a linear model (as given in fakes.py)? I am aware of convnet and unet usages, for example.

Will check some other functions meanwhile; a small (almost trivial) PR with some incremental improvements is here:

Was thinking, btw, that if we have meaningful test cases we might want to paste them into the docs as examples as well. I had looked there for example usages of split, hadn’t found any, and upon searching the forum found image examples for split.

@stas any ideas whether and how to test split on the fake data in a useful / meaningful way?
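Just to make the question concrete, something like the following is what I had in mind. This is a rough sketch only: fake_learner is a placeholder, the model built in fakes.py may have a different structure, and the exact behaviour of Learner.split should be checked against the docs:

```python
def test_split_on_fake_learner():
    learn = fake_learner()  # placeholder for the helper in tests/utils/fakes.py
    # before splitting, the whole model should sit in a single layer group
    assert len(learn.layer_groups) == 1

    # split at the model's last child and expect two layer groups afterwards
    last_child = list(learn.model.children())[-1]
    learn.split([last_child])
    assert len(learn.layer_groups) == 2

    # no parameter should get lost in the regrouping
    n_params  = sum(1 for _ in learn.model.parameters())
    n_grouped = sum(1 for g in learn.layer_groups for _ in g.parameters())
    assert n_params == n_grouped
```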

FYI, added CaptureStdout context manager for a much more compact stdout capture and clearing:


oh good, I will look at it. Quite a few tests could easily work over screen output.

But here and there I was a little reluctant to use it too much - imagine we change some trivial screen output wording and the tests fail. So I want to be a little careful; maybe we need a small strategy / best practices to ensure we do not add tests that break all the time.
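One way to keep such tests from breaking on every wording change might be to assert only on a stable marker (a number, a keyword) rather than the full text. A minimal sketch using just the standard library (not the actual CaptureStdout helper, whose API may differ):

```python
import io
import re
from contextlib import redirect_stdout

def test_reports_epoch_count():
    buf = io.StringIO()
    with redirect_stdout(buf):
        # stand-in for the real call whose output we want to check
        print("Finished training: 3 epochs, final loss 0.1234")
    out = buf.getvalue()
    # assert on the stable part only, so rewording the message doesn't break the test
    assert re.search(r"\b3 epochs\b", out)
```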

Good stuff, thanks !

I don’t know, I have never used this.

Probably it will be easier to write meaningful test cases by prioritizing writing tests for bug reports, since then you always have a meaningful test case (assuming the report included enough of a setup to reproduce it). I’m not opposing your systematic approach, but perhaps a lot of those methods will almost never be used, so why not wait till someone asks about them, reports them not working, etc., and meanwhile focus on the small subset of the code that’s really important, i.e. follow the 80-20 Pareto principle.


fair point, yes, some of these might not add too much value - meanwhile I mostly follow the docs, simply to test what is described.

I was in no way implying that it should be used as much as possible, it was just a refactoring step (prompted by your own refactoring recommendation) and I was sharing what was refactored. That’s all.


yes, got it! thx.

and thx for your tips and corrections @stas! worth mentioning here


submitted a PR for inspecting and asserting over fit and fit_one_cycle, hope it makes sense.
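To give an idea of the kind of check I mean (a rough sketch, not the actual PR code; fake_learner is a placeholder and relying on learn.recorder.losses for the loss history is my assumption):

```python
def test_fit_reduces_training_loss():
    learn = fake_learner()  # placeholder for the helper in tests/utils/fakes.py
    learn.fit(3, lr=1e-2)
    losses = [float(l) for l in learn.recorder.losses]
    # with the synthetic data the training loss should at least not get worse
    assert losses[-1] <= losses[0]
```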

All ears if these tests can be improved.

@sgugger

Btw, there seem to be some regression tests broken in vision when running this locally with a fresh pull. I am getting in trouble with, among others, these lines in test_vision_data.py:

```python
from fastai.vision.data import verify_image
import PIL
# import responses
```

`test_verify_image`

So, while the individual tests of my changes work and the checks of my above PR pass on Azure, locally I am getting errors.

Not sure what it is; I tried some simple commenting/uncommenting of the culprit lines, which didn’t solve it. Happy to help if you give me some pointers.

@stas FYI