@sgugger what would you think of creating two additional methods in fakes.py called fake_text_data and fake_image_data? Then e.g. fake_text_data would open and load a dummy file (as in test_text_data) and use that class to test one_item and other fastai functions that would not work with a TensorDataset. In other words, in test_basic_data I use fake_data, which uses a TensorDataset to test e.g. one_batch, but other functions like one_item we test from test_text_data.
Or we find a way to create fake_data using fastai functions rather than just a TensorDataset, though I'm not sure how to do that. Something like the sketch below is what I have in mind.
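A rough sketch of the fake_text_data idea; the function name, csv name, and signature are hypothetical, not existing fakes.py API:

```python
from fastai.text import TextLMDataBunch

def fake_text_data(path):
    "Hypothetical: load a small dummy csv (as in test_text_data) into a fastai DataBunch."
    # 'texts.csv' is a placeholder for whatever dummy file test_text_data already uses
    return TextLMDataBunch.from_csv(path, 'texts.csv')
```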
I’ve changed the way fake data is created. It’s still synthetic data, since it’s there to test functions in callbacks or the basic train facility, but it’s now fully compatible with the fastai library. Roughly along these lines:
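A minimal sketch of the approach via the data block API; the names, defaults, and labeling function are illustrative, not the actual fakes.py code:

```python
import numpy as np
from fastai.data_block import ItemList

def fake_data(n=16, n_in=5, bs=4):
    "Sketch: build synthetic data through the data block API so one_item etc. work."
    items = list(np.random.randn(n, n_in).astype(np.float32))
    return (ItemList(items)
            .split_by_idx(list(range(n//4)))           # hold out a quarter for validation
            .label_from_func(lambda o: int(o[0] > 0))  # synthetic binary label
            .databunch(bs=bs))
```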
@sgugger do you think there is a meaningful, or at least useful, test case for testing the split function on a linear model (as given in fakes.py)? I'm aware of convnet and unet usages, for example. Something like the sketch below is what I had in mind.
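A rough sketch of such a test; fake_learner and the cut point are assumptions about fakes.py, not its actual API:

```python
def test_split_linear_model():
    learn = fake_learner()           # hypothetical helper returning a Learner over a small nn.Sequential
    learn.split(lambda m: (m[1],))   # cut the Sequential after its first child
    # split should leave one layer group before the cut and one after
    assert len(learn.layer_groups) == 2
```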
Will check some other functions meanwhile; a small (almost trivial) PR with some incremental improvements is here:
Btw, I was thinking that if we have meaningful test cases we might want to paste them into the docs as examples too. I had looked there for example usages of split and hadn't found any; searching the forum only turned up image examples for split.
Oh good, I will look at it. Quite a few tests could easily work over screen output.
But I was a little reluctant to use it too much - imagine we change some trivial wording in the screen output and the tests fail. So I want to be a little careful; maybe we need a strategy / best practices to ensure we do not add tests that break all the time. For example, asserting on a stable substring rather than the exact output, as in the sketch below.
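A minimal sketch of what I mean, with a hypothetical function under test:

```python
def test_training_output(capsys):
    run_fake_training()               # hypothetical: something that prints training progress
    captured = capsys.readouterr()
    # assert on a token that is unlikely to change, not on the exact wording
    assert 'epoch' in captured.out
```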
Probably it will be easier to write meaningful test cases by prioritizing tests for bug reports, since then you always have a meaningful test case (assuming the report included enough of a setup to reproduce it). I'm not opposing your systematic approach, but perhaps a lot of those methods will almost never be used, so why not wait till someone asks about one, reports it not working, etc., and meanwhile focus on the small subset of the code that's really important, i.e. following the 80/20 Pareto principle.
I was in no way implying that it should be used as much as possible, it was just a refactoring step (prompted by your own refactoring recommendation) and I was sharing what was refactored. That’s all.
Btw, there seem to be some regression tests broken in vision when running this locally with a fresh pull. I'm getting in trouble with, among others, these lines in test_vision_data.py.
Please provide the output of the failing tests and any pertinent info that you think will help reproduce the problem. I have no problem running the test suite, and neither does the Azure CI. You can check how the CI is set up at https://github.com/fastai/fastai/blob/master/azure-pipelines.yml (choose the entry that's similar to your setup and compare how yours differs).
Ok, thanks for the homework; I'm pretty sure it's a local problem and will check the yml setup. Let's assume the issue is closed; otherwise I'll come back with an analysis here.
Do we have any testing-suitable datasets with variable image sizes? Or perhaps it would be handy to add an autogenerator to fakes.py? Or we could take MNIST_TINY and make a variable-image-size copy of it by random cropping - that might be handy for other testing too. MNIST_TINY_VAR_SIZE? A rough sketch of such a generator is below.
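Something along these lines; the helper name, size bounds, and file layout are assumptions:

```python
import random
from pathlib import Path
from PIL import Image

def make_var_size_copy(src: Path, dst: Path, min_sz: int = 20, max_sz: int = 28):
    "Hypothetical: copy an image folder tree, random-cropping each png to a variable size."
    for f in src.rglob('*.png'):
        out = dst/f.relative_to(src)
        out.parent.mkdir(parents=True, exist_ok=True)
        img = Image.open(f)
        w = random.randint(min_sz, min(max_sz, img.width))
        h = random.randint(min_sz, min(max_sz, img.height))
        img.crop((0, 0, w, h)).save(out)
```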
I have a half-baked test for tests/test_vision_data.py which works, but it also needs to test variable image sizes; it's really a resize/collate_fn test:
```python
from fastai.vision import *

def test_from_name_re_resize(path, capsys):
    # path is a pytest fixture pointing at an image dataset with a train/ folder
    fnames = get_files(path/'train', recurse=True)
    pat = r'/([^/]+)/\d+\.png$'  # label = parent folder name; dot escaped so it only matches '.png'
    # check that the 3 forms of the size arg are supported and no warnings are issued
    for size in [14, (14,14), (14,20)]:
        data = ImageDataBunch.from_name_re(path, fnames, pat, ds_tfms=None, size=size)
        captured = capsys.readouterr()
        assert len(captured.err) == 0, captured.err
```