Will do. Can you point out what matters so I get it right next time? I couldn't find the changes in a quick look, but I probably overlooked something.
Do you mean sort_keys=True in json.dump (I think it's there) or something else?
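i.e. something like this (the path and the dict contents are just for illustration):

import json

registry = {"tests/test_train.py::test_fit": ["fastai.basic_train.Learner.fit"]}

# sort_keys=True keeps the output deterministic, so regenerating the file
# doesn't produce spurious diffs in git
with open("fastai/test_api_db.json", "w") as f:
    json.dump(registry, f, indent=2, sort_keys=True)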
As for to-dos (looping in @ashaw as well, with a suggested owner for each action):
Andrew: commit fastai/test_api_db.json to the repo so Andrew's code can use it (?)
Andrew: add test_api_db.json into MANIFEST.in
Andrew: make sure the function derives class.method automatically
Benedikt: make PRs for test_this after the details are agreed on and Andrew has committed his change
Benedikt: test_this() should be able to receive multiple entries in one call, so it should accept *args
Benedikt: a bigger puzzle to figure out is that the json file should only be written when ‘pytest’ completes with 100% success. You already have the check to run it only for non-specific tests; this is the second half of the setup. Currently, if I run ‘make test’ and abort it, we get {} in the json file, which is obviously not what we want, since we have just lost all the data.
I think so, but it should be easy to see if it doesn’t.
Two more things are still lacking (perhaps they're already on your TODO, but they are in the spec):
test_this() should be able to receive multiple entries in one call, so it should accept *args
a bigger puzzle to figure out is that the json file should only be written when ‘pytest’ completes with 100% success. You already have the check to run it only for non-specific tests; this is the second half of the setup.
currently, if I run ‘make test’ and abort it, we get {} in the json file, which is obviously not what we want, since we have just lost all the data.
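One way this could work (just a sketch, not necessarily how the current code is structured) is to collect the entries in memory and only flush them in a session-finish hook once the whole run has succeeded, e.g. in conftest.py:

# conftest.py (sketch): write the registry only on a fully successful run
import json

collected = {}  # filled by this_tests() during the run (illustrative)

def pytest_sessionfinish(session, exitstatus):
    # exitstatus == 0 means all collected tests passed; on a failure or an
    # aborted run we leave the existing test_api_db.json untouched
    if exitstatus != 0 or not collected:
        return
    with open("fastai/test_api_db.json", "w") as f:
        json.dump(collected, f, indent=2, sort_keys=True)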
Please not yet, not until the autogeneration of this file is sorted out (see the last posts above). Currently it’d break the release process, because the release process relies on checking for a clean git tree, and right now ‘make test’ would make it dirty if you remove the file from the ignore list.
Until then Andrew can generate it by running ‘make test’.
Agh, we have another small complication: pytest -k only does substring matching, so if we want to tell the user to run the test test_foo, and we have both test_foo and test_foobar in test_x.py, it will run both. So either we need to use a very clunky:
pytest -k "test_foo and not test_foobar" tests/test_x.py
and add a whole bunch of code to “unmatch” things,
or we’d probably be better off switching to the node syntax, in which case it’d be:
pytest tests/test_x.py::test_foo
I think the latter is a simpler and more robust solution. @ashaw, this is on your side - and thank you!
It also now makes it easy to run a whole bunch of them in one command:
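For instance (using the test names from the example above, just for illustration):

pytest tests/test_x.py::test_foo tests/test_x.py::test_foobar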
Played around with skeleton test generation just to see if it would work in practice.
I was able to completely generate this test file for nbtest from this notebook
Pretty fun/awesome/convenient, but not as useful as I’d hoped. Stas is spot on: it doesn’t fix the need to reference other tests and jump to source code. Going to kill this feature for now. I don’t think the benefits outweigh the cost of learning a new workflow.
I think the ultimate pie-in-the-sky feature would be something more similar to gen_doc.
A script to autogenerate test notebooks that are pre-populated with test stubs for all the functions in a module. That way it's easy for contributors to see what’s missing and fill in the rest.
Then on build/commit, we extract all the implemented methods to create the fastai/tests/test_*.py files exactly as they look now.
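A very rough sketch of what I mean by the stub generation (the helper name and the exact stub format are made up, not the real gen_doc API):

import inspect

def generate_test_stubs(module):
    "Emit a skeleton test function for every public callable in `module` (illustrative only)."
    stubs = []
    for name, _ in inspect.getmembers(module, callable):
        if name.startswith('_'): continue
        stubs.append(f"def test_{name}():\n"
                     f"    this_tests('{module.__name__}.{name}')\n"
                     f"    pass  # TODO: implement\n")
    return "\n".join(stubs)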
I was able to completely generate this test file for nbtest from this notebook
Looking great!
I think it’d be good to include -sv in the suggested command (pytest -sv ...), so that if there is any output it helps to understand what the test is doing. But perhaps not; I’m not attached to it. I just always use it when I develop tests.
The difficult part would be to predict the setup for the arguments of the test - but perhaps it can be stolen from existing tests that use the same argument(s)? Copy the beginning of the code up to the function variation - verbatim if it’s in the same test module?
And the 25 arguments some functions have will result in a totally overwhelming skeleton, so perhaps it should go slow on the variations. But it’s definitely a good idea if you can figure out how to do it, perhaps even if only for newly written functions, while they are still fresh in the mind of the programmer and easier to fill out.
And then there are **kwargs, for which you’d need a programmable 8-ball to make the deductions. But actually this is probably not a problem at all, since they are just passed through.
And the final, most difficult part is to predict what to assert.
I’m looking forward to seeing the auto-test-writer, @ashaw - and more brownie points for you if you manage to include some ML in it!
How is this related to the doc_test project? Unless the error started happening in conjunction with the new code here - though I can’t see a connection.
Please file an issue with the full output of:
pytest -sv tests/test_text_data.py::test_should_load_backwards_lm_1 (I assume that this is the failed test since you didn’t paste the whole output)
If you are stuck, perhaps make a dedicated branch that demonstrates the problem, tell me which branch it is and how to reproduce it, and I’d be happy to have a look.
This error is unrelated, nothing to do with doctest. I pulled down the branch, ran pip install -e ".[dev]", then make test. Having said that, I can see the tests on azure all ran through, I suppose - otherwise there would have been no merge.
azure CI has no GPU, so unfortunately we aren’t testing the same things and some tests don’t get run there at all.
In general, if you get a failure you don’t need to compare to any other test setups; all that matters is that your setup is failing, hence please file an issue as instructed above.
OK, anyway, I am actually really stuck on one issue that should be super simple, and I wonder whether there is a pytest bug with imports or I’m overlooking something obvious.
I adjusted this_tests to accept multiple positional args, i.e. this_tests(*args); I also tried (arg, *args, **kwargs).
Then in test_learn, I import:
from fastai.gen_doc.doctest import this_tests
And yes, I tried all other things like import *
When I run make test, all is fine: the multi-arg call does not throw any errors.
But whenever I run this:
pytest -sv -k tests/test_train.py
I get this error:
def test_lr_find(learn):
>       this_tests(learn.lr_find, Learner.lr_find)
E       TypeError: this_tests() takes 1 positional argument but 2 were given
It’s rather unclear to me what is happening… it seems as if pytest or my local config has issues importing functions with multiple arguments from packages in other folders: if I place a file doctesttwo into the same folder as the test scripts and import doctesttwo, it works (well, I still get the error above, which I’m asking you and Andrew about, since it comes from get_parent_func, which I believe you adjusted?).
So maybe we just need to compromise a little on where we place packages for pytest, when the test_* scripts need to import them.
It’s a good opportunity for you, then, to learn what to do in such situations.
The first step is to isolate the problematic component: if you suspect pytest, then remove it from the equation.
Write a plain python script and check that it works without pytest.
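For example, something as small as this (using the same import you mentioned; any two Learner methods will do):

# standalone check, no pytest in the picture
from fastai.basic_train import Learner
from fastai.gen_doc.doctest import this_tests

# if this raises the same TypeError, pytest is not the culprit;
# if it works here, the problem is in how pytest collects/imports things
this_tests(Learner.fit, Learner.save)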
wrt:
pytest -sv -k tests/test_train.py
you don’t need -k unless you are specifying a specific test name/pattern to run. But there is no harm in using it; without a pattern it just matches them all.
Since you didn’t show me your branch, I have no idea what you were trying to do and why it wasn’t working.
But this fixes the problem:
I switched the test module you started experimenting with to “live” vars:
def test_fit(learn):
    this_tests(learn.fit)
    [...]

def test_fit_one_cycle(learn):
    [...]
    with CaptureStdout() as cs: learn.fit_one_cycle(cycle_length, lr)
    this_tests(learn.fit_one_cycle, learn.recorder.__class__)
As you can see it’s easy with methods, but trickier with non-methods, since we don’t want them to return their contents.
So perhaps this_tests() could anticipate that someone is going to try:
this_tests(learn.recorder)
but that will send the learn.recorder object to this_tests, and that’s not what it wants. So it should recover gracefully and say that the argument(s) is wrong and that it expects either methods or strings (and for a string it should check that it exists) - perhaps it does already, I haven’t looked.
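Something along these lines is what I have in mind - just a sketch, not the actual implementation, and the registration part is elided:

def this_tests(*testedapis):
    "Register the APIs exercised by the calling test; accept callables or full-path strings."
    for api in testedapis:
        if callable(api):
            # derive 'Class.method' automatically from a (bound) method
            name = getattr(api, '__qualname__', repr(api))
        elif isinstance(api, str):
            name = api  # and additionally check that this path really exists
        else:
            raise TypeError(f"this_tests() expects a function/method or a string, "
                            f"got {type(api)} - did you pass an object instead?")
        # ... register `name` against the currently running test ...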
And of course, eventually we will remove that second arg in this particular test, since it’s not being tested, but for now keep it as built-in testing.
And perhaps all these different edge cases should actually be tested in a dedicated test module (tests/test_gen_doc_nbtest.py is probably the right place to add them).
Actually, I’m not sure where we would use data that says that test foo exercises object Bar. Do we even want this? I’m not sure. But nevertheless, test_this should be helpful to the test writer and anticipate objects/variables being passed to it instead of methods or strings.
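For example, something like this could go into that module (it assumes the TypeError behaviour sketched above and reuses the existing learn fixture):

import pytest
from fastai.gen_doc.doctest import this_tests

def test_this_tests_rejects_plain_objects(learn):
    # `learn` is an object, not a method or a string, so this should fail cleanly
    with pytest.raises(TypeError):
        this_tests(learn)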