Test Registry project

Will do. Can you point out what matters so I get it right next time? I couldn't find the changes in a quick look, but I probably overlooked something.

You mean sort_keys=True in json.dump (I think it's there), or something else?

As for to-dos (looping in @ashaw as well, with a suggested action owner for each):

  • Andrew: commit fastai/test_api_db.json to the repo so Andrew's code can use it (?)
  • Andrew: add test_api_db.json to MANIFEST.in (see the snippet after this list)
  • Andrew: make sure the function derives class.method automatically (see the sketch after this list)
  • Benedikt: make PRs for this_tests after agreement on the details and after Andrew has committed the change
  • Benedikt: this_tests() should be able to receive multiple entries in one call, so it should be *args
  • Benedikt: a bigger puzzle to figure out is that the only time the json file should be written is when ‘pytest’ completes with 100% success. You already have the check to run it only for non-specific tests; this is the 2nd half of the setup. Currently, if I run ‘make test’ and abort it, we get {} in the json file - obviously not what we want, as we have just lost all the data.
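For the MANIFEST.in item, the standard include line should presumably be all that's needed (the file name is the one above):

include fastai/test_api_db.json

And for deriving class.method automatically, the usual approach is via __module__ and __qualname__ - a sketch, not the project's actual code:

def api_name(func):
    # e.g. api_name(Learner.fit) -> 'fastai.basic_train.Learner.fit'
    return f"{func.__module__}.{func.__qualname__}"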

I think so, but it should be easy to see if it doesn’t.

2 more things that are still lacking (perhaps they are on your TODO, but they are in the spec):

  1. this_tests() should be able to receive multiple entries in one call, so it should be *args

  2. a bigger puzzle to figure out is that the only time the json file should be written is when ‘pytest’ completes with 100% success. You already have the check to run it only for non-specific tests; this is the 2nd half of the setup.

    Currently, if I run ‘make test’ and abort it, we get {} in the json file - obviously not what we want, as we have just lost all the data.


Both noted (must have overlooked the multiple args in the spec!)

wrt:

I believe we can catch this in pytest after the report and just block the file creation or delete it. Will check!
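A minimal sketch of that idea in conftest.py (pytest_sessionfinish and its exitstatus argument are real pytest API; the import path and the output location are my assumptions from this thread):

import json
from fastai.gen_doc.doctest import RegisterTestsPerAPI

def pytest_sessionfinish(session, exitstatus):
    # persist the registry only when the whole run passed (exitstatus == 0);
    # an aborted or failing run then leaves the existing json file alone
    # instead of overwriting it with {}
    if exitstatus == 0:
        with open('fastai/test_api_db.json', 'w') as f:
            json.dump(RegisterTestsPerAPI.apiTestsMap, f, sort_keys=True, indent=2)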


Please, not yet - not until the autogeneration of this file is sorted out (see the last posts above). Currently it'd break the release process, because the latter relies on checking for a clean git status, and right now ‘make test’ would make the checkout dirty if you removed the file from the ignores.
Until then Andrew can generate it by running ‘make test’.


For a reason not entirely clear to me (it must be about imports), when running pytest -sv -k tests/test_train.py

I can get it to work only by having the function in the test file itself, e.g. in test_learner

def this_tests(*args):
    def register(*args):
        RegisterTestsPerAPI.this_tests(*args)

Getting error:

TypeError: this_tests() takes 1 positional argument but 2 were given

The issue only exists when calling pytest -sv -k tests/test_train.py

Import used (and I tried all other combos): from fastai.gen_doc.doctest import this_tests

When using make test, I can have that function in doctest only, and the import in conftest works: from fastai.gen_doc.doctest import *

Also tried with fixtures, no luck. The issue is only about the correct import for multi-positional arguments like *args.

Tips welcome; I've tried quite a bit and googled around.

Check your definition; unless you changed it, it expects 1 positional arg:

class RegisterTestsPerAPI:
    apiTestsMap = dict()

    @staticmethod
    def this_tests(testedapi):
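For reference, switching it to *args would only change the signature - a sketch, with the registration body elided:

class RegisterTestsPerAPI:
    apiTestsMap = dict()

    @staticmethod
    def this_tests(*testedapis):
        for testedapi in testedapis:
            ...  # register each tested API as before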

Agh, we have another small complication: pytest -k does only substring matching, so if we want to tell the user to run test_foo and we have both test_foo and test_foobar in test_x.py, it will run both. So either we need to use a very clunky:

pytest -k "test_foo and not test_foobar" tests/test_x.py

and add a whole bunch of code to “unmatch” things,

or we probably better switch to the node syntax, so in this case it’d be:

pytest tests/test_x.py::test_foo

I think the latter is a simpler and more robust solution. @ashaw, this is on your side - and thank you!

it also now makes it easy to have a whole bunch of them in one command:

pytest tests/test_x.py::test_foo tests/test_x.py::test_bar tests/test_x.py::test_tar


Good catch! Just submitted a PR.


Played around with skeleton test generation just to see if it would work in practice.

I was able to completely generate this test file for nbtest from this notebook

Pretty fun/awesome/convenient, but not as useful as I'd hoped. Stas is spot on - it doesn't fix the need to reference other tests and jump to the source code. Going to kill this feature for now. I don't think the benefits outweigh the cost of learning a new workflow.

I think the ultimate pie-in-the-sky feature would be something more similar to gen_doc:
a script to autogenerate test notebooks that are pre-populated with test stubs for all the functions in a module. This way it makes it easy for contributors to see what's missing and fill in the rest.
Then on build/commit, we extract all the implemented methods to create the fastai/tests/test_*.py files exactly as they look now.
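A toy sketch of the stub pre-population part - list a module's public functions with inspect and emit empty test skeletons (illustrative only, not a working generator):

import inspect

def make_test_stubs(module):
    stubs = []
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if not name.startswith('_'):
            stubs.append(f"def test_{name}():\n"
                         f"    this_tests({module.__name__}.{name})\n"
                         f"    pass  # TODO: exercise {name} and assert\n")
    return '\n'.join(stubs)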


I was able to completely generate this test file for nbtest from this notebook

Looking great!

I think it'd be good to include -sv in the suggested command, i.e. pytest -sv ..., so that if there is output it helps to understand what the test is doing. But perhaps not. I'm not attached. I just always use it when I develop tests.

  1. the difficult part would be to predict the setup for the arguments of the test - but perhaps it can be stolen from existing tests that use the same argument(s)? Copy the beginning code up to the function variation - verbatim if it's in the same test module?

  2. and the 25 arguments some functions have will result in a totally overwhelming skeleton - so perhaps it should go slow on the variations :wink: - but it's definitely a good idea if you figure out how to do that - perhaps even if it's only for newly written functions, when they are fresh in the mind of the programmer and will be easier to fill out.

  3. and then there are **kwargs - for which you'd need a programmable 8-ball to make the deductions. But actually this is probably not a problem at all, since they are passed through.

  4. and the final, most difficult part is to predict what to assert

I’m looking forward to seeing the auto-test-writer, @ashaw - and more brownie points for you if you manage to include some ML in it!

Sure, I changed that definition locally :wink: The *args version works fine when running all tests; it's just that pytest -k for one test fails.

Never mind, I'll keep on digging.

@ashaw, errors in tests/test_gen_doc_nbtest.py. Run make test locally after a new pull :wink:

Assuming you adjusted that during a PR for this exercise here?

@stas, I'm also seeing an error in test_text_data.py when running make test:

np.testing.assert_array_equal(batch_backwards, np.flip(batch_forward))
E TypeError: flip() missing 1 required positional argument: 'axis'
tests/test_text_data.py:92: TypeError

How is this related to the doc_test project? Unless the error started happening in conjunction with the new code here - though I can't see a connection.

Please file an issue with the full output of:

  1. pytest -sv tests/test_text_data.py::test_should_load_backwards_lm_1 (I assume that this is the failed test since you didn’t paste the whole output)
  2. python -m fastai.utils.show_install

Thank you.

If you are stuck, perhaps make a dedicated branch that demonstrates the problem, tell me which branch it is and how to reproduce the problem, and I'd be happy to have a look.

This error is unrelated, nothing to do with doctest. I pulled down the branch, ran pip install -e ".[dev]", then make test. Having said that, I can see the tests on azure all ran through, I suppose - otherwise no merge.

azure CI has no GPU, so unfortunately we aren’t testing the same things and some tests don’t get run there at all.

In general, if you get a failure, you don’t need to compare to any other test setups, all that matters is that your setup is failing, hence please file an Issue as instructed above.

Thank you.

OK, anyway, I am actually really stuck on one issue that should be super simple, and I do wonder if there is a pytest bug with imports or I'm overlooking something obvious.

I adjusted this_tests to multi-positional args, i.e. this_tests(*args); I also tried arg, *args and **kwargs.

Then in test_learn, importing

from fastai.gen_doc.doctest import this_tests

And yes, I tried all other things like import *

When I run make test, all is fine: the multi-arg call does not throw any errors.

But whenever I run this:

pytest -sv -k tests/test_train.py

I get this error:

def test_lr_find(learn):
>       this_tests(learn.lr_find, Learner.lr_find)
E       TypeError: this_tests() takes 1 positional argument but 2 were given

Rather unclear to me what is happening … it seems as if pytest or my local config has issues importing functions with multiple arguments from packages in other folders: if I place a file doctesttwo in the same folder as the test scripts and import doctesttwo, it works (well, I then get the error above, which I'm asking Andrew and you about, as it results from get_parent_func, which I believe you adjusted?).

So maybe we just need to compromise a little on where we place packages for pytest, when the test_* scripts need to import them.

It's a good opportunity for you then to learn what to do in such situations.

The first step is to isolate the problematic component. If you suspect pytest, then remove it from the equation:

write a plain python script and test that it works w/o pytest.
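For example, something along these lines (the import is the one from this thread; the dummy functions are stand-ins, and the real this_tests may have expectations beyond its signature):

from fastai.gen_doc.doctest import this_tests

def fake_api_a(): pass
def fake_api_b(): pass

# if this also raises "this_tests() takes 1 positional argument but 2 were
# given" outside of pytest, then the old single-arg definition is the one
# being imported and pytest is not the culprit
this_tests(fake_api_a, fake_api_b)
print('this_tests accepted multiple arguments')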


wrt:

pytest -sv -k tests/test_train.py

you don't need -k unless you are specifying a specific test name/pattern to run. But there is no harm in using it; it just matches them all without a pattern.
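i.e. to run the whole file, and to run only the tests matching a pattern, respectively (illustrative invocations):

pytest -sv tests/test_train.py
pytest -sv -k lr_find tests/test_train.py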


Since you didn't show me your branch, I have no idea what you were trying to do or why it wasn't working.
But this fixes the problem:


I switched the test module you started experimenting with to “live” vars:

def test_fit(learn):
    this_tests(learn.fit)
[...]
def test_fit_one_cycle(learn):
[...]
    with CaptureStdout() as cs: learn.fit_one_cycle(cycle_length, lr)
    this_tests(learn.fit_one_cycle, learn.recorder.__class__)

As you can see it's easy with methods, but trickier with non-methods, since we don't want to send their contents.

So perhaps this_tests() could anticipate that someone is going to try:

    this_tests(learn.recorder)

but that will send the learn.recorder object to this_tests, and that's not what it wants. So it should gracefully recover and say that the argument(s) are wrong, and that it expects either methods or strings (and for a string it should check that the named API exists) - perhaps it does already, I haven't looked.
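Something along these lines, perhaps - a hypothetical sketch, the real this_tests in fastai.gen_doc.doctest may look quite different:

def this_tests(*testedapis):
    for api in testedapis:
        if callable(api):
            # derive 'module.Class.method' from the callable
            name = f'{api.__module__}.{api.__qualname__}'
        elif isinstance(api, str):
            # a string should name an API that actually exists - check it here
            name = api
        else:
            raise TypeError(f'this_tests() expects a callable or a string, '
                            f'got {type(api).__name__} - e.g. learn.fit, '
                            f'not learn.recorder')
        ...  # update RegisterTestsPerAPI.apiTestsMap with name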

And of course, eventually we will remove that second arg in this particular test, since it's not what's being tested, but for now let's keep it as built-in testing.

And perhaps all these different edge cases should actually be tested in a dedicated test module (tests/test_gen_doc_nbtest.py is probably the right place to add them).

Actually, I'm not sure where we would use data that says that test foo exercises object Bar. Do we even want this? I'm not sure. But nevertheless, this_tests should be helpful to the test writer and anticipate objects/variables being passed to it instead of methods or strings.

I also switched to plural *testedapis, so it’s more intuitive to read the code.
