Test Registry project

@stas Independently of the above PR, on adjusting the json db only after success: I do agree this had better be the semi-manual process you described, as we might overlook complications and just want to finish this.

--> see PR here: https://github.com/fastai/fastai/pull/1673

Generate the json db only on a full release:

new pytest flag -regtestapidb:

  • in conftest.py, check sys.argv for the flag (see the sketch after this list)
  • only if this param was passed will we regenerate test_api_db.json
    In the Makefile:
  • the flag will need to be included in make test-full before making a release:
    test-full: ## run all tests, including slow ones, print summary
    pytest --runslow -ra -regtestapidb
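A minimal sketch of what that conftest.py check could look like, assuming the sys.argv approach above (the db path and the way entries are collected are placeholders, not the actual implementation):

    # conftest.py (sketch)
    import json, sys

    # regenerate the registry only when the flag was passed on the command line
    regenerate_registry = '-regtestapidb' in sys.argv
    collected_api_tests = {}  # placeholder: filled by this_tests() during the run

    def pytest_sessionfinish(session, exitstatus):
        # standard pytest hook: runs once after the whole test session finishes
        if regenerate_registry:
            with open('test_api_db.json', 'w') as f:
                json.dump(collected_api_tests, f, indent=2)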

Process:

  • when we update all the docs in a routine fashion, we will re-run the tests first to regenerate the registry
1 Like

@stas, I can see the value of blocking incorrect usage of this_tests before registering; I'm summarising that below.

But a try/except pattern to e.g. not register

this_tests('blah')

seems like an anti-pattern: it seems better that the developer who put this in simply gets the exception and the PR can never go through, rather than hiding the exception.

I think you want to block registering cases like this_tests(print)? In the PR: https://github.com/fastai/fastai/pull/1673 :slight_smile:

this_tests(learn.abc) --> non-existing var/method. This will just throw an exception for the developer
this_tests(1234) --> some number, gives an exception - good
this_tests(Learner.fit) --> take the instance version - block? I guess sometimes we call class methods
this_tests(print) --> it's not a fast.ai function. doctest.full_name_with_qualname gives builtins.print, so I filter this out with a match

Blocking incorrect Arguments in this_tests

Goal:

  • if a coder adds an incorrect argument, one that is not a fast.ai method, e.g. this_tests('blah'), we will not inject this string into the test api db
  • this would be especially useful if we can dispatch the task as 'low hanging fruit' to other coders

Implementation:

  • try/except around resolving the argument: a fast.ai API should not cause an exception, whereas a random string will.

      python -c 'import fastai; blah;' --> exception
      python -c 'import fastai; learn.fit'
    

Remark: will this work? E.g. if we give incorrect params we also get errors even though the function might be part of fastai?

  • use Python's inspect module: check whether the argument is a function, comes from the right package and so on, to determine that it is a fast.ai function (a rough sketch follows)
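A rough sketch of such an inspect-based check (the helper name is made up; the actual PR filters on the fully qualified name, as described above):

    import inspect

    def belongs_to_fastai(func) -> bool:
        # sketch: True only for plain functions/methods/classes defined in the fastai package
        if not (inspect.isfunction(func) or inspect.ismethod(func) or inspect.isclass(func)):
            return False  # filters out builtins like print
        module = inspect.getmodule(func)
        return module is not None and module.__name__.startswith('fastai')

e.g. belongs_to_fastai(print) is False, while belongs_to_fastai(Learner.fit) is True once fastai.basic_train is imported.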

OK, I did some reorg/cleanup and tried to give more consistent/intuitive names, so the latest incarnation is TestAPIRegistry, and the matching flag is --testapireg. So we are creating a Test API registry. Hope that is an intuitive change. Too bad doctest is already used for something very different in the Python world.

Note, I moved almost everything out of conftest into doctest.py, and it's starting to evolve into a normal class, so we might switch to that eventually, instead of using class methods. But it's fine for now.

I implemented the check for valid fastai API. Your code already had everything that was needed; I just needed to add a few exceptions and word them so that it's easy to quickly see what needs to be done.

I added a test that tests this_tests itself. Please check that all cases are covered.

For your convenience, in conftest.py I left #individualtests = 0, so you can uncomment it to do quick tests without needing to run the full make test.

I will go over your earlier notes and make a new summary, including new issues:

  • Some tests will sometimes add the same entries more than once, see this example. So perhaps we should check for already existing entries in our registry? Surely, the test could be modified to hardcode "fastai.basic_train.load_learner" but why put more onus on the developers when we can easily remove dups automatically.
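For example, a small sketch of a dedup guard in the registry's add path (the method and attribute names here are just placeholders, not the actual implementation):

    class TestAPIRegistry:
        # api fully-qualified name -> list of "tests/test_file.py::test_name" entries
        api_tests_map = {}

        @classmethod
        def register(cls, api_name, test_id):
            tests = cls.api_tests_map.setdefault(api_name, [])
            # skip duplicates so a test calling this_tests repeatedly registers only once
            if test_id not in tests:
                tests.append(test_id)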

I reviewed your previous comments and unless I missed something I think the only unaddressed item is:

  • I use inspect.currentframe, while I seem to remember we want to just say currentframe and import it from inspect accordingly

but I’m not sure what you mean there.

1 Like

Added support for strings as arguments.

So now you can do:

    # function by reference (and self test)
    this_tests(this_tests)

    import fastai
    # explicit fully qualified function (requires all the sub-modules to be loaded)
    this_tests(fastai.gen_doc.doctest.this_tests)

    # explicit fully qualified function as a string
    this_tests('fastai.gen_doc.doctest.this_tests')

(side effect to fix from my previous comment - this now registers this_tests 3 times!)

and these will fail with either ‘not a function’ or ‘not a fastai api’ errors:

    # not a real function
    this_tests('foo bar')

    # a string that looks like a fastai function, but it does not exist
    this_tests('fastai.gen_doc.doctest.doesntexistreally')

    # not a fastai function
    import numpy as np
    this_tests(np.any)
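A minimal sketch of how a string argument could be resolved and vetted (the helper name is invented; the actual logic in the PR may differ):

    import importlib

    def resolve_api_string(fullname:str):
        "Sketch: turn 'fastai.gen_doc.doctest.this_tests' into the function object."
        module_name, _, attr = fullname.rpartition('.')
        try:
            module = importlib.import_module(module_name)
            func = getattr(module, attr)
        except (ValueError, ImportError, AttributeError):
            raise ValueError(f"'{fullname}' is not a function") from None
        if not module.__name__.startswith('fastai'):
            raise ValueError(f"'{fullname}' is not a fastai api")
        return func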

1 Like

I mean

something like

from inspect import currentframe

so that we say currentframe instead of inspect.currentframe. I think somewhere you mentioned we want to avoid this package prefix notation?

Should check and make a small PR :wink:

1 Like

great stuff, thx @stas for getting this over the finish line.

When we are good to go, we could put up guidelines on how to use this_tests, and then we can start?

I was thinking to make one PR per test_ file so we can commit them independently if something is missing. I did see that in test_train you added a Recorder as tested next to fit. So, are you suggesting we also add APIs that are tested indirectly, because they are used by the main function?

No, it was just an example to test multiple args, I removed it. Thank you for remembering that odd case.

Ha, now test_this_tests inserts this_tests 5 times (!) - so we should definitely remove dups :slight_smile:

We could “this_tests” objects too, but only if it’ll help the docs. Then yes, why not.

I suggest we wait till @ashaw gets the docgen side fully operational. In case we discover some gotchas we haven’t thought of.

There are now enough entries to work with to build the docs. So let's see how that unfolds and then expand into other tests?

Andrew, make test-full will now generate the registry file. Or, if you don't want to wait, uncomment #individualtests = 0 in conftest.py and then run:

pytest -sv tests/test_gen_doc_nbtest.py tests/test_train.py tests/test_basic_train.py --testapireg

Let us know if you need anything else to complete the bridge. Thank you.

1 Like

ok, afaict no action on me.

Might pick some normal test scripts, when I find the time.

thx Stas, fun initiative!

1 Like

There is one as posted above:

e.g., the test_this_tests test now inserts this_tests 5 times (!) - so we should definitely remove dups :slight_smile:

1 Like

ok, let me check this

1 Like

Yes, please.

Does pytest have a hook/callback that will be called at the end of each test?

If it does, we could check whether each test has this_tests, to help new test writing so that this function won't be forgotten, and give a warning if a test has no this_tests in it? Otherwise it's difficult to enforce optional, yet important, things.

this

@pytest.fixture(scope="session", autouse=True)

def doctest_collector_start(request):

in conftest gets called at the end of each single test, so here we could give a warning.

How would we inspect, and what would the format of the warning be? I think it would be good; e.g. right now I make a manual report on such things - better to have it automated and added to the standard report on test coverage.

def doctest_collector_start(request):

no, it is run only once per session; a session covers all tests.

How would we inspect, and what would the format of the warning be? I think it would be good; e.g. right now I make a manual report on such things - better to have it automated and added to the standard report on test coverage.

We don’t inspect code, but we extend TestAPIRegistry to do something about it, so that we know that a test was run and it didn’t call this_tests - even a simple flag will do. It will get set before each test, get reset by this_tests, and at the end of the test we will know whether the test did its duty or not. That means we need a global hook for pre- and post-test runs. Most likely pytest has that - just need to figure out how this is done.

A message could be simple - please add this_tests(what you test) to test test_foo in tests/test_bar.py.

1 Like

Warning when this_tests not used

Sorry, copied in the wrong function:

@pytest.hookimpl(tryfirst=True, hookwrapper=True)

def pytest_runtest_makereport(item, call):

So this runs after tests: when you run make test you get lots of calls to this function (checked with the debugger), though I'm not sure whether it's after every test method or after every file. But for sure it's not just one call after all tests ran.

I haven't looked very hard yet for a before-test event.

Complication:
But note, the dictionary is built up with APIs as keys, so the moment we just run along the tests and want to enter a 'no this_tests used' flag, we would need to make a dummy API entry and then take that entry out again. Maybe there is something more elegant, or we open a 2nd json to log all tests that do not call this_tests, in a different format where there is no API key.

yes, it runs this hook 3 times per test (or 2 if skipped) - once for each stage: setup, call and teardown, so we would need this:

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    res = outcome.get_result()
    if res.when == "setup": print(f"setup: raise flag")
    if res.when == "call" and res.failed: TestAPIRegistry.tests_failed()
    if res.when == "teardown": print(f"teardown: check/reset flag")

except with calls to TestAPIRegistry doing the work instead of the prints:

    if res.when == "setup": TestAPIRegistry.this_tests_flag_on()
    if res.when == "call" and res.failed:  TestAPIRegistry.tests_failed()
    if res.when == "teardown":  TestAPIRegistry.this_tests_flag_check()

and this_tests call will reset the flag.

So that we have a non-alarm situation:

  1. raise flag
  2. this_tests resets flag
  3. all is good

and the alarm situation:

  1. raise flag
  2. this_tests doesn’t get called and thus doesn’t reset flag
  3. we detect the flag wasn’t reset and print a warning (including file::test string)

Let me know if you’d like to implement that. It will be 2 more tiny functions and ~5 lines of code.
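Not the final shape, but a rough sketch of what those tiny functions could look like (the attribute names are assumptions; the hook-facing names match the snippet above, plus the reset helper that this_tests itself would call):

    class TestAPIRegistry:
        missing_this_tests = []  # "tests/test_bar.py::test_foo" ids that never called this_tests
        _flag, _current_test = False, None

        @classmethod
        def this_tests_flag_on(cls, test_id=None):
            # setup stage: raise the flag before the test body runs
            cls._flag, cls._current_test = True, test_id

        @classmethod
        def this_tests_flag_reset(cls):
            # called from inside this_tests(): the test did its duty
            cls._flag = False

        @classmethod
        def this_tests_flag_check(cls):
            # teardown stage: if the flag is still up, remember the offending test
            if cls._flag and cls._current_test:
                cls.missing_this_tests.append(cls._current_test)

In the hook, item.nodeid could be passed to this_tests_flag_on() to get the file::test string for the warning.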

1 Like

Can do. I was considering, at the start of all tests, initializing the dictionary with all fastai APIs and a 'no this_tests call' entry. Not sure how to get that list, but maybe inspect gives that.

Then we replace entries as this_tests gets called; whatever is left at the end still has that marker. Simple; the disadvantage is that maybe we do not give a warning that this_tests is missing while the test runs.

It's a way, but it sounds way more complicated. I think my proposed solution is much simpler. Except we should probably compile a list of the module.py::tests that lack this_tests and print them all out at once at the very end of the test suite, rather than one by one.
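For instance, a tiny sketch of that end-of-suite report in conftest.py (it assumes the flag bookkeeping collects offending test ids into a list, as in the earlier sketch; missing_this_tests is a hypothetical attribute):

    def pytest_sessionfinish(session, exitstatus):
        # print one consolidated warning at the very end of the test suite
        from fastai.gen_doc.doctest import TestAPIRegistry
        missing = getattr(TestAPIRegistry, 'missing_this_tests', [])  # hypothetical attribute
        if missing:
            print("\nTests that never called this_tests():")
            for test_id in missing:
                print(f"  please add this_tests(<what you test>) to {test_id}")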

1 Like

ok, will do asap in a small PR

1 Like

Sorry I think I missed something. Can you remind me what’s still needed from the docs side?
Just need to include the test_api_db.json right?
Otherwise, show_test should be working right now