Test Registry project

It’s just what I had in my file, @stas! So, I’m a little embarrassed I caused you work here, but this is some strange local issue then, I suppose … It’s a trivial task, I’m aware.

I am downloading your change and will run it locally to check.

No need to be embarrassed, @Benudek, it’s normal to get stuck and it’s normal to not see something obvious, especially when tired. I often encounter similar situations myself and sometimes it takes me an hour to find an obvious silly mistake…

and:

Actually, I’m not sure where we would use data that says that test foo exercises object Bar. Do we even want this? I’m not sure. But nevertheless, test_this should be helpful to the test writer and anticipate objects/variables being passed to it instead of methods or strings.

If we only pass method and function names, perhaps a better name is needed, since apis also reads oddly; api is already plural.

def this_tests(*funcs): perhaps? No need for the tested preamble either; it’s clear from the context.
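For context, a minimal sketch of what such a registry call could look like (hypothetical names throughout, not the actual fastai implementation): the caller’s file and line are captured with inspect, and each passed callable is mapped to a fully qualified name.

import inspect

_test_registry = {}  # hypothetical: maps API name -> list of test locations

def this_tests(*funcs):
    # record which file/line called us, i.e. the test being registered
    caller = inspect.stack()[1]
    for func in funcs:
        name = f"{func.__module__}.{func.__qualname__}"
        if name not in _test_registry:
            _test_registry[name] = []
        _test_registry[name].append({"file": caller.filename, "line": caller.lineno})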

OK, good to hear! Actually, what I mostly hate is causing others work; at least I can say I really did spend a lot of time trying. I downloaded your code and it just works! But I can see no difference whatsoever from what I did.

I will look at this other task then: do not create that json file if there are errors in make test.

thx @stas

No need to ask; always file such findings, as it’ll help other users find a solution.

Too bad that we no longer know which numpy version you had a problem with. Perhaps your pip install log is still around (check the end of it, where it says uninstalling…); i.e. we may need to require a higher numpy version in our dependencies.

Also please try to keep the thread on topic, this is again totally unrelated. Please either start a new thread or move this discussion to the issues.

When you’re ready to improve things, refactor this_tests to use:
https://docs.python.org/3/library/collections.html#collections.defaultdict
It’ll cut the code size in half.
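As a rough illustration of the reduction meant here (continuing the hypothetical sketch above, not the actual fastai code), defaultdict removes the “create the list if the key is missing” boilerplate:

from collections import defaultdict

_test_registry = defaultdict(list)  # missing keys start out as empty lists

def register(api_name, entry):
    # no 'if api_name not in _test_registry: ...' dance needed anymore
    _test_registry[api_name].append(entry)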

wrt

I cannot reproduce this behavior. When I have errors, or e.g. insert an assert that will fail, no json file is written. From the docs I would expect addfinalizer to always write and that we would need yield to write only on success, but in my tests things behave as desired (only writing on success) with the finalizer.

I did have a couple of {} files in previous runs while changing code, but that was related to issues in that code.
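For reference, the two teardown styles being compared look roughly like this in a conftest.py (a sketch with a hypothetical writer name; pick one of the two, they are shown together only for comparison). As far as pytest’s own semantics go, both styles run at session teardown regardless of whether individual tests passed or failed, which is why an explicit pass/fail check, as discussed below, ends up being needed:

import pytest

@pytest.fixture(scope="session", autouse=True)
def doctest_collector():
    yield                  # all tests run while the fixture is paused here
    write_test_api_db()    # teardown: runs once the session is over

@pytest.fixture(scope="session", autouse=True)
def doctest_collector_finalizer(request):
    request.addfinalizer(write_test_api_db)  # same effect, older style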

That was just an example. The point is that it must never write a partial set of entries. That’s why the only time it should create/update this file is when all tests are run and none failed. A single failed test that had a test_this call in it will result in lost data from test_api_db.json. Do you understand why there is this very specific requirement?

The other approach is not to overwrite the file but to update it with every test run, but that is a very problematic challenge, since it requires much more complex code (or switching to an actual db), not least because you won’t be able to purge old outdated data. So you would still need to run all tests before you could confidently discard all outdated entries. Hence the overwrite solution, and only when all tests are 100% correct.

I believe I understand the requirement, but it seems to be met already. When I run pytest -sv or make test, no file is written as soon as there is an error. I didn’t expect that from the pytest docs and assumed we would need to work with yield, but my tests suggest it’s fine.

Additionally, with regard to your example, we could simply check whether the dictionary is empty and, if so, not write a file. I cannot see how this situation would ever occur; I only had it while developing and messing up the code that builds the doc. We can also ensure, when we build the dictionary, that we only add entries if all the info, including the lineno, is given, but afaict that is the current behavior, so we should not end up with anything like half commits.
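The empty-dictionary guard mentioned here is small; roughly something along these lines (hypothetical names, a sketch rather than the actual PR):

import json

def save_registry(registry, path):
    if not registry:  # nothing was registered: leave any existing file alone
        return
    with open(path, 'w') as f:
        json.dump(registry, f, indent=2)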

I do not fully understand why we need this behavior though; it seems acceptable to also register a failing test? It’s still a test and supposedly will be fixed soon.

So, from my limited POV, the current functionality is actually ready to go?

I do not fully understand why we need this behavior though; it seems acceptable to also register a failing test? It’s still a test and supposedly will be fixed soon.

It depends on where it fails; it could fail before the this_tests call happens. Then it would be inconsistent.

So, from my limited POV, the current functionality is actually ready to go?

I don’t know; once you think it’s complete, let’s merge your changes and then have a look and see.

Not watertight, but typically I would call this_tests as the first line (so no room for failure). Even if this_tests is called later on and there is an error before it: so yes, that failed test is not registered while other failed tests are. But maybe this would be building code for a temporary exception flow, really? Typically, if we have failing tests they should be fixed, fast.

I merged my stuff (and you kindly added the *args). I only have this code improvement on my list: https://docs.python.org/3/library/collections.html#collections.defaultdict but it shouldn’t change functionality.

I think the next stage would be merging my and @ashaw’s things and testing it all as one to see how it works?

OK, I thought there was more code to commit from your side. I gave it a quick test.

  1. If I start ‘make test’ and then interrupt it, I end up with {} in fastai/test_api_db.json; i.e. this is not quite ready yet.

  2. It still doesn’t handle incorrect arguments:
    this_tests(learn.recorder)
    or:
    this_tests('blah')

While (2) is an improvement, (1) is a showstopper.

Oh yeah, I had to add an import inspect at the top for one of the tests, so it’s not completely from the notebook.

OK, I indeed didn’t test manually interrupting it.

For this specific part I can look at yield, but probably we should just say: whenever there is an empty dictionary, we just don’t write the file, and we are good :slight_smile:

I will look at it soon; I wanted to check your comment on this here too: https://docs.python.org/3/library/collections.html#collections.defaultdict

That might be a little tricky: how would I know what an existing API is - inspect the code? I was rather thinking that later, when calling up the docs, one needs to handle that. @ashaw essentially I would expect there to be a dead entry in the json db, and folks would just never call doc_test for such an API that doesn’t exist.

It looks to me a little like we are trying to come up with waterproof code for a situation that rarely happens and has few consequences. E.g. someone interrupts the test run: personally I would not be too surprised if some test functions then didn’t work properly, and I would just rerun things. I get that one has to write code that delivers clean, atomic commits when sending money between bank accounts, but for these docs that might be acceptable. 2nd scenario: some coder registered the API incorrectly; that means there is a dead entry in the json db and the doctest won’t work with that test. Or am I overlooking some more serious consequences here?

Sorry, no cookie. It will be {} only if none of the this_tests calls were made, which is the case now, when only one test module has those calls. Try to think globally and not circumstantially here.

Remember that ideally you should find a way to mark a ‘pytest’ run as 100% completed and without errors, and use that. If such a way exists, use it; if not, invent one. I trust you can, @Benudek.
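pytest does expose a hook that fires once the whole run is over and reports how it ended, which is one way to implement such a marker (a sketch in conftest.py with a hypothetical writer function):

def pytest_sessionfinish(session, exitstatus):
    # exitstatus == 0 means every collected test passed
    if exitstatus == 0:
        write_test_api_db()  # hypothetical writer for test_api_db.json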

You check that the symbol exists: https://docs.python.org/3/library/functions.html#id

python -c 'import fastai; print(id(fastai));'
140380881709000

You will need try/except for when it’s not a Python object.

Also, numbers will always produce a valid id:

python -c 'import fastai; print(id(123));'

so you should exclude this case too. There might be other cases. I’m sure someone already wrote code on SO that covers them all.

Update: no, this won’t work, since a random string will have an id too. Need to research this some more.

But actually, I think you just need to try/except the block where you try to resolve that argument, since it’ll fail if it’s a random string, as it does now.
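A rough sketch of what that guard could look like (hypothetical helper name, not the actual implementation): reject strings and non-callables outright, and let anything that cannot be resolved to a qualified name fail with a clear message.

import inspect

def _resolve_api_name(func):
    if isinstance(func, str):
        raise ValueError(f"this_tests() expects functions/methods, got the string {func!r}")
    if not callable(func):
        raise ValueError(f"this_tests() expects a callable, got {type(func)}")
    try:
        module = inspect.getmodule(func).__name__
        return f"{module}.{func.__qualname__}"
    except AttributeError as e:
        raise ValueError(f"could not map {func!r} to an API name") from e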

@ashaw, the doc_gen probably already contains some code that can check whether a random string can be mapped to a fastai API, no? is_api("this should assert") vs is_api("fastai.Learner.foo")

@stas it’s all likely possible (e.g. coming up with a way to mark pytest as completed), but do we really need it, and are we using our workforce well here?

  • If a test fails, we should still register it imho. In any case, it should be fixed soon.
  • I submitted a small PR to write the dictionary only if it is non-empty. Under normal circumstances that shouldn’t happen, but you reported you had an empty {}. I could not reproduce that locally by interrupting make test, but I hope this check (it’s a solid check anyway) will avoid that.
  • I looked at https://docs.python.org/3/library/collections.html#collections.defaultdict. I understand that in the doctest code we could maybe avoid some lines, but personally that also looked a little hard to read to me. In any case, it wouldn’t impact functionality afaict?
  • Handling incorrect arguments: so essentially we would block coders from writing wrong code. Is that such a likely situation when registering a test with this_tests, and wouldn’t it be a minor bug in any case?

I’m not saying it would be wrong to do all this; it would be perfect! Having said that, maybe we should go for a solid alpha release, settle this open question, start registering tests with this_tests, generate the doc, and then iterate over the code with improvements?

Just my 2 cents; if I’m overlooking the urgency of these adjustments, I’m all ears and will see what I can do! :wink:

It’s because you can’t see the impact an incomplete test_api_db.json will have on our setup that you feel some of those things are OK, but they are not. test_api_db.json must never change in the eyes of git unless this_tests was added/removed somewhere in the tests. Moreover, we have a situation where some tests get skipped by default but can then be run with a specific flag. As such, test_api_db.json will always fluctuate unless we define a very specific way in which it gets generated.

So, it’s possible that instead of trying to dynamically figure out when to update it and when not to, we introduce a new pytest flag (conftest.py), say -doc, which when set will update test_api_db.json. And just as we update all the docs in a routine fashion, it could be added to that routine: run the tests first, then do the global docs update second. And the same for release: the flag will be required to be included in make test-full before making a release.

-doctest is already used by pytest for other things, so maybe pytest -regtestapi?
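In conftest.py such a flag could look roughly like this (hypothetical option name, combined with the all-tests-passed check sketched earlier):

def pytest_addoption(parser):
    parser.addoption("--regtestapi", action="store_true", default=False,
                     help="update test_api_db.json after a fully passing run")

def pytest_sessionfinish(session, exitstatus):
    if session.config.getoption("--regtestapi") and exitstatus == 0:
        write_test_api_db()  # hypothetical writer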

wrt improvements, yes, those can wait. I have never insisted that they should be out there for alpha. But that’s only if you mean alpha includes one test file. If you start putting those calls in other tests, it’d be very helpful to have all the guards in place to avoid needing to hunt down the errors: without checking, it’s too easy to make a mistake and not know you made it. This is for the same reason I suggested we pass real object methods and not strings. But I guess if those checks do get added later, they will flag those mistakes. Still, that’s wasted effort. But it’s absolutely your call enhancement-wise.

So, I understand now where you’re coming from. If this is a given,

then we do indeed need to keep the file stable and should e.g. exclude failed tests. I am not sure why exactly this is needed (so you want to always check test_api_db.json in to git because you need it for the online doc, right? But what would be the big issue if the doc e.g. shows a failed test?), but I am sure you are correct and we should indeed tackle this.

Now to the way to implement this. The static version you describe could be sufficient, but the dynamic part doesn’t seem quite that hard (let’s be optimistic), so let me just try the following.

It seems we just need to access a test report and inspect it in conftest / stop_doctest_collector. As soon as we find failures, we never write the file - that is what you mean, right? No complications with skipped tests and so on?

2 reasons:

  1. it must not become a yo-yo and a hurdle for us developers, where this file keeps changing and we have to commit an ever-changing file, now removing bits, now re-adding them (unless new tests are added; then the commit goes in with the new test)
  2. it will break the release process, which relies on a clean git status

If it’s not under git, then it doesn’t matter. But if you want users to be able to use it, it has to be under git.

Now to the way to implement this. The static version you describe could be sufficient, but the dynamic part doesn’t seem quite that hard (let’s be optimistic), so let me just try the following.

No, let’s change gears and implement a simple pytest flag as I suggested in the last post. Otherwise we will never finish this, and there are probably other situations that we haven’t envisioned. With a flag to save this file, a maintainer has full control over when to update it, which I believe should be when the main docs are updated.

Write test json db only if all tests passed & no empty dictionary

I just checked the dynamic part, and that function, pytest_runtest_makereport (just following the example code), gets called at the end of each individual test, not after the entire test run.

Not exactly the trigger we want (accessing all the results after all tests have run would be ideal), and it seems the name ‘pytest_runtest_makereport’ is also required for pytest to call it. Still, if we just set a flag when things fail we can use that, so I placed that into a PR for your review: https://github.com/fastai/fastai/pull/1673
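The flag-setting approach looks roughly like this (a sketch following the hookwrapper example in the pytest docs, with a hypothetical module-level flag, not the exact PR code):

import pytest

some_test_failed = False  # checked later, before writing test_api_db.json

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    global some_test_failed
    outcome = yield               # let pytest build the report for this test
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        some_test_failed = True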

Good that you were anticipating that issue; that complication was totally unknown to me!
