Test Registry project

I think inviting tests would be great; that's one of the main (hopefully successful) motivations for this work.

I'm just wondering whether the main how-to from the forum thread should go into the official docs, with the forum then used purely for coordination.

Hope it makes sense

Great suggestions! Just added all these fixes on master

I probably won’t get around to the jumping collapsible UI until Monday though


Can you remind me what Learner.data refers to again?

If you are talking about modules/classes, it should work already:
show_test(Learner)
show_test(vision.data)

I guess that wasn't a good example; please try:

def test_foo(learn):
    learn = fake_learner()
    this_tests(learn.loss_func)

loss_func is an example of a property attribute.
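
For context, here is a minimal sketch (a made-up FakeLearner, not the actual fastai Learner) of what a property attribute is and why it is awkward for this_tests/show_test: the class-level object is a property and is not callable, and instance-level access returns a plain value rather than an API member.

class FakeLearner:
    "Illustrative stand-in; loss_func is exposed as a property attribute."
    def __init__(self, loss_func): self._loss_func = loss_func

    @property
    def loss_func(self): return self._loss_func

learn = FakeLearner(loss_func=None)
print(type(FakeLearner.loss_func))      # <class 'property'>
print(callable(FakeLearner.loss_func))  # False, unlike a regular method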

Great suggestions! Just added all these fixes on master

That looks good. I updated just one doc file with your changes, e.g. https://docs.fast.ai/basic_train.html#Learner.create_opt

I probably won’t get around to the jumping collapsible UI until Monday though

Sounds good, @ashaw

I’m not sure I like:

this_tests('skip')

It could be too confusing in a test, where skip has a special meaning. We should probably pick another short name instead, but I can't think of a good word that communicates such a complex thing.

Basically we are trying to say one of two things:

  1. this tests something that is not a fastai API
  2. this tests a property and it is not callable

Two very unrelated situations to be expressed in one intuitive word.

this_tests('none') is the only thing that comes to mind, but that’s not true either because the test is testing something.

Looking through synonyms: omit looks intuitive, and probably won’t be used as a real function anywhere.

Update: OK, I think I sorted it out. I changed it to this_tests('na'); na = not applicable, not available, etc. It covers all the bases and is short.
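
For reference, a hedged sketch of how 'na' might look in a test (the test name and body here are hypothetical, and this_tests is assumed to be already imported in the test module):

def test_registry_sanity():
    this_tests('na')  # not applicable: exercises test infrastructure, not a fastai API
    assert 1 + 1 == 2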

Yes, all done. We just need to tune up the UI, then we can announce it.

There are a few edge cases in the logic when tests get skipped, but we will sort those out in time.


Great work, thanks for leading and executing this, @stas


Pushed some UI updates to master and rebuilt the docs.

Thanks for your suggestions! I followed them as closely as I could, and it definitely looks much better. Send any more improvement ideas my way =)

I started on supporting property attributes like learn.loss_func for show_doc and show_test. It works for now, but there are a lot of edge cases. It'll be an ongoing thing.


That’s much better, @ashaw! This definitely works for me UI-wise. Thank you!

I started on supporting property attributes like learn.loss_func for show_doc and show_test. It works for now, but there are a lot of edge cases. It'll be an ongoing thing.

Amazing!


Should show_test be auto-imported along with show_doc, so that the user doesn't need to run

from fastai.gen_doc.nbtest import show_test

before being able to use it?

Sounds very reasonable.

To clarify: are you talking about adding doctest to imports here?

Or importing show_test at the top of all the *.ipynb documentation notebooks?

Hmm, I swear I remember seeing Jeremy call show_doc in one of the classes, and it didn't look like he needed to import anything. If I grep for show_doc now, it's in none of the course notebooks. Am I mixing it up with some other function?

That's why I thought that if show_doc gets imported with fastai.basics, we should add show_test there as well.

Ah, found it: it was doc(func), and it has [test] built in already, so I think that covers it.
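
For anyone following along, a small usage sketch; I believe doc lives in fastai.gen_doc.nbdoc, but the exact import path may differ (it is usually already in scope in the notebooks via the star imports):

from fastai.gen_doc.nbdoc import doc    # my best guess at the import; often already available
from fastai.basic_train import Learner

doc(Learner.fit)  # renders the documentation inline, including the [test] entry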

All is perfect then.

So it looks like the UI is complete. Awesome!


We do have an issue with skipped tests. E.g. I now have a few tests that run only on setups with the exact same GPU model as mine, since other models have a very different memory footprint and I haven't figured out yet how to make those tests generic. So when I run make test-full, it adds this_tests entries for those tests. But when Sylvain runs the same thing, those tests get skipped, so their entries get removed from the test registry, and we end up with a yo-yo.

So my initial hope of just overwriting the test registry was shortsighted. It needs to be updated instead, with a way to purge stale items that are no longer true. Can JSON be easily updated? In such situations I used to use libdbm or something similar, but I have never done this with JSON.
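
In case it helps, here is a hedged sketch of updating rather than overwriting a JSON registry; the file name registry.json and the name-to-test-list structure are illustrative, not the actual fastai schema:

import json
from pathlib import Path

def update_registry(path, new_entries):
    "Merge new_entries into the registry on disk instead of overwriting it."
    p = Path(path)
    registry = json.loads(p.read_text()) if p.exists() else {}
    for api_name, tests in new_entries.items():
        known = registry.setdefault(api_name, [])
        for t in tests:
            if t not in known: known.append(t)  # add only entries we have not seen yet
    p.write_text(json.dumps(registry, indent=2, sort_keys=True))
    return registry

update_registry('registry.json', {'Learner.fit': ['test_fit']})  # illustrative call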

Or could it help to classify tests? These more infra-oriented tests could get special tags when registering with this_tests, and then we could handle them accordingly. When run on different platforms, they might not go into the docs at all, or go into a different part, or we just don't write them to that JSON for now.

I saw that the cursor behaves differently when hovering over test vs. source: source gives the hand cursor, while test doesn't?

One of them is a hyperlink, the other is a collapse/uncollapse event handler.

I can't see how that would help. We just need to be able to grow the test registry, so that if a test is skipped somewhere it doesn't get removed from the registry, since from the point of view of the user for whom we created this feature, the test exists.

So it's a purely technical change from overwrite to update (plus an occasional overwrite to purge any entries that no longer exist).

Any tests that are platform-dependent could be marked and just excluded for now, or written to a different JSON file for later consideration. They are really a different kind of test than these functional test-data-plus-asserts ones.
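
For illustration only (not necessarily the right design), marking such a test could look like the sketch below; the platform_dependent marker name is made up and would need registering in pytest's config to avoid warnings.

import pytest

@pytest.mark.platform_dependent  # hypothetical marker for GPU-model-specific tests
def test_gpu_mem_footprint():
    # memory-footprint assertions that only hold on one specific GPU model
    ...

The registry writer could then check for that marker and either skip those entries or route them to a separate JSON file.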

When we start updating, we always need to know when to create a new entry (e.g. when we find no file), or have some kind of upsert statement.

This is not it. It's the same platform, and you're unaware of some other edge cases, which I can't explain at the moment, that are appearing as we speak. None of those edge cases existed when we designed the spec; they appeared only after it was implemented.

Bottom line: we need to be able to update the registry while treating all tests the same.
