It could be confusing in a test, where skip has a special meaning. We should probably pick another short name instead, but I can't think of a single word that communicates such a complex thing.
Basically we are trying to say one of two things:
- this tests something that is not a fastai API
- this tests a property and it is not callable
Two very unrelated situations to be expressed in one intuitive word.
this_tests('none') is the only thing that comes to mind, but that's not true either, because the test is testing something.
Looking through synonyms: omit looks intuitive, and probably won't be used as a real function anywhere.
Update: OK, I think I sorted it out. I changed it to this_tests('na'), where na = not applicable, not available, etc. It covers all the bases and is short.
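A rough sketch of what the 'na' sentinel would do during registration. This is illustrative only: the REGISTRY dict and its mapping shape are assumptions for this sketch, not fastai's actual implementation of this_tests.

```python
# Illustrative stand-in for fastai's this_tests registration helper.
# REGISTRY and the api -> [test names] shape are assumptions.
REGISTRY = {}

def this_tests(*testables, test_name=""):
    """Record which API a test covers; the sentinel 'na'
    (not applicable / not available) registers nothing."""
    for t in testables:
        if t == 'na':
            # test covers no fastai API (or a non-callable property)
            continue
        key = getattr(t, '__qualname__', str(t))
        REGISTRY.setdefault(key, []).append(test_name)

this_tests('na', test_name="test_gpu_mem")  # nothing recorded
this_tests(len, test_name="test_len")       # maps 'len' -> ['test_len']
```

The point of the sentinel is that the call site stays uniform: every test calls this_tests exactly once, and 'na' simply short-circuits the registration.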
Pushed some UI updates to master and rebuilt the docs.
Thanks for your suggestions! Followed as closely as I could and definitely looks much better. Send any more improvement ideas my way =)
Started on supporting property attributes like learn.loss_func for show_doc and show_test. It works for now, but there are a lot of edge cases. It'll be an ongoing thing.
That's much better, @ashaw! This definitely works for me UI-wise. Thank you!
Hmm, I swear I remember seeing Jeremy call show_doc in one of the classes, and it didn't look like he needed to import anything. If I grep for show_doc now, it's in none of the course notebooks. Am I mixing it up with some other function?
That's why I thought that if show_doc gets imported with fastai.basics, show_test should be added there too.
Ah, found it: it was doc(func), and it already has [test] built in, so I think that covers it.
We do have an issue with skipped tests. E.g. I now have a few tests that run only on setups with the exact same GPU model as mine, since other models get a very different memory footprint, and I haven't figured out yet how to make those tests generic. So when I run make test-full, it adds this_tests entries for those tests. But when Sylvain runs the same, those tests get skipped, so their entries get removed from the test registry. And then we have a yo-yo.
So my initial hope of just overwriting the test registry was shortsighted. It needs to be updated instead, with a way to purge stale items that are no longer true. Can json be easily updated? In such situations I used to use libdbm or something similar, but I have never done this with json.
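For the "can json be easily updated" question: yes, with a plain read-merge-write cycle. A minimal sketch, assuming the registry maps API names to lists of test names (the file name register_tests.json and the key names below are illustrative, not the actual registry layout):

```python
import json
import os
import tempfile

def update_registry(path, new_entries):
    """Merge new entries into the JSON test registry instead of
    overwriting it, so entries for tests skipped on this machine survive."""
    registry = {}
    if os.path.exists(path):
        with open(path) as f:
            registry = json.load(f)
    for api, tests in new_entries.items():
        # union with whatever is already registered for this API
        registry[api] = sorted(set(registry.get(api, [])) | set(tests))
    with open(path, 'w') as f:
        json.dump(registry, f, indent=2, sort_keys=True)
    return registry

path = os.path.join(tempfile.mkdtemp(), 'register_tests.json')
# run on a machine where the gpu-specific test is skipped:
update_registry(path, {'Learner.fit': ['test_fit']})
# run on a machine where it is not skipped: nothing is lost, only added
reg = update_registry(path, {'Learner.fit': ['test_fit'],
                             'GPUMemTrace': ['test_gpu_mem']})
```

Since the whole file is loaded into a dict and rewritten, this also doubles as an upsert: a missing file simply starts from an empty registry. The purge of stale entries would then be a separate, occasional full rebuild.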
Or would it help to classify tests? These more infra-oriented tests could get special tags when registering with this_tests, and then we handle them accordingly. When run from different platforms, they might not go into the docs at all, or go into a different part, or we just don't write them to that json for now.
I can't see how that would help. We just need to be able to grow the test registry, so that if a test is skipped somewhere it doesn't get removed from the registry, since from the point of view of the user, for whom we created this feature, the test exists.
So it's a purely technical change from overwrite to update (with an occasional overwrite to kill any no-longer-existing entries).
Any tests that are platform-dependent could be marked and just excluded for now, or written to a different json for later consideration. They really are a different kind of test than these functional test-data-plus-asserts ones.
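The tagging idea could be as simple as routing entries by tag at write time. A hypothetical sketch (the 'tags' field and both file names are invented for illustration; this is not how this_tests currently registers anything):

```python
def route_entry(entry):
    """Pick a destination registry file based on an assumed 'tags' field:
    platform-dependent tests go to a separate json for later consideration,
    everything else to the main registry."""
    if 'platform' in entry.get('tags', ()):
        return 'register_tests_platform.json'
    return 'register_tests.json'

# a gpu-model-specific test stays out of the main registry
route_entry({'test': 'test_gpu_mem', 'tags': ['platform']})
# an ordinary functional test goes to the main registry
route_entry({'test': 'test_fit'})
```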
When we start updating, we always need to know when to create a new file (e.g. when none is found), or have some kind of upsert statement.
This is not it. It's the same platform, and you're unaware of some other edge cases, which I can't explain at the moment, that are appearing as we speak. None of those edge cases existed when we designed the spec; they appeared after it was implemented.
Bottom line: we need to be able to update the registry while treating all tests the same.