Thank you for clarifying; now that I've read the sample notebook I can clearly see what you are proposing.
I think your idea is really really cool, but I'd love to see it actually working and resulting in PRs with new tests. The hard part is not writing a skeleton, it's knowing what to test and what the good and the bad results look like.
I personally find it way easier to copy an existing test and adjust it to my needs. I would have a very hard time making sense of a skeleton that, as in your sample, gives me:
instance = ItemList(items: Iterator, path: Union[pathlib.Path, str] = '.', label_cls: Callable = None, xtra: Any = None, processor: Union[fastai.data_block.PreProcessor, Collection[fastai.data_block.PreProcessor]] = None, x: 'ItemList' = None, ignore_empty: bool = False)
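To make the contrast concrete, here is roughly what I mean by "copy an existing test and adjust it": a tiny hand-written check where both the inputs and the expected outcome are spelled out by a person. This is only a sketch; the toy items and the asserted values are made up for illustration, not taken from the real test suite.

    from fastai.data_block import ItemList

    def test_itemlist_from_items():
        items = ['a.txt', 'b.txt', 'c.txt']   # toy inputs, purely illustrative
        il = ItemList(items)                   # relies on the defaults shown in the signature above
        assert len(il) == 3                    # a human decided this is the "good" result
        assert il.items[0] == 'a.txt'

A generator can produce the constructor call from the signature, but the two assert lines, i.e. deciding what counts as correct, are exactly the part it can't fill in for me.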
For the same reason I rarely look up the fastai API docs, since I can't make any sense of those entries. They are written for computers, not for human consumption. It's possible to eventually work out what such a declaration says, but instead I read tutorials, search the forums, and grep the code, notebooks, and tests to understand what a function does.
But please, please don't let my gripes discourage you from innovating and creating amazing things; we are all different, and different things work for different people.
I really don't care if we have 20 different ways, guides, autogenerators, magic potions, and whatnot, as long as the result is a solid test suite that lets us make a confident new release at any point in time, instead of letting new code sit and wait for brave early adopters to do the validation that a test suite should be doing.
For example, we had to recall the last release because our test suite missed an important regression.