Am I the only one who feels confused by the function naming choices, and their consistency, in the library?
A few examples:
Learner: opt_func / callback_fns. Why not opt_func / callback_funcs, or opt_fns / callback_fns?
Saving and loading models vs. deploying your model: load / save vs. export / load_learner. Why not import?
predict vs. pred_batch: is the abbreviation really necessary?
lr_find: given that (all?) other methods are verb-first, why lr_find and not find_lr?
The semantics of the transforms module also differ between applications, if I understand correctly: for images they are on-the-fly augmentations, but for text and tabular data they are really pre-processing steps. I discovered this when I wanted to implement on-the-fly permutations of a specific column in a tabular learner, and then realized that what they do is completely different from the image case.
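To make the distinction concrete, here is a minimal plain-Python sketch (not fastai API; the names preprocess and PermuteColumn are made up for illustration) of the two semantics: a pre-processing step permutes the column once, up front, while an on-the-fly transform re-permutes it every time a batch is drawn.

```python
import random

def preprocess(rows, col):
    """Eager semantics (tabular/text-style): permute column `col`
    once over the whole dataset, before training starts."""
    values = [r[col] for r in rows]
    random.shuffle(values)
    return [{**r, col: v} for r, v in zip(rows, values)]

class PermuteColumn:
    """Lazy semantics (image-augmentation-style): permute column `col`
    freshly each time a batch is requested."""
    def __init__(self, col):
        self.col = col

    def __call__(self, batch):
        values = [r[self.col] for r in batch]
        random.shuffle(values)  # a new permutation on every call
        return [{**r, self.col: v} for r, v in zip(batch, values)]
```

With the eager version the model only ever sees one fixed permutation; with the lazy version it sees a different one each epoch, which is what I actually wanted for the tabular learner.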
I would be happy to give this a try and submit a PR with consistency changes, but given that (a) naming is a religious issue for some, and (b) it would break compatibility completely, I wanted to know whether more users feel the same or whether this is a non-issue…
My view is that changing names to make them internally consistent is a good idea. I’d be most happy to look at a PR - although it would need to include a PR for the course-v3 notebooks and docs too, so it’s quite a bit of work!
Could you consider shipping a fairly substantial change like this, which would break a lot of scripts people have written, as a 1.1 release rather than just pushing it out?