I was about to report the following bug on the issue tracker, but I’m not sure what the community thinks about this sort of bug. I’m teaching classes based on fastai, and I notice my students tripping over various things that don’t crash even though something has definitely gone wrong; in those cases I think fastai *should* crash. But when I read the code I don’t see many existing guards against misuse, so I wonder whether contributing issues or code along those lines would be welcomed or shunned.
Here’s the specific bug report (and here’s another example I did submit earlier):
Try reducing the training set size to less than the batch size, e.g.:

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()

images = get_image_files(path)
images = images[:64]  # fewer items than the batch size below

dls = ImageDataLoaders.from_name_func(
    path, images, valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224), bs=64)

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```
You get a `nan` training loss, but otherwise everything looks normal; a newbie doesn’t know what to do with this.
Neither of the two fits in `fine_tune` did anything at all, because `len(list(dls.train))` is 0 (`dls.train.drop_last` is `True` by default, for a good but subtle reason related to batch normalization). I’d expect this to fail noisily, because we asked the learner to train but it didn’t do anything.
You do get a warning — `/usr/local/lib/python3.7/dist-packages/fastprogress/fastprogress.py:74: UserWarning: Your generator is empty.` — but that doesn’t really say what the problem is (and I’ve seen people hit this problem without getting the warning).
A patch addressing this bug could, for example, raise an exception if no steps were taken in an epoch. But I could imagine pushback against such a patch: (1) it could break some legitimate workflow, or (2) “why would you ever do that?”. But students don’t know not to do that.
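As a sketch of what such a guard could look like — hypothetical names, plain Python, not fastai’s actual training loop:

```python
class EmptyTrainingError(RuntimeError):
    """Raised when an epoch finishes without taking a single optimizer step."""

def run_epoch(batches, step):
    # Count steps; if the dataloader yielded nothing, fail loudly
    # instead of silently reporting a nan loss at the end of the epoch.
    n_steps = 0
    for batch in batches:
        step(batch)
        n_steps += 1
    if n_steps == 0:
        raise EmptyTrainingError(
            "Training epoch ran zero steps. Is your training set smaller "
            "than the batch size while drop_last=True?")
    return n_steps
```

In fastai this would presumably live in the training loop or a callback, but the point is just that an empty epoch becomes an error with a message naming the likely cause, rather than a `nan` loss.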