I’m getting this error on both crestle.ai and Google Colab when working through the Lesson 1 notebook, but using some Kaggle data instead of the dog/cat breeds dataset.
I’m downloading the Kaggle data with:
!kaggle competitions download state-farm-distracted-driver-detection
It downloads fine, and I unzipped the images, which created train and test directories.
I then loaded the data in Jupyter with this:
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=112)
and the images looked reasonable when I inspected them with
data.show_batch(rows=3, figsize=(7,6))
Then I tried to train with:
learn = create_cnn(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
The process ran for a complete epoch, then produced the error spew below as the epoch seemed to finish. I thought the problem might be that the Kaggle data comes with train and test directories but no valid directory, so I copied /test into /valid, but I still got the same error.
Any ideas? It feels like maybe ImageDataBunch didn’t register the valid directory, but I’m not sure how to diagnose that.
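For what it’s worth, here’s a sketch of the kind of sanity check I was attempting, in pure Python with no fastai. The toy directory tree and the `c0`/`c1` class names are just my assumption of the layout `from_folder` expects (one subfolder per class under both train and valid); the real State Farm data has classes c0–c9:

```python
from pathlib import Path
import tempfile

# Build a toy version of the folder layout I believe
# ImageDataBunch.from_folder expects: one subdirectory
# per class under both train/ and valid/.
root = Path(tempfile.mkdtemp())
for split in ("train", "valid"):
    for cls in ("c0", "c1"):
        d = root / split / cls
        d.mkdir(parents=True)
        # empty placeholder files standing in for images
        for i in range(3 if split == "train" else 1):
            (d / f"img{i}.jpg").touch()

# Count image files per split. If valid/ comes back empty in a check
# like this on the real data, the validation DataLoader would be empty
# too, which would match the "expected a non-empty list of Tensors"
# error and the "Your generator is empty" warning.
counts = {split: sum(1 for p in (root / split).rglob("*.jpg"))
          for split in ("train", "valid")}
print(counts)  # {'train': 6, 'valid': 2}
```

I also think `from_folder` accepts a `valid_pct` argument that carves a validation set out of train automatically, which might be the cleaner route here since the Kaggle test images are unlabeled — but I haven’t confirmed that fixes it.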
Here’s the complete spew:
/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py:95: UserWarning: Your generator is empty.
warn("Your generator is empty.")
RuntimeError Traceback (most recent call last)
in ()
----> 1 learn.fit_one_cycle(4)
/usr/local/lib/python3.6/dist-packages/fastai/train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, wd, callbacks, **kwargs)
20 callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor,
21 pct_start=pct_start, **kwargs))
---> 22 learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
23
24 def lr_find(learn:Learner, start_lr:Floats=1e-7, end_lr:Floats=10, num_it:int=100, stop_div:bool=True, **kwargs:Any):
/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
160 callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
161 fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 162 callbacks=self.callbacks+callbacks)
163
164 def create_opt(self, lr:Floats, wd:Floats=0.)->None:
/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
92 except Exception as e:
93 exception = e
---> 94 raise e
95 finally: cb_handler.on_train_end(exception)
96
/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
87 if hasattr(data,'valid_dl') and data.valid_dl is not None:
88 val_loss = validate(model, data.valid_dl, loss_func=loss_func,
---> 89 cb_handler=cb_handler, pbar=pbar)
90 else: val_loss=None
91 if cb_handler.on_epoch_end(val_loss): break
/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in validate(model, dl, loss_func, cb_handler, pbar, average, n_batch)
55 if n_batch and (len(nums)>=n_batch): break
56 nums = np.array(nums, dtype=np.float32)
---> 57 if average: return (to_np(torch.stack(val_losses)) * nums).sum() / nums.sum()
58 else: return val_losses
59
RuntimeError: expected a non-empty list of Tensors