"Generator is empty" when running .get_preds()

I save a model after fine-tuning a cnn learner on the Pets dataset with:
learn.export("pets.pkl")

I load it back with:
learn_inf = load_learner('pets.pkl')

It loads the .pkl file and I can make single predictions with no errors:
learn_inf.predict(image)

But when I try to make predictions on the validation set as follows:
learn_inf.get_preds()

I get:
/opt/conda/envs/fastai/lib/python3.8/site-packages/fastprogress/fastprogress.py:74: UserWarning: Your generator is empty.
warn("Your generator is empty.")

Why is my generator empty?

You have to pass a dl to use get_preds. There's an example here.
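
For example, something along these lines should work, using a test DataLoader built from the exported learner (just a sketch: it assumes the pets.pkl exported above and the standard Pets path from untar_data):

from fastai.vision.all import *

path = untar_data(URLs.PETS)                 # same dataset as above (assumed)
learn_inf = load_learner('pets.pkl')         # loads on the CPU by default

files = get_image_files(path/'images')       # the images to predict on
test_dl = learn_inf.dls.test_dl(files)       # DataLoader that reuses the saved transforms
preds, _ = learn_inf.get_preds(dl=test_dl)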


I forgot to mention that I also tried:
learn_inf.get_preds(dl=dls)

where dls was defined from a DataBlock:

pets = DataBlock(blocks=(ImageBlock, CategoryBlock),
                 get_items=get_image_files,
                 splitter=RandomSplitter(seed=42),
                 get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
                 item_tfms=Resize(460),
                 batch_tfms=aug_transforms(size=224, min_scale=0.75))

dls = pets.dataloaders(path/"images")

When I do that, I get the following error:

"AttributeError: 'NoneType' object has no attribute 'mean'"

Could you post the stack trace, or is it only the one line "Your generator is empty"?

It seems that I needed to pass a dl in .get_preds(), such as learn_inf.get_preds(dl=dls), to avoid the "generator is empty" error. When I do that I get another error: "AttributeError: 'NoneType' object has no attribute 'mean'".

Full trace …


RuntimeError                              Traceback (most recent call last)
/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try: self(f'before_{event_type}') ;f()
    156         except ex: self(f'after_cancel_{event_type}')

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in _do_one_batch(self)
    163     def _do_one_batch(self):
--> 164         self.pred = self.model(*self.xb)
    165         self('after_pred')

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
    116         for module in self:
--> 117             input = module(input)
    118         return input

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
    116         for module in self:
--> 117             input = module(input)
    118         return input

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/conv.py in forward(self, input)
    418     def forward(self, input: Tensor) -> Tensor:
--> 419         return self._conv_forward(input, self.weight)
    420

/opt/conda/envs/fastai/lib/python3.8/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
    414                         _pair(0), self.dilation, self.groups)
--> 415         return F.conv2d(input, weight, self.bias, self.stride,
    416                         self.padding, self.dilation, self.groups)

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 learn_inf.get_preds(dl=dls)

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, n_workers, **kwargs)
    233             if with_loss: ctx_mgrs.append(self.loss_not_reduced())
    234             with ContextManagers(ctx_mgrs):
--> 235                 self._do_epoch_validate(dl=dl)
    236                 if act is None: act = getattr(self.loss_func, 'activation', noop)
    237                 res = cb.all_tensors()

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in _do_epoch_validate(self, ds_idx, dl)
    186         if dl is None: dl = self.dls[ds_idx]
    187         self.dl = dl
--> 188         with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
    189
    190     def _do_epoch(self):

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    153
    154     def _with_events(self, f, event_type, ex, final=noop):
--> 155         try: self(f'before_{event_type}') ;f()
    156         except ex: self(f'after_cancel_{event_type}')
    157         finally: self(f'after_{event_type}') ;final()

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in all_batches(self)
    159     def all_batches(self):
    160         self.n_iter = len(self.dl)
--> 161         for o in enumerate(self.dl): self.one_batch(*o)
    162
    163     def _do_one_batch(self):

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in one_batch(self, i, b)
    177         self.iter = i
    178         self._split(b)
--> 179         self._with_events(self._do_one_batch, 'batch', CancelBatchException)
    180
    181     def _do_epoch_train(self):

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in _with_events(self, f, event_type, ex, final)
    155         try: self(f'before_{event_type}') ;f()
    156         except ex: self(f'after_cancel_{event_type}')
--> 157         finally: self(f'after_{event_type}') ;final()
    158
    159     def all_batches(self):

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in __call__(self, event_name)
    131     def ordered_cbs(self, event): return [cb for cb in sort_by_run(self.cbs) if hasattr(cb, event)]
    132
--> 133     def __call__(self, event_name): L(event_name).map(self._call_one)
    134
    135     def _call_one(self, event_name):

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/foundation.py in map(self, f, *args, **kwargs)
    381              else f.format if isinstance(f,str)
    382              else f.__getitem__)
--> 383         return self._new(map(g, self))
    384
    385     def filter(self, f, negate=False, **kwargs):

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/foundation.py in _new(self, items, *args, **kwargs)
    331     @property
    332     def _xtra(self): return None
--> 333     def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs)
    334     def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None)
    335     def copy(self): return self._new(self.items.copy())

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/foundation.py in __call__(cls, x, *args, **kwargs)
     45             return x
     46
---> 47         res = super().__call__(*((x,) + args), **kwargs)
     48         res._newchk = 0
     49         return res

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/foundation.py in __init__(self, items, use_list, match, *rest)
    322         if items is None: items = []
    323         if (use_list is not None) or not _is_array(items):
--> 324             items = list(items) if use_list else _listify(items)
    325         if match is not None:
    326             if is_coll(match): match = len(match)

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/foundation.py in _listify(o)
    235     if isinstance(o, list): return o
    236     if isinstance(o, str) or _is_array(o): return [o]
--> 237     if is_iter(o): return list(o)
    238     return [o]
    239

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastcore/foundation.py in __call__(self, *args, **kwargs)
    298         if isinstance(v,_Arg): kwargs[k] = args.pop(v.i)
    299         fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:]
--> 300         return self.fn(*fargs, **kwargs)
    301
    302 # Cell

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in _call_one(self, event_name)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name), event_name
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138
    139     def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in <listcomp>(.0)
    135     def _call_one(self, event_name):
    136         assert hasattr(event, event_name), event_name
--> 137         [cb(event_name) for cb in sort_by_run(self.cbs)]
    138
    139     def _bn_bias_state(self, with_bias): return norm_bias_params(self.model, with_bias).map(self.opt.state)

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/callback/core.py in __call__(self, event_name)
     42                (self.run_valid and not getattr(self, 'training', False)))
     43         res = None
---> 44         if self.run and _run: res = getattr(self, event_name, noop)()
     45         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit
     46         return res

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in after_batch(self)
    440         if len(self.yb) == 0: return
    441         mets = self._train_mets if self.training else self._valid_mets
--> 442         for met in mets: met.accumulate(self.learn)
    443         if not self.training: return
    444         self.lrs.append(self.opt.hypers[-1]['lr'])

/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in accumulate(self, learn)
    377     def accumulate(self, learn):
    378         bs = find_bs(learn.yb)
--> 379         self.total += to_detach(learn.loss.mean())*bs
    380         self.count += bs
    381     @property

AttributeError: 'NoneType' object has no attribute 'mean'

Hi, Aksel!

This makes it work for me:

learn_inf.get_preds(dl=dls.cpu().valid)

So, I had to do two things to make get_preds work.

  1. You are using dl=dls, but you want dl=dls.valid. dls is a DataLoaders object, i.e. two DataLoaders (train and valid), not one, and get_preds expects a single DataLoader.
  2. Also, load_learner loads your learner on the CPU, not the GPU/cuda, but your dls are on the GPU (I guess because we did that at the beginning of the lesson). So you either move your model to the GPU (you can do this with learn_inf.model.cuda()), or you move your dls to the CPU with the cpu() method; see the sketch right after this list.
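
Putting the two together, either of these should work (just a sketch using the names from above):

# option 1: keep the loaded model on the CPU and move the DataLoaders there too
preds, targs = learn_inf.get_preds(dl=dls.cpu().valid)

# option 2: move the model to the GPU instead and use the validation DataLoader as-is
learn_inf.model.cuda()
preds, targs = learn_inf.get_preds(dl=dls.valid)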

Make sense? Did it work for you too?


Yes it works!

Another way I made it work was to pass cpu=False in the load_learner call and then pass dls.valid to get_preds():

learn_inf = load_learner('pets.pkl', cpu=False)
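
and then the predictions call is roughly:

preds, targs = learn_inf.get_preds(dl=dls.valid)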

Thanks Andres!


Hello everyone,

I'm trying to run inference for image classification on images after training a model and saving it with:
learn.export('.pkl')
Then I load a new learner with :
load_learner(pkl, cpu=False)

But then when I’m trying to do learn.predict(tensor)

I get this error:

AttributeError: decode_batch

Has anyone faced this already? Or any idea how to solve it?

How did you build your original DataLoaders? With raw torch? Or did you use the fastai DataBlock API?

Like this:
def openIMG(path):
    path = str(path)
    data = geo.Raster().asArray(path)
    data = np.reshape(data, [data.shape[2], data.shape[0], data.shape[1]])
    tensor = torch.Tensor(data)
    return tensor

def createDLS(pathIn, bs, chan, label_func):
    db = DataBlock(blocks=(TransformBlock(openIMG), CategoryBlock),
                   get_items=get_image_files,
                   get_y=label_func)
    ds = db.datasets(pathIn)
    dls = DataLoader(ds, bs=bs, num_workers=0)
    return dls