I'm still a bit confused about the best way to run predictions, as some things shown earlier are no longer working for me…
1 - What is the difference between
a - taking a model that was in training and putting it into eval mode
b - loading an exported pkl model
in terms of having them do `get_preds` (i.e. predict on images)?
Is the inference process identical? How do we get the decode info (i.e. the predicted label) for the exported model - do we have to bring along our own script for this, etc.?
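To make the decoding question concrete, here is a minimal sketch of what "bring along our own script" might look like: mapping raw probabilities back to labels by hand. The `vocab` list and probability rows below are made-up stand-ins for `learn.dls.vocab` and the first tensor returned by `get_preds`:

```python
# Hypothetical stand-ins: in fastai2 the class names live in learn.dls.vocab,
# and get_preds returns per-class probabilities, one row per image.
vocab = ["cat", "dog", "horse"]
probs = [[0.1, 0.7, 0.2],
         [0.8, 0.1, 0.1]]

def decode(probs, vocab):
    # argmax each row, then look the winning index up in the vocab
    return [vocab[max(range(len(row)), key=row.__getitem__)] for row in probs]

print(decode(probs, vocab))  # -> ['dog', 'cat']
```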
2 - How are we supposed to make the test set? I see two ways:
A - as in Walk with fastai2 - just write a function that builds a DataLoader, e.g.:
```python
def get_dl(fnames: list, bs: int = 1):
    "DataLoader for inference with a batch size"
    dsrc = Datasets(fnames, tfms=[PILImage.create])
    after_batch = [IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)]
    return dsrc.dataloaders(after_item=[ToTensor()], after_batch=after_batch, bs=bs)
```
However, when I use this it complains about a missing pbar (progress bar) attribute or about having no `len()`:
`preds = learn.get_preds(dl=test)`, where `test` was made by passing fnames to `get_dl`:
```
fastai2/fastai2/learner.py in __call__(self, event_name)
     23         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     24                (self.run_valid and not getattr(self, 'training', False)))
---> 25         if self.run and _run: getattr(self, event_name, noop)()
     26         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit

~/fastai2/fastai2/callback/progress.py in begin_validate(self)
     25     def begin_train(self): self._launch_pbar()
---> 26     def begin_validate(self): self._launch_pbar()
     27     def after_train(self): self.pbar.on_iter_end()
     28     def after_validate(self): self.pbar.on_iter_end()

~/fastai2/fastai2/callback/progress.py in _launch_pbar(self)
     33     def _launch_pbar(self):
---> 34         self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False)

~/anaconda3/lib/python3.7/site-packages/fastprogress/fastprogress.py in __init__(self, gen, total, display, leave, parent, master)
     17     def __init__(self, gen, total=None, display=True, leave=True, parent=None, master=None):
     18         self.gen,self.parent,self.master = gen,parent,master
---> 19         self.total = len(gen) if total is None else total
     20         self.last_v = 0
     21         if parent is None: self.leave,self.display = leave,display

TypeError: object of type 'DataLoaders' has no len()
```
By contrast, if I load the model up for training, put it into eval mode, and then make a test set by passing the images to `learn.dls.test_dl(test_images)`,
then things work more as expected.
Thus, for an exported model, should we do the same thing as for one flipped to eval?
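If the two paths really are equivalent, I'd expect the exported-model flow to look something like this sketch (untested; `export_path` and `fnames` are placeholders, and the fastai2 import is deferred so the snippet stands alone):

```python
def run_inference(export_path, fnames):
    """Sketch: load an exported fastai2 model, build a test DataLoader
    from raw filenames, run get_preds, and decode via the vocab.
    Assumes fastai2 is installed; export_path and fnames are placeholders."""
    from fastai2.vision.all import load_learner  # deferred import, see lead-in
    learn = load_learner(export_path)
    dl = learn.dls.test_dl(fnames)          # test set from filenames
    preds, _ = learn.get_preds(dl=dl)       # per-class probabilities
    return [learn.dls.vocab[i] for i in preds.argmax(dim=1)]
```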
I'm going to go through some of the source code tonight, which may answer these questions, but any feedback here would be appreciated.