V2 images = questions on how to get predictions properly (multiple sources with diff?)

I still have some confusion about the best way to run predictions, as some things shown earlier are no longer working for me…

1 - What is the difference between:
a - taking a model that was in training and putting it into eval mode
b - loading an exported pkl model

in terms of having them run get_preds (i.e. predict on images)?

Is the process for inference identical? How do we get the decode info (i.e. the predicted label) for the exported model - do we have to bring along our own script for this, etc.?

2 - How are we supposed to make the test set? I see 2 ways:

A - like in Walk with fastai2 - just make a function to build a DataLoader, ala:
def get_dl(fnames:list, bs:int=1):
    "Create a DataLoader for inference with a batch size"
    dsrc = Datasets(fnames, tfms=[PILImage.create])
    after_batch = [IntToFloatTensor(), Normalize.from_stats(*imagenet_stats)]
    return dsrc.dataloaders(after_item=[ToTensor()], after_batch=after_batch, bs=bs)

However, when I use this it complains about a missing attribute for the pbar (progress bar) or about there being no len():

preds = learn.get_preds(dl=test), where test was made by passing fnames to get_dl:

fastai2/fastai2/learner.py in __call__(self, event_name)
     23         _run = (event_name not in _inner_loop or (self.run_train and getattr(self, 'training', True)) or
     24                 (self.run_valid and not getattr(self, 'training', False)))
---> 25         if self.run and _run: getattr(self, event_name, noop)()
     26         if event_name=='after_fit': self.run=True #Reset self.run to True at each end of fit

~/fastai2/fastai2/callback/progress.py in begin_validate(self)
     25     def begin_train(self): self._launch_pbar()
---> 26     def begin_validate(self): self._launch_pbar()
     27     def after_train(self): self.pbar.on_iter_end()
     28     def after_validate(self): self.pbar.on_iter_end()

~/fastai2/fastai2/callback/progress.py in _launch_pbar(self)
     33     def _launch_pbar(self):
---> 34         self.pbar = progress_bar(self.dl, parent=getattr(self, 'mbar', None), leave=False)
     35         self.pbar.update(0)

~/anaconda3/lib/python3.7/site-packages/fastprogress/fastprogress.py in __init__(self, gen, total, display, leave, parent, master)
     17     def __init__(self, gen, total=None, display=True, leave=True, parent=None, master=None):
     18         self.gen,self.parent,self.master = gen,parent,master
---> 19         self.total = len(gen) if total is None else total
     20         self.last_v = 0
     21         if parent is None: self.leave,self.display = leave,display

TypeError: object of type 'DataLoaders' has no len()

By contrast, if I load the model up for training, put it into eval mode, and then make a test set by passing the images to learn.dls.test_dl(test_images), then things work more as expected.

Thus, for an exported model, should we do the same thing as for one flipped to eval?

I’m going to go through some of the source code tonight which may answer things better but any feedback here would be appreciated.

@LessW2020 you should make a test_dl() from learn.dls.test_dl and pass in your items. (Look at the other deployment notebook and you’ll see this being done)


ok, thanks - I thought you had said test_dl was going away in favor of the more generalizable dls?

Can you link to the “other notebook” you are referring to, so there’s no confusion :slight_smile:


Sure! And the regular test_dl did go away in favor of learn.dls.test_dl (at the time, that didn’t really exist yet)


ok, this notebook is very helpful, thanks! I have just one question - looking at this code:

imgs = get_image_files(path)
learn = load_learner(path/export_file_name)
dl = test_dl(learn.dls, imgs)
_, __, preds = learn.get_preds(dl=dl, with_decoded=True)
rm -r 'Downloaded_Images'
resultsFile = open('results.csv', 'wb')
wr = csv.writer(resultsFile)

What is this “test_dl” function? Will it apply the validation transforms, and will decode pull the vocab for a label or just an int? And are confidence scores passed back?

Thanks @muellerzr!

test_dl should be learn.dls.test_dl - thought I fixed them all :wink: It will just be a Category or an int; you still need to decode it. But it does use your validation transforms.
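(For reference, here is a minimal plain-Python sketch of that decoding step - the probabilities and the vocab list are hypothetical example values, and it assumes the vocab order matches the prediction columns, which is how fastai lays them out:)

```python
# Raw per-class probabilities for one image (example values, shaped like
# one row of the tensor returned by get_preds)
probs = [0.8150, 0.0348, 0.0220, 0.0258, 0.1023]
# Hypothetical vocab, in the same order as the prediction columns
vocab = ['label1', 'label2', 'label3', 'invalid', 'negative']

# argmax over the probabilities gives the class index...
idx = max(range(len(probs)), key=lambda i: probs[i])
# ...and indexing into the vocab turns it into a label
label = vocab[idx]
print(label, probs[idx])  # label1 0.815
```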


ah ok, that helps a ton, thanks!

I did find the test_dl method and I’ll post it here to help clarify:

def test_dl(self:DataLoaders, test_items, rm_type_tfms=None, **kwargs):
    "Create a test dataloader from test_items using validation transforms of dls"
    test_ds = test_set(self.valid_ds, test_items, rm_tfms=rm_type_tfms) if isinstance(self.valid_ds, Datasets) else test_items
    return self.valid.new(test_ds, **kwargs)

and the tests that go with it:

dsets = Datasets(range(8), [[_Tfm(),_Tfm1()]], splits=[[1,2,5,7],[0,3,4,6]])
dls = dsets.dataloaders(bs=4, device=torch.device('cpu'))
tst_dl = dls.test_dl([2,3,4,5])
test_eq(tst_dl._n_inp, 1)
test_eq(list(tst_dl), [(tensor([ 4, 6, 8, 10]),)])
#Test you can change transforms
tst_dl = dls.test_dl([2,3,4,5], after_item=add1)
test_eq(list(tst_dl), [(tensor([ 5, 7, 9, 11]),)])
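(The core idea - reusing the validation transform pipeline on new items - can be illustrated with a plain-Python toy sketch; this is an assumed simplification, not fastai code, and the real version also handles batching, device placement, etc.:)

```python
# Build a function that applies a list of transforms in order,
# mimicking how the validation pipeline processes each item
def make_pipeline(tfms):
    def apply(x):
        for t in tfms:
            x = t(x)
        return x
    return apply

valid_tfms = [lambda x: x * 2]   # stands in for the _Tfm() transform above
valid_pipeline = make_pipeline(valid_tfms)

# test_dl reuses that same pipeline on the new test items
test_items = [2, 3, 4, 5]
test_batch = [valid_pipeline(x) for x in test_items]
print(test_batch)  # [4, 6, 8, 10] - the same values as the tensor in the test above
```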

one other question if you have time - what is rm_tfms=rm_type_tfms?

I see this mysterious rm_type_tfms all over the test code but never a comment explaining it?

rm_type_tfms will remove some of the transforms in your dataloader. I think 0 is item transforms and 1 is batch transforms, while 2 is both. But I’m unsure. @sgugger?

1 Like

ok, that also helps a bunch, thanks @muellerzr!
My assumption was that it meant "remove transforms", but it was unclear how to use it… so the idea is we can peel off certain transforms that might muck with testing, e.g. rotation? Though normally I don’t have, or see anyone using, augmentation transforms for validation - but I suppose they might with TTA.

btw - how would we bring along our vocab to decode preds for deployment?
Just save dls.vocab.o2i from training, bring it along, and remap after predictions? Will the class order always match for an exported learner?

and I think my very last question - are the preds always confidence scores? In one case I saw my preds did not actually sum to 1, so I wasn’t sure if I was somehow getting loss back instead.

I’ll try and post my deploy code once it’s finalized to help provide some summary from all my questions :slight_smile:

You should be able to do learn.dls.vocab when you port your learner in, but I’m not completely sure. I know we can do with_decoded=True but I’m not sure of its functionality.

IIRC yes - unless you pass in with_decoded, in which case I think it gives the categories. In what situation did they not sum to 1 for each item?

ok, I’ll test learn.dls.vocab on an exported learner tomorrow.
Re: not summing to 1 - last week, when I was trying to do some inference, I might have passed in a messed-up test set.
Today they did all sum to 1, so I think it was a one-off from too much experimentation on deploy.

Just wanted to confirm they are confidence scores and not loss.
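(Side note: for a classifier, get_preds returns softmax outputs, so each row of probabilities should sum to 1 up to floating-point error - which is a quick sanity check you can run on any preds tensor. A stdlib-only sketch of that property, with made-up logit values:)

```python
import math

# Softmax turns arbitrary logits into a probability distribution
def softmax(logits):
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Example logits for one image; the resulting probabilities sum to 1
probs = softmax([2.0, -1.0, 0.5, 0.1, 1.2])
print(sum(probs))  # ~1.0
```

If your preds don't sum to 1 per row, you're likely looking at raw logits or loss values rather than softmax outputs.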

Thanks for the help @muellerzr - I’ll put this to use in the morning and see if I can put it all together for a full deploy scenario.

I did go through the source earlier and that helped tie some things together (i.e. use .predict for a single item, which then calls the fuller .get_preds internally… so at least I see the code commonality now, which helps things make more sense).


No, it is to remove type_tfms, and the integer you pass is the number you want to remove.
This should not be necessary anymore, as the predict/get_preds methods infer the correct value themselves now.
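(As a toy sketch of the idea - dropping the first n transforms from a pipeline before applying it - here is an assumed plain-Python simplification, not the fastai implementation:)

```python
# A pipeline of three toy transforms, applied in order
tfms = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def apply(tfms, x, rm_type_tfms=0):
    # Skip the first rm_type_tfms transforms, then apply the rest
    for t in tfms[rm_type_tfms:]:
        x = t(x)
    return x

print(apply(tfms, 5))                  # all three tfms: ((5+1)*2)-3 = 9
print(apply(tfms, 5, rm_type_tfms=1))  # first tfm removed: (5*2)-3 = 7
```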


Got it! Thank you very much :slight_smile:

Thanks very much @sgugger!

ok, to answer some of my own questions, thanks to the tips above and some debugging today:
1 - An exported model appears to work basically the same as a ‘loaded’ model that was saved.
2 - Use load_learner(path/file) for exported models and of course learn.load(name) for regular models saved during training.
3 - Both .predict and .get_preds ultimately use the same code of setting a callback and running a validation batch:
cb = GatherPredsCallback(with_input=with_input, with_loss=with_loss, **kwargs)

4 - .predict does a much nicer job of unpacking the relevant details, but only works on one image.

So the full prediction code ends up like this for get_preds:

inference_folder = Path.cwd()/'inference'

#exported model predictions - steps

#get images to run
images = get_image_files(inference_folder);images

#get model name
name = 'exported_model_name.pkl'

#load model with file/path
modelex = autosave_path/name;modelex

#load exported model
learn = load_learner(modelex);learn

#pass in images to create test batch
dl = learn.dls.test_dl(images)

#get preds for batch
pred_tensor, ignored, preds = learn.get_preds(dl=dl, with_decoded=True)

#outputs of above
#tensor([[0.8150, 0.0348, 0.0220, 0.0258, 0.1023]])

#category index

#index into vocab to turn int index into label 

#output full dictionary of category index and label
#{'label1': 0, 'label2': 1, 'label3': 2, 'invalid': 3, 'negative': 4}

results = learn.predict(images[0])
#output of results

#'category_name', tensor(0), tensor([0.8150, 0.0348, 0.0220, 0.0258, 0.1023]))

#loop to spit out formatted results from get_preds
for index,item in enumerate(pred_tensor):
    prediction = learn.dls.categorize.decode(np.argmax(item)).upper()
    confidence = max(item)
    percent = float(confidence)
    print(f"{prediction}   {percent*100:.2f}% confidence.   Image = {learn.dl.items[index].name}")

#get file name(s)

#show tested image(s)

related question - is there a way to list out the transforms applied at prediction time, to verify what is happening during prediction?
And possibly to add transforms as well?
I’m specifically thinking of the need to apply a RatioResize transform, as incoming production images will vary in size, whereas training and validation datasets are sometimes all prepped already.

the docs say:
On the validation set, the crop is always a center crop (on the dimension that’s cropped).
Is there a way to override this for test predictions? A center crop in my case may eliminate the very item I’m trying to classify.
I’m working around it by presizing before prediction, but again, having the ability to control what transforms are used at prediction time would be very handy.

@LessW2020 you might find TTA helpful, or doing squish resizing instead of crop resizing. Or you could try ‘pad’ mode.


How do we loop through and show the specific images in the test set?
I see learn.dl.items will give me the list of files that just ran, but I want to view the images exactly as they were shown to the model (i.e. to see if data went missing due to crop resizing, etc.) and to do heatmaps on specific images.
I see learn.dls.valid seems to be live, but when trying to access a specific item within it, it wants a b object.
examples that don’t work:
x = learn.dls.valid_ds[idx] #test_ds etc. all fail
show_at(learn.dls.test_ds, idx);

Anyway, if anyone could advise how to view the test images as seen by the model, that would be a huge help. (learn.show_batch() does show some images, but it’s pulling random images, and I want to see specific images.)
I’ll try to trace through learn.show_batch() as a next step, but any leads/info appreciated!