Inference using load_learner

Thank you very much.
The size parameter doesn't seem to be in the tfms list. Could it work with any size of input picture, without resizing?
When training the model I want to resize the input, but when I test and validate I want it to keep the original input size. What can I do?

You can’t batch images if they are not of the same size.

I run test and validation without batching, just one image at a time. Or you can assume there is no validation set in my model: I just train and then predict (I will compute the metrics myself). How can I set up the dataset for that?

Just create a different databunch for your test data then. There is no point having it in the same databunch as the training set since you don’t want to validate on it.

I get it. Thank you very much!

You can pass tfm_y=False when adding your test data to avoid having those transforms (like resize) applied to the targets, but note that your predictions won’t match your original images (if you resize, for instance, they will have the resized size).
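
To make that concrete, here is a minimal sketch with the data block API; path_img, path_test, get_y_fn and codes are placeholders for your own setup, and add_test is chained after transform so its tfm_y=False is not overridden:

# Sketch only: paths, get_y_fn and codes stand in for your own setup.
data = (SegmentationItemList.from_folder(path_img)
        .split_by_rand_pct(0.2)
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), size=128, tfm_y=True)
        # tfm_y=False keeps the y-transforms off the (empty) test labels
        .add_test(SegmentationItemList.from_folder(path_test), tfm_y=False)
        .databunch(bs=4))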

Just to clarify, I don’t think this approach works with load_learner().

I have tried the following:

test_src = SegmentationItemList.from_df(test, "myPath/test", cols='im_id')
learn = load_learner("myPath", test=test_src, tfm_y=False)

FastAI still tries to apply transforms to EmptyLabelList.

What’s happening is:

  1. I call load_learner()
  2. load_learner() creates a LabelLists called src with train/valid settings inherited from training
  3. load_learner() calls add_test() (Notably we cannot pass in tfm_y here)
  4. add_test() creates a test set from our valid set
  5. When creating the test set, tfm_y is None, so we default to the valid set’s tfm_y, which is True

The end result is that we end up trying to apply transforms to the test set.

If I copy and paste load_learner into my own project and modify the call to add_test() to include tfm_y=False, it works as I would expect.
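
For reference, here is a sketch of that modified copy, paraphrased from memory of the fastai 1.0.x source (double-check it against your installed version before relying on it); the only intended change is forwarding tfm_y to add_test():

import torch
from pathlib import Path
from fastai.basic_train import load_callback
from fastai.data_block import LabelLists
from fastai.torch_core import defaults

def load_learner_tfm_y(path, file='export.pkl', test=None, tfm_y=None, **db_kwargs):
    # Same flow as fastai's load_learner; only add_test() gains tfm_y.
    source = Path(path)/file
    state = (torch.load(source, map_location='cpu')
             if defaults.device == torch.device('cpu') else torch.load(source))
    model = state.pop('model')
    src = LabelLists.load_state(path, state.pop('data'))
    if test is not None: src.add_test(test, tfm_y=tfm_y)  # the one-line change
    data = src.databunch(**db_kwargs)
    cb_state = state.pop('cb_state')
    clas_func = state.pop('cls')
    res = clas_func(data, model, **state)
    res.callback_fns = state['callback_fns']  # avoid duplicate callbacks
    res.callbacks = [load_callback(c, s, res) for c, s in cb_state.items()]
    return res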

Yes, that should be an argument in the signature of load_learner. You should make a PR with that change.

Sounds good, I’ll make it later today.

A related question: why is GPU memory consumed after using load_learner to load an exported trained model? (For now I am running inference on my GPU server for my experiments.) Is there a way to force CPU usage when calling load_learner?
Thank you

There is a flag to force-load your model on the CPU; by default it loads on the same kind of device it was saved from.

If I may also ask an additional question: according to this post, Load_learner on CPU throws "RuntimeError('Attempting to deserialize object on a CUDA)", there is no longer a cpu argument (like the one described in the docs: https://docs.fast.ai/basic_train.html#load_learner), so what is the appropriate way to set this flag?

Oh indeed, you have to change defaults.device to torch.device('cpu').
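
In code (fastai v1) that looks like the snippet below; 'myPath' is a placeholder for the folder holding your exported export.pkl:

import torch
from fastai.vision import *  # fastai v1: brings in defaults and load_learner

defaults.device = torch.device('cpu')  # set before loading to force CPU
learn = load_learner('myPath')         # export.pkl is now deserialized on the CPU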

fastai version: 1.0.60.dev0

I’m trying to display test set images with their predicted labels:

learn = load_learner(modelpath, file='resnet34-acc.82775', 
                     test=ImageList.from_folder(path/'test'))
preds = learn.TTA(ds_type=DatasetType.Test)
learn.show_results(rows=3,ds_type=DatasetType.Test,figsize=(6,6))

renders the following image:
[screenshot: grid of test images, each titled "Image(10,)"]

Questions:

  1. Image(10,) title – a bug or my fault?

  2. Is there a way to show/specify predicted labels?

Thanks for any hint/advice.

You can’t use show_results with a test dataset; it has no labels.
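
A workaround sketch (not a show_results feature, just manual plotting with matplotlib): get the test-set predictions with get_preds and title each image with its predicted class yourself:

import matplotlib.pyplot as plt

preds, _ = learn.get_preds(ds_type=DatasetType.Test)
labels = [learn.data.classes[i] for i in preds.argmax(dim=1)]

fig, axes = plt.subplots(3, 3, figsize=(6, 6))
for ax, (img, _), label in zip(axes.flatten(), learn.data.test_ds, labels):
    img.show(ax=ax, title=label)  # fastai Image.show accepts an axis and a title
plt.tight_layout()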

Has anyone figured out a way to load an unlabeled segmentation dataset with load_learner without triggering the tfm_y assertion error?

I am using
learn = load_learner(path_learner, 'export.pkl', test = SegmentationItemList.from_folder($TEST_DIR))

Can I not add a test set using load_learner? I’m trying to put the model into production and would rather not set up the whole databunch and learner if possible, since load_learner is much faster.

Right now I am using

for i in range(curr):
    img = open_image(f'{test_dir}/{i}.png')
    img = img.resize(128)                     # resize to the training size
    mask_pred = learn.predict(img)            # (ImageSegment, class tensor, raw preds)
    mask_pred[0].save(f'{save_mask_dir}/{i}_mask.png')  # save the predicted mask

Is there a way to get, from a loaded learner, the name of the model file it is using? For example, a process loads a learner with learn = load_learner(model_path, 'model.pkl') and then uses learn.predict to make predictions. At a later point in time, is there a way to retrieve from learn the model file name it was loaded from? Thanks

If you check the source code of load_learner, it does not store the model’s name (the file parameter) on the Learner object. However, you can!

learn = load_learner(modelpath, file='resnet34-acc.82775', test=ImageList.from_folder(path/'test'))
learn.file = 'resnet34-acc.82775'

Voila!

Thank you for your reply, but it is not working. It gives AttributeError: 'LanguageLearner' object has no attribute 'file'

You need to assign the value to the file attribute first; Python will then add this attribute to the object, and it becomes available.

learn = load_learner(modelpath, file='resnet34-acc.82775', 
                     test=ImageList.from_folder(path/'test'))
learn.file = 'resnet34-acc.82775'

print(learn.file)
>>> resnet34-acc.82775

Otherwise, sorry, I may not have understood your task.

Thank you, I see what you mean. Yes this way it is working, thank you!