Image Segmentation Inference

Hello, I’ve been looking for a solution to do inference with a UNet (segmentation) model. Sadly, I can’t find anything simple and straightforward. Is it really that hard? I just want to load my model and run inference on a custom dataset.

However, everything I find uses an ImageDataBunch, which requires the masks.

data = (SegmentationItemList.from_folder(path_img)
        .split_by_rand_pct()
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), tfm_y=True, size=256)
        .databunch(bs=16)
        .normalize(imagenet_stats))

(from the Inference tutorial).

Any idea how to do inference on a specific folder without passing the masks (which doesn’t make any sense for inference)?

Regards,

Luís

Does this work?

data = (SegmentationItemList.from_folder(path)
                            .split_none()
                            .label_empty()
                            .transform(size=256)
                            .databunch(bs=16)
                            .normalize(imagenet_stats))

I did this:

data = (SegmentationItemList.from_folder('../data2/test/166144_test_001/patches')
                            .split_none()
                            .label_empty()
                            .transform(size=256)
                            .databunch(bs=16)
                            .normalize(imagenet_stats))

and then this:

learn = unet_learner(data, models.resnet34).load('20190708-rn38unet-1224')

And I got the following error:


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 learn = unet_learner(data, models.resnet34).load('20190708-rn38unet-1224')
      2 #learn.export()

/usr/local/lib/python3.6/dist-packages/fastai/vision/learner.py in unet_learner(data, arch, pretrained, blur_final, norm_type, split_on, blur, self_attention, y_range, last_cross, bottle, cut, **learn_kwargs)
    114     meta = cnn_config(arch)
    115     body = create_body(arch, pretrained, cut)
--> 116     model = to_device(models.unet.DynamicUnet(body, n_classes=data.c, blur=blur, blur_final=blur_final,
    117         self_attention=self_attention, y_range=y_range, norm_type=norm_type, last_cross=last_cross,
    118         bottle=bottle), data.device)

/usr/local/lib/python3.6/dist-packages/fastai/basic_data.py in __getattr__(self, k)
    120         return cls(*dls, path=path, device=device, dl_tfms=dl_tfms, collate_fn=collate_fn, no_check=no_check)
    121
--> 122     def __getattr__(self,k:int)->Any: return getattr(self.train_dl, k)
    123     def __setstate__(self,data:Any): self.__dict__.update(data)
    124

/usr/local/lib/python3.6/dist-packages/fastai/basic_data.py in __getattr__(self, k)
     36
     37     def __len__(self)->int: return len(self.dl)
---> 38     def __getattr__(self,k:str)->Any: return getattr(self.dl, k)
     39     def __setstate__(self,data:Any): self.__dict__.update(data)
     40

/usr/local/lib/python3.6/dist-packages/fastai/basic_data.py in DataLoader___getattr__(dl, k)
     18 torch.utils.data.DataLoader.__init__ = intercept_args
     19
---> 20 def DataLoader___getattr__(dl, k:str)->Any: return getattr(dl.dataset, k)
     21 DataLoader.__getattr__ = DataLoader___getattr__
     22

/usr/local/lib/python3.6/dist-packages/fastai/data_block.py in __getattr__(self, k)
    637         res = getattr(y, k, None)
    638         if res is not None: return res
--> 639         raise AttributeError(k)
    640
    641     def __setstate__(self,data:Any): self.__dict__.update(data)

AttributeError: c

Kind regards

learn = load_learner('my_learner')

# Now get the preds
learn.get_preds(data.train_ds.x[0])

You get that error because when you export/save your learner, the databunch information (such as the transforms it uses) is saved along with it. So you cannot create a learner with a different databunch (like the `label_empty` one above) and then load the previous learner object into it.
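For exactly this reason, the usual fastai v1 pattern is to attach the unlabeled images as a test set when loading the exported learner, instead of building a new databunch. A minimal sketch, assuming fastai v1, an `export.pkl` previously created with `learn.export()`, and hypothetical paths:

```python
from fastai.vision import *

# Attach the unlabeled images as a test set when loading the exported
# learner; a test set never needs masks.
test_items = SegmentationItemList.from_folder('path/to/unlabeled/images')
learn = load_learner('path/to/export/dir', test=test_items)

# Batch predictions over the whole test set
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
masks = preds.argmax(dim=1)  # per-pixel class indices
```

The exported pickle carries the training-time transforms and normalization, so the test images are processed the same way the model saw during training.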

It makes sense. Thank you.

It still doesn’t work, though: I had a custom loss function in my model.

def dice_loss(input, target):
    smooth = 1.
    # use the foreground channel only and squash logits to [0, 1]
    input = input[:, 1, None].sigmoid()
    iflat = input.contiguous().view(-1).float()
    tflat = target.view(-1).float()
    intersection = (iflat * tflat).sum()
    return 1 - ((2. * intersection + smooth) / ((iflat + tflat).sum() + smooth))

def combo_loss(pred, targ):
    bce_loss = CrossEntropyFlat(axis=1)
    return bce_loss(pred, targ) + dice_loss(pred, targ)
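For intuition, the Dice term above is just 2·|A∩B| / (|A| + |B|) with a smoothing constant added to numerator and denominator. A plain-Python toy with hypothetical 0/1 masks shows the arithmetic:

```python
def dice_coeff(pred, targ, smooth=1.0):
    # pred, targ: flat lists of 0/1 pixel labels (toy stand-ins for masks)
    intersection = sum(p * t for p, t in zip(pred, targ))
    return (2.0 * intersection + smooth) / (sum(pred) + sum(targ) + smooth)

pred = [1, 1, 0, 0]
targ = [1, 0, 0, 0]
print(1 - dice_coeff(pred, targ))  # dice loss = 1 - (2*1+1)/(2+1+1) = 0.25
```

The smoothing constant keeps the ratio defined (and the gradient non-zero) when both masks are empty.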

Therefore, I get this error:


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 learn = load_learner('../data2/segmentation/images')
      2 #learn.export()

/usr/local/lib/python3.6/dist-packages/fastai/basic_train.py in load_learner(path, file, test, **db_kwargs)
    608     "Load a `Learner` object saved with `export_state` in `path/file` with empty data, optionally add `test` and load on `cpu`. `file` can be file-like (file or buffer)"
    609     source = Path(path)/file if is_pathlike(file) else file
--> 610     state = torch.load(source, map_location='cpu') if defaults.device == torch.device('cpu') else torch.load(source)
    611     model = state.pop('model')
    612     src = LabelLists.load_state(path, state.pop('data'))

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    384         f = f.open('rb')
    385     try:
--> 386         return _load(f, map_location, pickle_module, **pickle_load_args)
    387     finally:
    388         if new_fd:

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
    571     unpickler = pickle_module.Unpickler(f, **pickle_load_args)
    572     unpickler.persistent_load = persistent_load
--> 573     result = unpickler.load()
    574
    575     deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)

AttributeError: Can't get attribute 'combo_loss' on <module '__main__'>

Any idea how to sort this?

Regards

You must have the code for your custom loss defined before loading the learner. Fastai does not serialize the code, only the objects, so unpickling looks the function up by name.
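Concretely, the inference script must define the loss function under the same name (in `__main__` or an importable module) before calling `load_learner`, so unpickling can resolve it. A sketch under that assumption, reusing the definitions from the training code and a hypothetical export path:

```python
from fastai.vision import *

# Same names the pickled learner was trained with
def dice_loss(input, target):
    smooth = 1.
    input = input[:, 1, None].sigmoid()
    iflat = input.contiguous().view(-1).float()
    tflat = target.view(-1).float()
    intersection = (iflat * tflat).sum()
    return 1 - ((2. * intersection + smooth) / ((iflat + tflat).sum() + smooth))

def combo_loss(pred, targ):
    return CrossEntropyFlat(axis=1)(pred, targ) + dice_loss(pred, targ)

# Now unpickling can find 'combo_loss' on <module '__main__'>
learn = load_learner('path/to/export/dir')
```

This is standard pickle behavior, not fastai-specific: pickles store a reference like `__main__.combo_loss`, never the function body itself.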

Thank you very much! I’m really close to fixing this.

However, I tried this:

learn.get_preds(data.train_ds.x[2])

But it does not seem to work, returning:

'DatasetType' object has no attribute 'data'

Why the double brackets in get_preds?

Ahah, that was just a typo, not the cause of the problem.

I could make it work with

learn.predict(data.train_ds.x[2])

Still, I would like to run the whole batch.

Use the get_preds function of the learner.

I did what you suggested, using:

learn.get_preds(data.train_ds.x[2])

but it doesn’t work:

'DatasetType' object has no attribute 'data'

For get_preds you only pass the dataset type (a DatasetType value), not an individual item. You can check the docs for the complete guide.
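In other words, in fastai v1 `learn.predict` is the per-item call and `learn.get_preds` is the per-dataset call, which is why passing `data.train_ds.x[2]` to `get_preds` fails. A short sketch of the distinction, assuming the `learn` and `data` objects from the earlier posts:

```python
from fastai.vision import *

# Per-item: pass one image, get one prediction
# pred_class, pred_idx, outputs = learn.predict(data.train_ds.x[2])

# Per-dataset: pass a DatasetType member, not an item
preds, _ = learn.get_preds(ds_type=DatasetType.Train)
masks = preds.argmax(dim=1)  # one predicted mask per image in the dataset
```

With a test set attached via `load_learner(..., test=...)`, the same call with `DatasetType.Test` runs inference over the whole unlabeled folder in batches.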