Lesson 3 - unet_learner Segmentation inference

Hi,

I’ve been trying to understand how to run my model on new images:

What I’ve gotten so far:

I’ve trained a U-Net model (here’s the relevant code):

class SegLabelListCustom(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True)

class SegItemListCustom(SegmentationItemList):
    _label_cls = SegLabelListCustom

src = (SegItemListCustom.from_folder(path_img)
        .random_split_by_pct()
        .label_from_func(get_y_fn, classes=codes))
tfms = get_transforms(flip_vert=True, max_warp=0, max_zoom=1.2, max_lighting=0.3)
data = (src.transform(tfms, size=SIZE, tfm_y=True)
        .databunch(bs=6)
        .normalize(imagenet_stats))

wd=1e-2

def acc_camvid(input, target):
    target = target.squeeze(1)
    return (input.argmax(dim=1)==target).float().mean()

learn = unet_learner(data, models.resnet34, metrics=acc_camvid, wd=wd).to_fp16()

NOTE: SIZE is 512px, and I’ve made all training images (and masks) 512×512 squares by padding the sides whenever the original aspect ratio was not 1:1.
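For reference, the padding step can be sketched roughly like this (`pad_to_square` is just my own plain-numpy helper, not a fastai function; for masks you’d pass a background class index as the fill value):

```python
import numpy as np

def pad_to_square(img: np.ndarray, size: int = 512, fill: int = 0) -> np.ndarray:
    """Center an HxWxC (or HxW) image on a size x size canvas filled with `fill`.
    Assumes both sides are already <= `size`; no resizing is done here."""
    h, w = img.shape[:2]
    top = (size - h) // 2
    left = (size - w) // 2
    out = np.full((size, size) + img.shape[2:], fill, dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out
```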

Now I’m trying to get the model to show results on a new image.

learner = load_learner("dataset/resized_images").to_fp16()
img_folder = Path("/images")
images = get_image_files(img_folder)
r_img, h, w = resize_image(images[1])
result = learner.predict(r_img)

but I get an error

AttributeError: 'numpy.ndarray' object has no attribute 'apply_tfms'

NOTE: I do size matching for each image myself adding padding when necessary before images go through the model
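The error itself makes sense once you see what fastai does internally: `predict` calls `apply_tfms` on the item it receives, and a raw numpy array (which is what my `resize_image` returns) has no such method. A minimal demonstration:

```python
import numpy as np

arr = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for what resize_image returns
# fastai's predict() pipeline calls item.apply_tfms(...); a plain ndarray has no
# such attribute, which is exactly the AttributeError above
print(hasattr(arr, "apply_tfms"))  # False
```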

I have 2 questions regarding this error:

  1. What data type should the image be in when passing it to the predict method?
  2. How do I tell the model that no transforms are necessary, since I do the pre-processing myself? From googling around and reading this forum, I understand the problem is that the transforms were exported along with the model and are now applied to every image that goes through it. Since the only transformations I want (flips, etc.) apply to the training/validation sets, I would like to turn all transforms off when running inference.

In the documentation I found how to correctly pass an image to predict:

sim = r_img.astype(np.float32)
im = Image(torch.from_numpy(sim))
result = learner.predict(im)
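One caveat with this conversion (an assumption on my part, based on fastai v1’s `Image` wrapping a channels-first float tensor in [0, 1]): if `r_img` is an HxWxC uint8 array, it needs a transpose and a divide by 255 first, something like:

```python
import numpy as np

# hypothetical HxWxC uint8 image, as typical loaders return it
r_img = np.zeros((512, 512, 3), dtype=np.uint8)

# fastai v1's Image expects a CxHxW float tensor with values in [0, 1]
chw = r_img.transpose(2, 0, 1).astype(np.float32) / 255.0
# then: im = Image(torch.from_numpy(chw)); result = learner.predict(im)
```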

So that answers question 1. But now, when running the predict method, my Python kernel keeps dying without any explanation. I’m running an 8GB Nvidia RTX 2070, and checking VRAM usage, that doesn’t seem to be the problem (especially since I can train the model with no problem). Is there a good way to debug this issue?

After some further research I’ve found the problem (although silently killing the kernel is not the best way for it to surface). Please see the github issue for the solution.