Image segmentation on new images (lesson 3, beginner)

Hello, a beginner question here. I’m trying to apply the CamVid segmentation model from lesson 3 to a dashcam (a camera mounted on the windscreen of a car) video, but I can’t figure out how to properly feed the new frames to a trained model. My goal is to eventually create a video where only cars are visible (all other segments are colored white).
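The whitening step itself seems straightforward once I have a per-pixel class mask for a frame. Here is a rough sketch of what I mean, with dummy data standing in for the real inputs (the actual mask would come from the model, and the "car" class id depends on the codes list):

```python
import numpy as np

# Hypothetical example: a small RGB frame and a predicted class mask.
# In the real pipeline the mask comes from the segmentation model and
# CAR is whatever index "Car" has in the codes list.
CAR = 2
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.int64)
mask[1:3, 1:3] = CAR                     # pretend the centre pixels are a car

out = np.full_like(frame, 255)           # start from an all-white frame
out[mask == CAR] = frame[mask == CAR]    # copy back only the car pixels
```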

Following the lesson 3 notebook, I’d like to create a new folder ‘data/camvid/test’ with all the dashcam video frames. These images have no labels, and I assume no transformations or normalization should be applied to them(?). The part I get lost at is creating a data bunch from the new, unlabeled images. Could you point me in the right direction?

Doing predictions on new items is surely a frequently asked question, but I couldn’t find an example that matches the segmentation case. I’m unfortunately still quite new to the library and unable to adapt the examples from other types of problems to this one. Apologies for that.

Many thanks!

You can pass a test set directly to the databunch.

Alternatively, you can add one at any time to an already created databunch object:

add_test [source]

add_test(items:Iterator[T_co], label:Any=None)

Add the items as a test set. Pass along label, otherwise label them with EmptyLabel.


Thanks for the reply! I tried adding the test set (just one item for now), but I get an error message about the missing labels:


It’s not possible to apply those transforms to your dataset: Not implemented: you can’t apply transforms to this type of item (EmptyLabel)

If I just add a random image as a dummy label, it gives no error and seems to be fine. Now I get stuck at running the prediction:

learn.predict(data, is_test=True)

AttributeError: apply_tfms

I think I’m still missing something crucial here

Hello @eljas1, I ran into a similar issue and was able to get around it by adding the test set before the transformations when building the learner.

test_images = ["data/test/{}.png".format(x) for x in range(100)]

src = (SegmentationItemList.from_df(img_df, path='data/train', cols='Image', convert_mode='L')
       .split_none()
       .label_from_func(get_label, classes=codes)
       .add_test(test_images))

data = (src.transform(transforms, size=size, tfm_y=False)
        .databunch(bs=bs)
        .normalize(imagenet_stats))

Every time I tried to add them after I would get the tfms error.


Hi @sariabod, and thanks for the suggestion! I tried adding it in the initial src, before defining the learner, but the same problem still persists with the code below.

src = (SegmentationItemList.from_folder(path_img)
        .label_from_func(get_y_fn, classes=codes)
        .add_test(test_images))
data = (src.transform(get_transforms(), size=size, tfm_y=True)
        .databunch(bs=bs))

It’s not possible to apply those transforms to your dataset: Not implemented: you can’t apply transforms to this type of item (EmptyLabel)

Could we be using different versions of Fastai? I’m running the course version on Colab.

I found a way to do predictions on single images. As a back-up plan I could iterate through the test folder images one by one, though I assume this would be much slower than integrating a test set into the data bunch. The single-image prediction code is so simple it makes me feel stupid for not figuring it out earlier. Here are the three lines to predict and show the segmentation result:

img = open_image(path_test/'car.png')
prediction = learn.predict(img)
prediction[0].show()  # predict returns a tuple; the first element is the predicted mask

Still searching for the right solution for larger scale though.


Hello @eljas1 , I am using 1.0.50.post1, before this I was on 1.0.47 (it was working on this version as well). To get batch predictions on the test set I use:

t_preds = learn.get_preds(DatasetType.Test)
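The first element of what get_preds returns holds per-pixel class scores with shape (batch, n_classes, height, width), so turning the batch into integer masks is just an argmax over the class dimension. A minimal sketch, with a dummy numpy array standing in for the model output (in fastai the predictions come back as a torch tensor, where the equivalent step would be preds.argmax(dim=1)):

```python
import numpy as np

# Dummy stand-in for the first element of learn.get_preds(DatasetType.Test):
# per-pixel class scores of shape (batch, n_classes, height, width).
preds = np.zeros((2, 3, 4, 4))
preds[0, 1] = 1.0   # every pixel of image 0 scores highest for class 1
preds[1, 2] = 1.0   # every pixel of image 1 scores highest for class 2

# One integer class id per pixel: argmax over the class dimension.
masks = preds.argmax(axis=1)   # shape (batch, height, width)
```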

@sariabod Hey, ever find a way to add segmentation test data after the fact? I’m using a SageMaker endpoint for inference so I can’t recreate the learner each time I want to predict on a new dataset.

Hello @austinmw, I do not fully understand the issue. If you’re not building a learner, how are you doing inference? Can you share some code or elaborate? I have not used SageMaker before and do not understand its limitations.

@sariabod Hi, with SageMaker you have a model function that loads a model (or in this case a learner) when an HTTP endpoint is created, then a predict function you call against it by sending image batch payloads in real time. So you don’t want to recreate the learner object every time a batch comes in.

@austinmw this sounds like something you would deploy to production. I have not needed to add a test set on a prod deployment before. Do you have access to the learner? If so, adding a test set should be simple.

Sorry, my SageMaker experience is nonexistent at this point.


Thanks, was having issues adding a segmentation test set with no labels or splits, but I think it was bugged and now fixed on master :grin:

Has anyone found a workable solution that does not process one image at a time? If so, could you share your code? Thanks!!!

I still get the transformation error when adding a test set to the data object. Could somebody help with this?

It says:

Exception: It’s not possible to apply those transforms to your dataset:
Not implemented: you can’t apply transforms to this type of item (EmptyLabel)

My version is:

Hello, I have added a test set to the data and trained the image segmentation model, but when I try showing the prediction results it shows batch predictions on images that are not in the test folder. Also, how can I get an output mask instead of a visualized labeled prediction? I can get predictions by iterating over the images in a test folder one by one, but I want to do them all together.

test_preds = learn.get_preds(ds_type=DatasetType.Test)

@sariabod @austinmw