Hello, a beginner question here. I’m trying to apply the camvid segmentation of lesson 3 on a dashcam (a camera mounted on the windscreen of a car) video but I can’t figure out how to properly input the new frames to a trained model. My goal is to eventually create a video where only cars are visible (all other segments are colored white).
Following the lesson 3 notebook, I’d like to create a new folder ‘data/camvid/test’ with all the dashcam video frames. These images have no labels, and I assume they should not have any transformations or normalization applied to them(?). The part I get lost at is creating a data bunch with the new, unlabeled images. Could you point me in the right direction?
Doing predictions on new items is surely a frequently asked question, but I couldn’t find an example that matches the segmentation problem. I’m unfortunately still quite new to the Fast.ai library and unable to adapt the examples from other types of problems to this one. Apologies for this.
Hello @eljas1, I ran into a similar issue and was able to get around it by adding the test set before the transformations when building the learner.
```python
# Hold out the unlabeled frames as a test set
test_images = ["data/test/{}.png".format(x) for x in range(100)]

src = (SegmentationItemList.from_df(img_df, path='data/train',
                                    cols='Image', convert_mode='L')
       .split_none()
       .label_from_func(get_label, classes=codes)
       .add_test(test_images))          # add the test set BEFORE the transforms

data = (src.transform(transforms, size=size, tfm_y=False)
        .databunch(bs=bs)
        .normalize(imagenet_stats))
```
Every time I tried to add them afterwards, I would get the tfms error.
Hi @sariabod, and thanks for the suggestion! I tried adding it in the initial src, before defining the learner, but the same problem still persists with the code below.
I found a way to do predictions on single images. I could iterate through the test folder images one by one as a back-up plan, though I assume this would be much slower than integrating a test set into the data bunch. The single-image prediction code is so simple it makes me feel stupid for not figuring it out earlier. Here are the three lines to predict and show the segmentation result:
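A guess at what those three lines look like, assuming fastai v1's `open_image` and `Learner.predict` API (the frame path and the `learn` variable are placeholders, not from the original post):

```python
def predict_frame(learn, frame_path):
    """Run one dashcam frame through a trained fastai v1 segmentation learner.

    `open_image` loads the file without the training-time transforms, and
    `learn.predict` returns (ImageSegment, class-id tensor, raw scores).
    The path and learner here are hypothetical.
    """
    from fastai.vision import open_image  # fastai v1 import layout, an assumption
    img = open_image(frame_path)
    pred_mask, class_ids, scores = learn.predict(img)
    pred_mask.show(figsize=(8, 8))  # display the predicted segmentation
    return class_ids
```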
Hello @eljas1, I am using 1.0.50.post1; before this I was on 1.0.47 (it was working on that version as well). To get batch predictions on the test set I use:
@sariabod Hey, ever find a way to add segmentation test data after the fact? I’m using a SageMaker endpoint for inference so I can’t recreate the learner each time I want to predict on a new dataset.
Hello @austinmw, I do not fully understand the issue. If you’re not building a learner, how are you doing inference? Can you share some code or elaborate? I have not used SageMaker before and do not understand its limitations.
@sariabod Hi, with SageMaker you have a model function that loads a model (or in this case a learner) when an HTTP endpoint is created, then a predict function to call against it by sending it image batch payloads in real time. So you don’t want to recreate the learner object every time a batch comes in.
@austinmw this sounds like something you would deploy for production. I have not needed to add a test set on a prod deployment before. Do you have access to the learner? Adding a test set would be as simple as: learn.data.add_test(test_images)
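A minimal sketch of that flow for an endpoint, assuming fastai v1 (`learn` would be loaded once at startup, e.g. via `load_learner`, and `image_paths` is whatever hypothetical payload the request carries):

```python
def predict_new_batch(learn, image_paths):
    """Attach a fresh test set to an already-loaded learner and predict.

    Sketch for the 'don't rebuild the learner per request' case: the
    existing data bunch is reused and only the test items are swapped
    via add_test, as suggested above. All names here are hypothetical.
    """
    from fastai.basic_data import DatasetType  # fastai v1 import, an assumption
    learn.data.add_test(image_paths)
    preds, _ = learn.get_preds(ds_type=DatasetType.Test)
    return preds
```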
Sorry, my SageMaker experience is nonexistent at this point.
Hello, I have added a test set to the data and trained the image segmentation model, but when I try showing the results of the predictions, it shows batch predictions on images that are not in the test folder. Also, how can I get an output mask instead of a visualized labeled prediction? I can get predictions by iterating over the images in the test folder one by one, but I want to do them all together.
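On the raw-mask question, and tying back to the original “only cars visible, everything else white” goal: once you have an integer class mask per frame, the recoloring is plain NumPy. A sketch, where `car_class` is whatever index ‘Car’ has in your `codes` list (an assumption, it varies per dataset):

```python
import numpy as np

def cars_only(frame, mask, car_class):
    """White-out every pixel whose predicted class is not `car_class`.

    frame: H x W x 3 uint8 image, mask: H x W array of class ids
    (e.g. the argmax of the model's per-class scores).
    """
    out = frame.copy()
    out[mask != car_class] = 255  # paint non-car pixels white
    return out
```

Run per frame and re-encode the frames into a video with any tool you like.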