Lesson 3 - Segmentation on new images error

Hi everyone, I'm trying to use the U-Net model trained on CamVid to segment new images of urban scenes. I got a strange error, so to check that it wasn't caused by the images I'm using, I tried learn.predict on one of the images from the CamVid dataset itself. I get the same error in both cases.

My code is:

```python
learn.export(path/'export.pkl')
defaults.device = torch.device('cpu')
img = open_image(path/'0001TP_006690.png')
learn = load_learner(path)
y = learn.predict(img)
```

I get this error:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
----> 1 y=learn.predict(img)
      2 y

~\Anaconda3\envs\fastai\lib\site-packages\fastai\basic_train.py in predict(self, item, **kwargs)
    363     "Return predicted class, label and probabilities for item."
    364     batch = self.data.one_item(item)
--> 365     res = self.pred_batch(batch=batch)
    366     pred,x = grab_idx(res,0),batch[0]
    367     norm = getattr(self.data,'norm',False)

~\Anaconda3\envs\fastai\lib\site-packages\fastai\basic_train.py in pred_batch(self, ds_type, batch, reconstruct)
    343     cb_handler = CallbackHandler(self.callbacks)
    344     xb,yb = cb_handler.on_batch_begin(xb,yb, train=False)
--> 345     preds = loss_batch(self.model.eval(), xb, yb, cb_handler=cb_handler)
    346     res = _loss_func2activ(self.loss_func)(preds[0])
    347     if not reconstruct: return res

~\Anaconda3\envs\fastai\lib\site-packages\fastai\basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     24     if not is_listy(xb): xb = [xb]
     25     if not is_listy(yb): yb = [yb]
---> 26     out = model(*xb)
     27     out = cb_handler.on_loss_begin(out)
     28

~\Anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~\Anaconda3\envs\fastai\lib\site-packages\fastai\layers.py in forward(self, x)
    153         for l in self.layers:
    154             res.orig = x
--> 155             nres = l(res)
    156             # We have to remove res.orig to avoid hanging refs and therefore memory leaks
    157             res.orig = None

~\Anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~\Anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
     90     def forward(self, input):
     91         for module in self._modules.values():
---> 92             input = module(input)
     93         return input
     94

~\Anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~\Anaconda3\envs\fastai\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    318     def forward(self, input):
    319         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 320                         self.padding, self.dilation, self.groups)
    321
    322

RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'weight'
```

Can anyone help me figure out what I’m doing wrong?

The problem is with your image, I think. You will have to create a DataBunch and then pass the image through it.
Or you can use `out = learn.model(img)`.

Hi, thanks for your reply. I tried what you suggested (`y = learn.model(img)`) but I get this:

```
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not Image
```

How do I fix this?
Also, how would I create an unlabelled DataBunch and pass it into this trained model?
Sorry for the basic questions, it's all very new and confusing to me.

I forgot to mention: you have to convert the image to a tensor, make it channel-first, add a batch dimension, and normalize it before passing it to the model.
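The steps above can be sketched in plain PyTorch. This is a minimal sketch, not fastai's own pipeline: the ImageNet mean/std are an assumption (reasonable if the U-Net backbone was ImageNet-pretrained and the DataBunch was normalized with `imagenet_stats`), and the `learn.model` call in the usage comment is hypothetical.

```python
import numpy as np
import torch

# Assumed ImageNet normalization stats (fastai's imagenet_stats use the same values).
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD  = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(arr):
    """Turn an HxWx3 uint8 image array into a normalized 1x3xHxW float tensor."""
    x = torch.from_numpy(arr).float() / 255.0   # convert to float tensor in [0, 1]
    x = x.permute(2, 0, 1)                      # HWC -> CHW (channel-first)
    x = (x - IMAGENET_MEAN) / IMAGENET_STD      # per-channel normalization
    return x.unsqueeze(0)                       # add batch dimension -> 1xCxHxW

# Hypothetical usage with the trained learner:
# arr = np.asarray(PIL.Image.open('0001TP_006690.png').convert('RGB'))
# with torch.no_grad():
#     out = learn.model.eval()(preprocess(arr))  # logits: 1 x n_classes x H x W
# pred = out.argmax(dim=1)                       # per-pixel class indices
```

`argmax` over the class dimension turns the raw logits into a segmentation mask, which is roughly what `learn.predict` does for you internally.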

This PyTorch tutorial can help you: link.