I am trying to use
unet_learner for semantic segmentation.
I have created my learner with
learn = unet_learner(data, resnet34, n_out=3,
                     metrics=[IoU_Back, IoU_Glo, IoU_Tub, Dice],
                     self_attention=True, act_cls=Mish, opt_func=opt)
Both the CrossEntropyLoss's weights and inputs are
torch.cuda.LongTensor (I double-checked it).
I would like to know why when I run the above code I get this error:
RuntimeError: Input type (torch.cuda.LongTensor) and weight type (torch.cuda.FloatTensor) should be the same
It looks like the CrossEntropyLoss weights have been cast back to Float.
By “weights,” the error is referring to the parameters of your model, which are floats (as they should be), not the weights assigned to each class in
nn.CrossEntropyLoss. However, your inputs are longs (i.e., 64-bit integers) and therefore cannot be processed by the network. To fix this issue, please ensure the input images are converted to floats somewhere in the data pipeline before being passed to the U-Net. If you would like, you can post the code behind your data block, and I would be more than happy to assist you.
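To illustrate the point above, here is a minimal sketch in plain PyTorch (not the poster's actual pipeline; the layer and tensor shapes are made up): a convolution's parameters are float32, so an integer input batch triggers exactly this mismatch until it is cast to float.

```python
import torch
import torch.nn as nn

# A conv layer's parameters ("weight" in the error message) are float32.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Integer pixel values come out as int64 (LongTensor).
imgs = torch.randint(0, 256, (2, 3, 64, 64))

# conv(imgs)  # would raise: Input type (...LongTensor) and weight type
#             # (...FloatTensor) should be the same

# Casting the inputs to float fixes it.
out = conv(imgs.float())
print(out.dtype)  # torch.float32
```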
Thanks @BobMcDear!!
I have converted the images, masks, and CrossEntropy weights to Float.
Now I get this error:
ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss2d_forward
I have solved it by casting the masks back to Long.
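The second error is the mirror image of the first: for segmentation, nn.CrossEntropyLoss expects float logits of shape (N, C, H, W) but a Long target of shape (N, H, W), so only the masks need the cast back. A small sketch with made-up shapes and class weights:

```python
import torch
import torch.nn as nn

# Float class weights are fine; it is the *target* that must be Long.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 2.0]))

logits = torch.randn(2, 3, 4, 4)                # float predictions, 3 classes
masks = torch.randint(0, 3, (2, 4, 4)).float()  # masks accidentally cast to float

# criterion(logits, masks)  # would raise: Expected object of scalar type Long
#                           # but got scalar type Float for argument #2 'target'

loss = criterion(logits, masks.long())          # cast targets back to Long
print(loss.dtype)  # torch.float32
```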