By “weights,” the error is referring to the parameters of your model, which are floats (as they should be), not the weights assigned to each class in nn.CrossEntropyLoss. However, your inputs are longs (i.e., 64-bit integers) and therefore cannot be processed by the network. To fix this issue, please ensure the input images are converted to floats somewhere in the data pipeline before they are passed to the U-Net. If you would like, you can post the code behind your data block, and I would be more than happy to assist you.
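As a minimal sketch (the shapes and variable names here are just illustrative, not taken from your pipeline), the conversion is a single `.float()` call on the image batch before it reaches the model:

```python
import torch

# Hypothetical batch of images loaded as integers (e.g. raw 0-255 pixel values),
# which defaults to torch.int64 ("long") and cannot be fed to a float model.
images = torch.randint(0, 256, (4, 3, 128, 128))
print(images.dtype)  # torch.int64

# Convert to float32 to match the model's parameters (its "weights").
images = images.float()
print(images.dtype)  # torch.float32
```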
Thanks @BobMcDear !!
I have converted the images, masks, and CrossEntropyLoss weights to torch.cuda.FloatTensor.
Now I get this error:
```
ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss2d_forward
```
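For context, the dtype contract in nn.CrossEntropyLoss is asymmetric: the predictions and class weights must be floats, but the target mask must stay a long tensor of class indices. A minimal sketch with made-up shapes (2 images, 5 classes, 8×8 masks):

```python
import torch
import torch.nn as nn

# Class weights are floats, as are the model's output logits.
criterion = nn.CrossEntropyLoss(weight=torch.ones(5))

logits = torch.randn(2, 5, 8, 8)        # float predictions: (N, C, H, W)
masks = torch.randint(0, 5, (2, 8, 8))  # long class-index targets: (N, H, W)

loss = criterion(logits, masks)          # works: target is long
# criterion(logits, masks.float())       # raises the RuntimeError above
```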