I am working on an image segmentation task. While training the model, I get the following error:
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /pytorch/aten/src/THNN/generic/ClassNLLCriterion.c:92
I read some threads here, and in an attempt to narrow down the problem, I made a prediction and noticed something odd. The following is my DataBunch:
ImageDataBunch;
Train: LabelList (116 items)
x: SegmentationItemList
Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128)
y: SegmentationLabelList
ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128)
Path: /content/drive/My Drive/AutoBound Data/cleaned/originalImages;
Valid: LabelList (29 items)
x: SegmentationItemList
Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128),Image (3, 128, 128)
y: SegmentationLabelList
ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128),ImageSegment (1, 128, 128)
Path: /content/drive/My Drive/AutoBound Data/cleaned/originalImages;
Test: None
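Since the assertion complains about a target outside [0, n_classes), I also checked what label values actually appear in a mask. Below is the check I ran, with a dummy mask standing in for one of my real ImageSegments (the 255 value is a hypothetical example of an out-of-range label, not necessarily what my data contains):

```python
import numpy as np

# The model predicts 2 classes, so every pixel in a target mask
# must hold a label in [0, n_classes), i.e. {0, 1} here.
n_classes = 2

# Dummy stand-in for one (1, 128, 128) mask; real masks come from
# the SegmentationLabelList items.
mask = np.zeros((1, 128, 128), dtype=np.int64)
mask[0, 50:70, 50:70] = 255  # e.g. a mask saved with 0/255 pixel values

labels = np.unique(mask)
print("labels found in mask:", labels)  # -> [  0 255]

bad = labels[(labels < 0) | (labels >= n_classes)]
if bad.size:
    print("out-of-range labels:", bad)  # these would trip the assertion
    # one possible remapping, 255 -> 1, before training:
    mask = (mask > 0).astype(np.int64)
    print("labels after remapping:", np.unique(mask))  # -> [0 1]
```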
When I make a prediction on a batch (9 items per batch), I get an output of the following size:
torch.Size([9, 2, 128, 128])
The segmentation masks in my DataBunch have dimensions [1, 128, 128], but the model's output has the dimensions above. I believe this mismatch is causing the error. How do I fix this?
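To double-check how the two shapes relate, I sketched them in numpy (shapes taken from above; the values are dummies, and I'm assuming the second axis of the output is the per-class score axis):

```python
import numpy as np

# Output shape from the question: batch of 9, 2 class scores per pixel.
preds = np.random.randn(9, 2, 128, 128)            # N x C x H x W
# Integer-label targets carry no class axis, just a label per pixel.
target = np.random.randint(0, 2, (9, 128, 128))    # N x H x W

# Collapsing the class axis with argmax yields a mask shaped like the target:
pred_mask = preds.argmax(axis=1)
print(pred_mask.shape)  # -> (9, 128, 128)
print(target.shape)     # -> (9, 128, 128)
```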