Tensor size mismatch (Airbus Ship Detection/Segmentation)

I’m trying to apply the segmentation section of Lesson 3 to the Airbus Ship Detection challenge. I’m working with a subset of the data: only images that contain ships, and only 1000 of them, to keep iteration fast.

I prepared my data by converting the RLE-encoded masks from the provided CSV into individual PNG files, so I end up with 1000 “real” images and 1000 corresponding mask images. At this point, I confirmed that the masks match the actual images and that each image and mask is 768x768.

My data preparation notebook is available here, showing that images and masks match, and all image sizes are equal.
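
For reference, the conversion step looks roughly like this (simplified from my notebook; the CSV filename, column names, and output folder are placeholders, and the per-ship RLEs for an image are OR-ed into a single mask):

```python
import numpy as np
import pandas as pd
from PIL import Image

def rle_decode(rle, shape=(768, 768)):
    # Kaggle-style RLE: space-separated (start, length) pairs, 1-indexed,
    # counting pixels in column-major (Fortran) order.
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    if isinstance(rle, str) and rle.strip():
        nums = [int(x) for x in rle.split()]
        for start, length in zip(nums[0::2], nums[1::2]):
            mask[start - 1:start - 1 + length] = 1
    return mask.reshape(shape, order='F')

df = pd.read_csv('train_ship_segmentations.csv')    # placeholder path
for image_id, group in df.dropna(subset=['EncodedPixels']).groupby('ImageId'):
    mask = np.zeros((768, 768), dtype=np.uint8)
    for rle in group['EncodedPixels']:
        mask |= rle_decode(rle)                      # combine all ships of this image
    stem = image_id.rsplit('.', 1)[0]
    Image.fromarray(mask).save(f'masks/{stem}.png')  # pixel values 0 (background) / 1 (ship)
```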

I then tried to apply the very basic learning steps (roughly sketched in the code below):

  • Load data from files with data block API
  • Create learner
  • Find LR
  • Call fit_one_cycle
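
In code, that boils down to roughly the following (fastai v1; the paths, split, batch size and transforms are placeholders rather than the exact values from my notebook):

```python
from fastai.vision import *

path = Path('data/airbus')                         # placeholder paths
get_mask = lambda x: path/'masks'/f'{x.stem}.png'  # image file -> mask file
codes = ['background', 'ship']

data = (SegmentationItemList.from_folder(path/'images')
        .split_by_rand_pct(0.2)
        .label_from_func(get_mask, classes=codes)
        .transform(get_transforms(), tfm_y=True)   # transform masks together with images
        .databunch(bs=4)
        .normalize(imagenet_stats))

learn = unet_learner(data, models.resnet34, metrics=[accuracy])
learn.lr_find()
learn.fit_one_cycle(5)                             # <- this is where it fails
```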

At this point, fit_one_cycle is failing with:

RuntimeError: The size of tensor a (768) must match the size of tensor b (147456) at non-singleton dimension 1

My learning notebook is available here.

I searched around for this issue, and as the error indicates, it’s usually associated with tensor sizes not matching at some point. Since I confirmed that all source images are the same size, I’m a bit at a loss as to where to go from here.

Any pointers on where I should investigate would be greatly appreciated.

Your notebook creates the learner with metrics=[accuracy]; for segmentation this should be metrics=[dice].
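
I.e., keep everything else the same and only change the metric when you create the learner; something like this, assuming a fastai v1 unet_learner as in Lesson 3 (dice is the built-in metric for binary segmentation):

```python
from fastai.vision import *  # provides unet_learner and the dice metric

learn = unet_learner(data, models.resnet34, metrics=[dice])
```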