How to handle 4K image size?

There is a Kaggle competition with mixed 4K and 3K images link. In short, we have to identify metastatic cancer.

I have these questions:

  1. How to handle mixed 4K and 3K images?
  2. Resize is an option to make images of equal size. But is it worth it to do for this kind of data?
  3. How to pass the images to conv layers? (With large kernel sizes or something? And how would I use transfer learning in this problem, since most pretrained ImageNet models would not be suitable here?)

I’m not participating in that competition, but based on the description of the competition, as well as the description of the dataset, it seems like examples are patches of 96x96 pixels, and the task is to predict whether there is at least one cancer pixel in the central 32x32 patch.
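To make the label geometry concrete, here is a minimal sketch (assuming NumPy and a dummy 96x96 RGB array, not real competition data) of cropping the central 32x32 region that the label refers to:

```python
import numpy as np

# Hypothetical example patch: 96x96 pixels, 3 channels, as in the
# competition description (real data would come from the dataset files).
patch = np.zeros((96, 96, 3), dtype=np.uint8)

# The label only depends on the central 32x32 region, so crop
# (96 - 32) // 2 = 32 pixels from each border.
margin = (96 - 32) // 2
center = patch[margin:margin + 32, margin:margin + 32]

print(center.shape)  # (32, 32, 3)
```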

In general, the question reminded me of the U-Net paper. The original use case for the U-Net model was image segmentation for medical images: segmentation of cells in microscopy images. They had huge images too. They sliced them into pieces such that each piece fit into GPU memory, ran inference on each piece separately, then stitched all the segmentation masks together into one large mask. When splitting images into pieces, they left some border around each tile to make sure that pixels on the edges of the output segmentation mask had enough context to make a prediction.
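The overlap-tile idea above can be sketched in a few lines. This is a simplified illustration (my own helper names, single-channel image, dimensions assumed to be a multiple of the tile size), not the paper's actual code: each tile is padded with mirrored context, and after prediction the border is cropped off before stitching.

```python
import numpy as np

def tile_with_overlap(image, tile, border):
    """Split a 2D image into tiles of size tile x tile, each carrying an
    extra `border` of context on every side (mirrored at image edges),
    so edge pixels of each tile still see their surroundings.
    Assumes image dimensions are multiples of `tile` (sketch only)."""
    padded = np.pad(image, border, mode="reflect")
    tiles, coords = [], []
    for y in range(0, image.shape[0], tile):
        for x in range(0, image.shape[1], tile):
            # Core region plus context border on all sides.
            tiles.append(padded[y:y + tile + 2 * border,
                                x:x + tile + 2 * border])
            coords.append((y, x))
    return tiles, coords

def stitch(pred_tiles, coords, shape, tile, border):
    """Crop the context border off each predicted tile and place the
    core regions back into a full-size output array."""
    out = np.zeros(shape, dtype=pred_tiles[0].dtype)
    for p, (y, x) in zip(pred_tiles, coords):
        out[y:y + tile, x:x + tile] = p[border:border + tile,
                                        border:border + tile]
    return out

# Demo with an identity "model": stitching the (un-predicted) tiles
# back together reproduces the original image exactly.
image = np.arange(128 * 128, dtype=float).reshape(128, 128)
tiles, coords = tile_with_overlap(image, tile=64, border=16)
restored = stitch(tiles, coords, image.shape, tile=64, border=16)
print(np.array_equal(restored, image))  # True
```

In practice you would replace the identity step with `model(tile)` for each tile; the border size should be at least the network's effective receptive-field radius so edge predictions are well supported.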

It turns out I accidentally mixed up my competitions. I was actually referring to the Human Protein Atlas Image Classification challenge.