Prediction on differently sized and colored images

I built a multi-class model to classify cell assay images, using ResNet50. For training and testing I had grayscale images, all of the same size. Actually, they were quite big, something like 1300x1200. So I had to split them into smaller 320x300 tiles and used those to train the model. I got great accuracy (>98%).
Now I have a test data set where all the images are colored and of varying sizes.
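The non-overlapping tiling described above can be sketched like this with NumPy (a minimal illustration; the function name and the choice to discard leftover borders are assumptions, not the original code):

```python
import numpy as np

def tile_image(img, tile_h=300, tile_w=320):
    """Split a large grayscale array into non-overlapping tiles,
    discarding any leftover border that doesn't fill a full tile.
    (Hypothetical helper; tile sizes follow the 320x300 in the post.)"""
    h, w = img.shape[:2]
    tiles = []
    for top in range(0, h - tile_h + 1, tile_h):
        for left in range(0, w - tile_w + 1, tile_w):
            tiles.append(img[top:top + tile_h, left:left + tile_w])
    return tiles
```

For a 1300x1200 image this yields a 4x3 grid of 300x320 tiles, with the remaining border pixels discarded.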
My question is

  1. Do I need to change the images to grayscale before prediction?
  2. Do I need to reduce the size of the test images? If yes, how can I do that?

Yes, the model learned to classify grayscale images, so for optimal performance converting the test images to grayscale would be preferred.
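A minimal sketch of the conversion with Pillow and NumPy (the helper name is hypothetical; the 3-channel replication assumes the ResNet50 input layer expects RGB-shaped tensors, which is typical):

```python
import numpy as np
from PIL import Image

def to_model_grayscale(img):
    """Convert a PIL image to single-channel grayscale (luminance),
    then replicate it to 3 channels so a stock ResNet50 input
    layer (which expects 3 channels) still accepts it."""
    g = np.asarray(img.convert("L"))
    return np.stack([g, g, g], axis=-1)
```

If your model was retrained with a true 1-channel input, skip the `np.stack` and feed the single channel directly.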

Inference is less compute-intensive, so it may be possible to use the native images. If not, you can divide the images into patches, and you may have to somehow aggregate the per-patch results to get a prediction for the large test image.
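One simple way to aggregate patch results is to average the per-patch class probabilities and take the argmax (a sketch; majority voting over patch labels would be another reasonable choice):

```python
import numpy as np

def aggregate_patch_predictions(patch_probs):
    """Combine per-patch class probability vectors into one
    image-level prediction by averaging the probabilities,
    then taking the argmax over classes."""
    mean_probs = np.mean(np.asarray(patch_probs), axis=0)
    return int(np.argmax(mean_probs)), mean_probs
```

Averaging probabilities tends to be more robust than voting when some patches are ambiguous, since confident patches contribute more.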

That's the problem. It might be difficult to split the images, as they are not big enough to get exact non-overlapping tiles. Some are too small, and some are only big enough to get one smaller image out of them with a leftover part, unless I use overlapping tiles.
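Overlapping tiles can cover the whole image even when its size isn't a multiple of the tile size. A sketch of choosing tile offsets along one axis (a hypothetical helper, applied once per axis):

```python
import math

def tile_positions(length, tile):
    """Choose tile start offsets along one axis so the full extent
    is covered: tiles are spaced evenly from 0 to length - tile,
    overlapping as needed when length isn't a multiple of tile."""
    if length <= tile:
        return [0]
    n = math.ceil(length / tile)  # minimum number of tiles to cover
    return [round(i * (length - tile) / (n - 1)) for i in range(n)]
```

For example, a 500-pixel axis with 300-pixel tiles gives starts `[0, 200]`: two tiles with 100 pixels of overlap, and nothing left over.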

I ran the prediction on the original-size images (the varying ones) and it was quick, but 99% of the images were classified into one category, which I believe is wrong. So, since the model was trained on 224-size inputs (the size ResNet50 uses), I take it I have to bring the images down to that level. Correct?

It depends… did you resize the original images or just divide them into patches? Ideally, the structures (e.g. cells) in the test images should be a similar size to those in the training images.

For training, all the images were the same size. I divided them and used the divided tiles to train the model.
These new images were produced over time using different machines, so they vary in size.
So far I have just used the images at their original size, without any processing.
Also, I just wanted to confirm: predict() would take care of the normalization and data transformation/augmentation that I did during training, correct?
Or do I need to do that as well before I send the images for prediction?

predict() would take care of normalization and necessary data transforms.
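If you ever run the model outside predict(), the same preprocessing has to be applied manually. A sketch of a typical ImageNet-style normalization commonly paired with ResNet50 (an assumption; use whatever statistics your training pipeline actually used):

```python
import numpy as np

# Standard ImageNet channel statistics (an assumption about the
# training setup, not taken from the thread).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """Scale uint8 pixels to [0, 1], then standardize each channel
    with the ImageNet mean and standard deviation."""
    x = img.astype(np.float32) / 255.0
    return (x - IMAGENET_MEAN) / IMAGENET_STD
```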

Do you know the pixel sizes (i.e. how many microns per pixel) for the images? Are they similar between the train and test set?

Do you mean DPI? The training set is 96 DPI, but this test set varies: 40% of the images are 96 DPI, and the rest range from 141 to 150.
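Given that mismatch, one option is to resample each test image so its effective resolution matches the training DPI, which keeps the cell structures at the same pixel scale the model saw during training. A sketch with Pillow (the helper name is hypothetical; 96 DPI comes from the thread, and the per-image DPI must be known):

```python
from PIL import Image

def rescale_to_train_dpi(img, img_dpi, train_dpi=96):
    """Resample an image so its effective resolution matches the
    training DPI. E.g. a 150-DPI image is shrunk by 96/150 so
    structures appear at the same pixel scale as in training."""
    scale = train_dpi / img_dpi
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.BILINEAR)
```

The 96-DPI images would pass through unchanged, while the 141 to 150 DPI images get shrunk by roughly a third before tiling or prediction.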