How does cropping help with the large-size model?

I don't understand.

In the notebook, Jeremy uses the crop generator to train his model,
then he creates a large-size model,
and it seems the large-size model continues from where the smaller one stopped.

It doesn't show how this works, or what the benefit of training a crop model is,
if the weights will be different from the full-size model.

Sorry, I am not entirely sure what you are referring to - I don't think I have reached that far in the lectures.

I think you might be talking about fully convolutional models? The ones that accept differently sized images?
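Something like this is what I have in mind - a minimal Keras sketch of a fully convolutional model, just to illustrate the idea (these layers and the class count are made up, not the notebook's actual architecture). Because there are no Dense/Flatten layers, the same weights can run on any input size, e.g. small crops during training and full frames at prediction time:

```python
# Minimal sketch of a fully convolutional model (illustrative only).
from tensorflow.keras import layers, models

def build_fcn(num_classes=12):  # num_classes is just an example value
    # (None, None) spatial dims: the same weights accept any image size
    inp = layers.Input(shape=(None, None, 3))
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
    x = layers.UpSampling2D()(x)
    # 1x1 conv gives a per-pixel class prediction at whatever size comes in
    out = layers.Conv2D(num_classes, 1, activation='softmax')(x)
    return models.Model(inp, out)

model = build_fcn()
model.summary()  # note the (None, None, ...) shapes throughout
```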

By bigger model do you mean one that takes bigger images? It could be that the model is the same, but we run it on smaller images (a smaller version of an image contains less information). We could do this as a means of controlling overfitting.

Could you link to the relevant part of the notebook? Not sure if I will be able to help, as I likely have not reached that part yet, but maybe together we can figure this out :slight_smile:

Are you talking about the Tiramisu model? Crops there serve as a regularization/augmentation technique. They let you generate multiple training images from the same image.
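Roughly, the crop generator does something like this (a simplified sketch, not the notebook's actual code - the array shapes and parameter names are my assumptions). Each batch it cuts random windows out of the full-size images, so over the epochs the model sees many different pieces of the same data:

```python
import numpy as np

def random_crop_batch(images, labels, crop_size=224, batch_size=4):
    """Yield batches of random crops from full-size images and label maps.

    Assumes images has shape (n, h, w, channels) and labels has matching
    spatial dims, with h and w both >= crop_size. Sketch only.
    """
    n, h, w = images.shape[:3]
    while True:
        xs, ys = [], []
        for _ in range(batch_size):
            i = np.random.randint(n)                       # pick an image
            top = np.random.randint(h - crop_size + 1)     # random window
            left = np.random.randint(w - crop_size + 1)
            xs.append(images[i, top:top + crop_size, left:left + crop_size])
            ys.append(labels[i, top:top + crop_size, left:left + crop_size])
        yield np.stack(xs), np.stack(ys)
```

Because the model is fully convolutional, the weights learned on those crops are exactly the same weights you can later apply to the full-size frames.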

But that produces a model trained on smaller images, and he uses different models for training and for showing results.
The cropping makes the image smaller - how can the model predict a full-size image after that?