Lesson 7 superres: how is the change in image size handled?

After training on an initial DataBunch, the notebook doubles the image resolution:

data = get_data(12, size*2)
learner.data = data

The learner is then able to train and infer at the new resolution. I'm confused about how the change in resolution was handled. Is the underlying model changed at all? I've heard adaptive pooling mentioned before; does it have something to do with that?
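To make the question concrete, here's a toy check in plain PyTorch (nothing from the lesson, just my own illustration): a stack of convolutions has no layer tied to a fixed spatial size, so the same weights run at any resolution.

import torch
import torch.nn as nn

# A conv-only stack: no Linear layer, so nothing depends on a fixed spatial size
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)

small = torch.randn(1, 3, 128, 128)  # original resolution
large = torch.randn(1, 3, 256, 256)  # doubled resolution

# The same weights accept both sizes; only the output spatial size changes
print(model(small).shape)  # torch.Size([1, 3, 128, 128])
print(model(large).shape)  # torch.Size([1, 3, 256, 256])

Is that all that's going on with the U-Net here, or does adaptive pooling play a part as well?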

I tried running inference directly after pointing learner.data at the higher-resolution data, and it gives good results straight away, without any extra training. How does the model know how to infer well at the higher resolution without seeing examples? My guess is that it somehow splits the image into smaller patches that the existing model can handle and then reassembles them?
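To spell out what I'm imagining, something roughly like this (infer_in_patches is a made-up helper, definitely not something I found in the fastai source):

import torch
import torch.nn as nn

def infer_in_patches(model, img, patch=128):
    # Tile the image, run each tile through the model, stitch the outputs back.
    # Assumes the model's output has the same spatial size as its input.
    _, _, h, w = img.shape
    out = torch.zeros_like(img)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = img[:, :, y:y + patch, x:x + patch]
            out[:, :, y:y + patch, x:x + patch] = model(tile)
    return out

toy_model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for the superres model
big = torch.randn(1, 3, 256, 256)
print(infer_in_patches(toy_model, big).shape)  # torch.Size([1, 3, 256, 256])

Is fastai doing anything like that under the hood, or does the whole image just go through the network in one pass?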

Edit: also, is this happening in the model itself or in the learner?
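For what it's worth, here's a check I could run (using the lesson's own learner, get_data, and size names) to see whether the data swap touches the weights at all:

import torch

# Snapshot the weights, swap in the higher-res data, then compare
before = [p.clone() for p in learner.model.parameters()]
learner.data = get_data(12, size * 2)
after = list(learner.model.parameters())
print(all(torch.equal(b, a) for b, a in zip(before, after)))  # True would mean the model is untouched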