How is dimensionality dealt with when increasing the size of images during training?

Shouldn’t doubling the height and width of a 3-channel input image increase the number of activations just before the fully connected layer by a factor of 4 (2 × 2, since the channel count doesn’t change)?
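For example, a quick PyTorch sketch (the layer sizes here are just made up for illustration) shows the number of activations after the conv layers quadrupling when the input doubles in height and width:

```python
import torch
import torch.nn as nn

# Toy conv body; the channel counts and strides are arbitrary.
body = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)

for size in (64, 128):  # doubling height and width
    feats = body(torch.randn(1, 3, size, size))
    print(size, feats.shape, feats.numel())
# 64  -> torch.Size([1, 32, 16, 16]) -> 8192 activations
# 128 -> torch.Size([1, 32, 32, 32]) -> 32768 activations (4x as many)
```

So a fully connected layer trained on the smaller images expects 8192 inputs and can’t be applied directly to the 32768 activations from the larger ones.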

How does the network use the weights it has already trained when the input size increases? I can think of a few possibilities (none of which I think is correct):

  1. Changing the stride of the convolutional filters - This will not work for every resize factor.
  2. Resizing the input image - This defeats the purpose of using larger images.
  3. Adding an untrained layer with the appropriate number of inputs and outputs before the fully connected layer - This would render the previous training useless.

It was difficult to articulate the question, so let me know if I need to clarify it. Thanks!


We’ll be covering this soon. If you want to discuss stuff we haven’t covered yet, please remember to use the advanced category: #part1-v3:part1-v3-adv


Okay thank you! I thought it might have been a simple thing I missed during the lecture.

What you are looking for is called Adaptive Pooling. Below is a thread you can read to understand more about it. And as Jeremy said, he will also cover this in class, so you can have a second go at the concept once it’s covered. :wink:
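In case it helps, here is a minimal PyTorch sketch (the layer sizes are made up, not the actual fastai model) of how an adaptive pooling layer keeps the input to the fully connected layer a fixed size regardless of the image size:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # always pools down to (batch, 32, 1, 1)
    nn.Flatten(),
    nn.Linear(32, 10),        # so this layer's weights fit every input size
)

for size in (64, 128, 299):   # any input size works
    print(size, model(torch.randn(1, 3, size, size)).shape)
# all three print torch.Size([1, 10])
```

Because the pooled output shape depends only on the number of channels, the same fully connected weights keep working when you move to larger images.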
