How does a flatten layer output more elements than were passed into it?

I’m referring to the lesson 1 notebook for the pets classifier.

From my understanding, the output of Flatten here should be [512].

What’s going on here?

The outputs of the two pooling layers are actually concatenated: 512 + 512 = 1024.

Okay, that makes sense. But isn’t the input of the AdaptiveMaxPool2d the output of AdaptiveAvgPool2d? So it’s essentially a skip connection like the ones used by ResNets?

No, these layers run in parallel. You can see this in the fastai code here: https://github.com/fastai/fastai/blob/8013797e05f0ae0d771d60ecf7cf524da591503c/fastai/layers.py#L176-L184

Basically this does `torch.cat([self.mp(x), self.ap(x)], 1)`, where `mp` and `ap` are the two pooling layers. Notice how both layers work on the same input, `x`?

Just because they’re shown one after the other in the summary, doesn’t mean that’s how they are connected. :slight_smile:
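To see the parallel structure concretely, here’s a minimal sketch of the same idea in plain PyTorch (the class name `ConcatPool2d` is made up for illustration; the real fastai class is `AdaptiveConcatPool2d` at the link above):

```python
import torch
import torch.nn as nn

class ConcatPool2d(nn.Module):
    """Sketch of the concat-pooling idea: max-pool and avg-pool run in
    parallel on the SAME input, and their outputs are concatenated
    along the channel dimension (dim 1)."""
    def __init__(self, output_size=1):
        super().__init__()
        self.mp = nn.AdaptiveMaxPool2d(output_size)
        self.ap = nn.AdaptiveAvgPool2d(output_size)

    def forward(self, x):
        # Both pooling layers see the same x -- no skip connection needed.
        return torch.cat([self.mp(x), self.ap(x)], 1)

# A fake final conv feature map: batch of 2, 512 channels, 7x7 spatial.
x = torch.randn(2, 512, 7, 7)
pooled = ConcatPool2d()(x)    # shape [2, 1024, 1, 1]: channels doubled
flat = nn.Flatten()(pooled)   # shape [2, 1024], not [2, 512]
print(pooled.shape, flat.shape)
```

That’s why Flatten emits 1024 values per image even though each pooling layer only sees 512 channels.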


Okay, that makes perfect sense, thank you!

Is this a common practice, then? To pool and flatten the final conv layer like this? It’s just the first time I’m seeing it.

I think Jeremy came up with this method. It’s a common thing to do in fastai, anyway. :wink:

Hello,

I am trying something pretty similar. Would you mind briefly explaining what you did to solve the issue?

thanks
iosman

Thanks, my issue has been fixed.