How come resnet doesn't have fully connected layers

So I’m creating a CNN from scratch. As I understand it, a fully connected layer means the last layer of the network, where the kernel is the same size as its input.

So Jeremy said VGG has fully connected layers, and that it’s kind of slow and heavy.

But ResNet doesn’t have fully connected layers. What does that mean? How do you write something without fully connected layers? I thought that last step was mandatory.

The problem with a fully connected layer is that it always expects its input to be a vector of a fixed size. A convolution layer (or pooling layer), on the other hand, doesn’t care about the size of its input.

So if the entire network is made up of conv / pooling layers, you can more easily use it on images of different sizes. That’s a big reason why almost no one uses FC layers anymore.
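To make this concrete, here’s a minimal NumPy sketch (the names and shapes are my own, not from this thread) of why global average pooling lets a conv-only network accept any input size: whatever the spatial dimensions are, pooling collapses each channel to a single number, so the output size depends only on the number of channels.

```python
import numpy as np

def global_avg_pool(feature_map):
    """Average each channel over its spatial dimensions.

    feature_map: array of shape (channels, height, width)
    returns: vector of shape (channels,)
    """
    return feature_map.mean(axis=(1, 2))

# Two feature maps with different spatial sizes but the same channel count,
# as you'd get from running the same conv stack on differently sized images.
small = np.random.rand(512, 7, 7)    # e.g. from a 224x224 input
large = np.random.rand(512, 10, 10)  # e.g. from a 320x320 input

# Both collapse to the same fixed-size vector, ready for a classifier.
print(global_avg_pool(small).shape)  # (512,)
print(global_avg_pool(large).shape)  # (512,)
```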


Who says ResNet doesn’t have fully connected layers? It has at least one; see here:

If you use the fastai pretrained version with a custom head, it has two.

VGG has 3 FC layers at the end, but what makes it heavy and slow is that the middle one is huge (4096×4096) and the others have to lead up to and down from it, so there are millions of weights in those final layers. More modern architectures use much smaller FC layers, and often only one or two. (And yes, there are also nets without any FC layers, but ResNet is not one of them.)

(examples from the torchvision implementations used in fastai)
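To see where the “heavy” comes from, you can count the weights in VGG16’s three final FC layers by hand (layer sizes taken from the torchvision implementation: 512·7·7 → 4096 → 4096 → 1000; the ResNet-50 comparison at the end uses its 2048 → 1000 head):

```python
# Parameter counts for VGG16's classifier (weights + biases per FC layer).
fc1 = 512 * 7 * 7 * 4096 + 4096   # flattened conv features -> 4096
fc2 = 4096 * 4096 + 4096          # the huge middle layer
fc3 = 4096 * 1000 + 1000          # 4096 -> ImageNet's 1000 classes

total = fc1 + fc2 + fc3
print(f"{total:,}")  # 123,642,856 -- most of VGG16's ~138M parameters

# For contrast, ResNet-50's single FC layer:
resnet_fc = 2048 * 1000 + 1000
print(f"{resnet_fc:,}")  # 2,049,000 -- roughly 60x smaller
```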


Note that in this case using a 1×1 conv layer is identical to using a fully connected layer. So while this particular implementation of ResNet uses an FC layer here, others may use a 1×1 conv instead.

(The key is the AdaptiveAvgPool2d layer that precedes it, which reduces each feature map to 1×1.)
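Here’s a small NumPy check of that equivalence (toy shapes of my own, not fastai code): once adaptive average pooling has reduced each channel to 1×1, a 1×1 convolution computes exactly the same dot products as a fully connected layer sharing its weights.

```python
import numpy as np

rng = np.random.default_rng(0)
in_ch, out_ch = 8, 4

x = rng.standard_normal((in_ch, 1, 1))    # pooled feature map: C x 1 x 1
w = rng.standard_normal((out_ch, in_ch))  # weights shared by both layers
b = rng.standard_normal(out_ch)

# Fully connected layer on the flattened vector: y = W @ x + b
fc_out = w @ x.reshape(in_ch) + b

# 1x1 convolution: each output channel is a weighted sum over input channels
# at each spatial position -- and here there is only one position (1x1).
conv_out = np.array(
    [(w[k].reshape(in_ch, 1, 1) * x).sum() + b[k] for k in range(out_ch)]
)

print(np.allclose(fc_out, conv_out))  # True
```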