In lesson 1, fastai adapts resnet34 from 1000 output features down to 2, evaluated with a (log) softmax.
I understand the principle, but a printout of the model shows more changes than I expected.
The following snippet loads resnet34 both directly from pytorch (torchvision) and via fastai.
——————————————————————————————
from fastai.conv_learner import *   # fastai v0.7 imports used in lesson 1
import torchvision

# PATH and sz are defined earlier in the lesson 1 notebook
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)

# The same architecture loaded directly from torchvision, for comparison
resnet34Model = torchvision.models.resnet34(pretrained=True)
print("The resnet34Model loaded directly from pytorch:")
print(resnet34Model)

print("\n\nFastai's learn.get_layer_groups():")
print(learn.get_layer_groups())

# It is this printout that compares to print(resnet34Model)
print("\n\nFastai's learn.models.get_layer_groups(False):")
print(learn.models.get_layer_groups(False))
——————————————————————————————
The models are identical except for the layers close to the output, as follows.
Last layers as loaded by pytorch:
(6): Linear(in_features=512, out_features=2, bias=True)
(7): LogSoftmax()
Last layers as loaded by fastai ("learn = ConvLearner.pretrained(arch, data, precompute=True)"):
AdaptiveConcatPool2d(
(ap): AdaptiveAvgPool2d(output_size=(1, 1))
(mp): AdaptiveMaxPool2d(output_size=(1, 1))
), Flatten(
)], Sequential(
(0): BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True)
(1): Dropout(p=0.25)
(2): Linear(in_features=1024, out_features=512, bias=True)
(3): ReLU()
(4): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True)
(5): Dropout(p=0.5)
(6): Linear(in_features=512, out_features=2, bias=True)
(7): LogSoftmax()
)]
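To make sure I am reading the printout right, the extra layers above can be sketched in plain PyTorch like this (my own reconstruction from the printout, not fastai's actual code; the 7x7 input size is just an assumption for a resnet34 body fed 224x224 images):

```python
import torch
import torch.nn as nn

class AdaptiveConcatPool2d(nn.Module):
    """Concatenate adaptive average- and max-pooling, as in the printout.

    Pooling 512 channels down to 1x1 gives 512 avg + 512 max = 1024
    features, which is why the head's BatchNorm1d/Linear expect 1024 inputs.
    """
    def __init__(self, output_size=1):
        super().__init__()
        self.ap = nn.AdaptiveAvgPool2d(output_size)
        self.mp = nn.AdaptiveMaxPool2d(output_size)

    def forward(self, x):
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

# The replacement head, with layer sizes copied from the printout above
head = nn.Sequential(
    AdaptiveConcatPool2d(1),
    nn.Flatten(),        # 1024 x 1 x 1 -> 1024
    nn.BatchNorm1d(1024),
    nn.Dropout(p=0.25),
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.BatchNorm1d(512),
    nn.Dropout(p=0.5),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1),
)

# Feed a fake batch of resnet34 body activations: 4 images, 512 channels, 7x7
x = torch.randn(4, 512, 7, 7)
print(head(x).shape)  # torch.Size([4, 2])
```

Running this, the head does map 512-channel feature maps to 2 log-probabilities per image, so the pieces fit together; what I am asking about is the reasoning behind each piece.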
The Linear(in_features=512, out_features=2, bias=True) and the LogSoftmax() are expected,
but I do not understand why the printout shows so many other changes. What is the purpose of each of the added layers (AdaptiveConcatPool2d, the Flatten, the BatchNorm1d/Dropout pairs, and the extra Linear/ReLU)?
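One part I did verify for myself: the LogSoftmax on the end is not an extra change in substance, it just moves the softmax into the model. Pairing LogSoftmax with NLLLoss gives the same loss as feeding raw logits to CrossEntropyLoss (a quick sanity check with fake data):

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 2)            # fake outputs of the final Linear layer
targets = torch.randint(0, 2, (8,))   # fake class labels for 8 images

# fastai-style: LogSoftmax inside the model, NLLLoss outside
loss_a = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)

# torchvision-style: raw logits, CrossEntropyLoss does the log-softmax itself
loss_b = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(loss_a, loss_b))  # True
```

So my question is really about the other added layers, not the LogSoftmax itself.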