I am learning how to apply fastai v1 with a different pretrained model and dataset than the ones shown in Lesson 1: senet50 trained on VGGFace2, and face-emotion images from an old Kaggle competition.
Here’s a code fragment that appears to work, but I’m not sure it is completely right, and one point confuses me:
```python
N_IDENTITY = 8631  # the number of identities in VGGFace2 for which ResNet and SENet are trained
semodel = SENet.senet50(num_classes=N_IDENTITY, include_top=True)
utils.load_state_dict(semodel, weightspath)

def num_features_model(m:nn.Module)->int:
    "Return the number of output features for a `model`."
    for l in reversed(flatten_model(m)):
        if hasattr(l, 'num_features'): return l.num_features

body = create_body(semodel, -1)          # pretrained model minus its last layer
nf = num_features_model(body) * 2        # *2 because the head concat-pools (avg + max)
head = create_head(nf, data.c, None, ps=.5)
model = nn.Sequential(body, head)
learn = ClassificationLearner(data, model, metrics=error_rate)
learn.split((model,))
apply_init(head, nn.init.kaiming_normal_)  # init only the new head; initializing the whole
                                           # model would wipe the pretrained body weights
learn.freeze()
```
The last part is just the body of create_cnn written out, because I could not easily figure out how to adapt the various tables in learner.py to my needs, and I did not want to get stuck on that point.
The Learner trains and recognizes well. However, I am confused about where the last layer is removed and the new head is attached to the pretrained model.
The original semodel is an nn.Module with `__init__` and `forward` methods, and its `forward` references the last layer. So when that layer is removed, why do these methods still work? And if they are no longer used, what replaces them in the derived model?
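For anyone landing on the same question: my understanding is that create_body does not edit semodel’s `forward` at all. It takes the model’s *children* (the submodules registered in `__init__`) and rebuilds them into an nn.Sequential, whose own `forward` just calls each child in order, so the original `forward` is simply never invoked. A minimal sketch of that mechanism in plain Python (all class names here are hypothetical stand-ins, not fastai or PyTorch APIs):

```python
class Layer:
    """Stand-in for a single nn.Module layer; records its name when called."""
    def __init__(self, name): self.name = name
    def __call__(self, x): return x + [self.name]

class ToyModel:
    """Stand-in for the pretrained senet50: its forward() hard-codes the last layer."""
    def __init__(self):
        self.conv, self.pool, self.fc = Layer('conv'), Layer('pool'), Layer('fc')
    def children(self):
        # like nn.Module.children(): the submodules registered in __init__
        return [self.conv, self.pool, self.fc]
    def forward(self, x):
        return self.fc(self.pool(self.conv(x)))  # references the last layer

class Sequential:
    """Stand-in for nn.Sequential: its own forward calls each child in order."""
    def __init__(self, *layers): self.layers = layers
    def __call__(self, x):
        for l in self.layers: x = l(x)
        return x

m = ToyModel()
body = Sequential(*m.children()[:-1])  # like create_body(semodel, -1): drop the last child
head = Layer('new_head')               # like create_head(...)
model = Sequential(body, head)
print(model([]))  # ['conv', 'pool', 'new_head'] -- ToyModel.forward is never called
```

The derived model’s weights are the same tensor objects as the original’s (the children are reused, not copied), but the call path now goes through Sequential’s forward instead of ToyModel’s.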
I am just starting to delve into the fastai and PyTorch code, so the answer may be “obvious”.