As suggested in chapter 7 of the fastai book, I tried to train an xresnet18() from scratch on a classification task with two classes. I built my dls and passed them, together with the model, to Learner(). Training went fine and the results were good, but when I inspected learn.model to check the architecture, I found that my last layer produces 1000 outputs (the ImageNet number of classes)! That seemed odd, and I wondered whether the implementation of Learner had changed so that it no longer adapts the model to the dls we provide, because the book makes it look fine to use Learner with a model that has a different number of outputs.
This could be problematic because no error occurs during training, so you probably won’t notice it. I only found it because I wanted to use Grad-CAM and checked the number of outputs.
Right, Learner does not, as it doesn’t apply a custom head. When you pass in the model, you must prepare the model in its entirety. You need to pass a c_out (or something along those lines) to xresnet.
Yeah, you are completely right. I just followed what the book had done, and since it didn’t define any number of outputs, I assumed that was okay. BTW, thank you for making it clear.
While Learner itself doesn’t modify the model, there is a factory method cnn_learner(...) that does: you pass the number of outputs there and it will attach the head for you.
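What such a factory does under the hood can be sketched in plain PyTorch; this is a conceptual illustration, not fastai’s actual implementation, and `attach_head` plus the toy layer sizes are made up for the example:

```python
import torch
import torch.nn as nn

def attach_head(body: nn.Module, n_features: int, n_classes: int) -> nn.Module:
    """Keep the feature-extracting body and bolt on a classification
    head sized to the task's number of classes (hypothetical helper)."""
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(n_features, n_classes))
    return nn.Sequential(body, head)

# Toy body: one conv layer producing 8 feature maps
body = nn.Conv2d(3, 8, kernel_size=3, padding=1)
model = attach_head(body, n_features=8, n_classes=2)

# A batch of 4 RGB images comes out as 4 rows of 2 class activations
out = model(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 2])
```

The key point is that the head is built from the number of classes in your data, which is exactly what Learner alone never does for you.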