My guess is that this architecture should work well for the iceberg competition. The pretrained weights may be a little helpful, but I’m not sure either way…
(You could easily change the resnext definition to handle 2-channel input directly, BTW. Would be a good exercise.)
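For anyone attempting that exercise, here's a minimal, hypothetical PyTorch sketch of the idea (the class and names are made up for illustration, not the actual fastai resnext code): the only real change is giving the stem conv `in_channels=2` so the two iceberg bands can be fed in directly.

```python
import torch
import torch.nn as nn

class TwoChannelStem(nn.Module):
    """Illustrative stem for 2-channel input, in place of the usual 3-channel (RGB) one."""
    def __init__(self, out_channels=64):
        super().__init__()
        # in_channels=2 instead of 3 -- this is the whole trick
        self.conv = nn.Conv2d(2, out_channels, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

stem = TwoChannelStem()
x = torch.randn(4, 2, 75, 75)  # batch of 2-channel 75x75 images
out = stem(x)
print(out.shape)  # torch.Size([4, 64, 75, 75])
```

Note that with a 2-channel stem you can't reuse the pretrained first-layer weights as-is, which fits Jeremy's point that the pretrained weights may only be a little helpful here anyway.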
Hey @kcturgutlu got something much better now! Try `from fastai.models.cifar10.senet import SENet18` and grab the weights from http://files.fast.ai/models/sen_32x32_8.h5 . This is the 'squeeze and excitation network' that won ImageNet this year, trained on CIFAR-10 to 94.1% accuracy
Hi @jeremy, I know that we can't have floating-point padding, and you also recommended using the original image size as sz. In this case we have 75x75 images and pad = sz//8. Should I change the padding or sz? Which makes more sense, and why? Thank you so much again for the amazing update to the library
Edit: It works with data = get_data(75, 32), but I couldn't understand how pad = 75//8 is allowed.
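A quick check of the arithmetic shows why that pad value is fine: `//` is Python's floor division, so `75//8` is already an integer, not a float.

```python
pad = 75 // 8     # floor division: drops the remainder, result is an int
print(pad)        # 9
print(type(pad))  # <class 'int'>
print(75 / 8)     # 9.375 -- plain division would give a float
```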
@jeremy this model uses 10 different categories.
If I'm trying to do the dog breed or the satellite images competition, or any other one, the number of categories will change.
What is the best way to change the number of categories here to fit the new dataset?
I remember that in Keras you pop the last layer and add a new fully connected layer with the number of categories needed.
fastai and PyTorch should have something similar.
Can you shed some light here?
@gerardo the number of classes is an argument of the resnext29_… methods (by default it's set to 10). See, for example, the definition of resnext29_16_64(num_classes=10) in fastai/models/cifar10/resnext.py.
fastai will do the whole popping of the last layer and slapping on a layer with the correct dimensionality (aligned with the number of classes in your dataset) for you, automagically.
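For anyone curious what that pop-and-replace looks like under the hood, here's a minimal PyTorch sketch. The toy model and sizes are invented for illustration; the point is that only the final linear layer is swapped, while the body's weights are kept.

```python
import torch
import torch.nn as nn

# A toy "pretrained" model whose head predicts 10 classes (like CIFAR-10).
body = nn.Sequential(
    nn.Flatten(),
    nn.Linear(75 * 75, 64),
    nn.ReLU(),
)
model = nn.Sequential(body, nn.Linear(64, 10))

# "Pop" the old 10-class head and attach one sized for the new dataset,
# e.g. 120 dog breeds. The body is untouched; only the head is re-initialized.
num_classes = 120
model[-1] = nn.Linear(64, num_classes)

x = torch.randn(4, 1, 75, 75)
print(model(x).shape)  # torch.Size([4, 120])
```

This is the same idea as Keras's pop-the-last-layer pattern; fastai just automates the step of sizing that new head from your dataset's label count.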