Recently it has been emerging that deep CNNs, including those trained on ImageNet, are biased towards texture and pay little attention to shape, if any. This is one of the main ways computer vision at its current stage differs from human vision.
Can anyone speculate on ways to make deep CNNs more shape-aware by way of augmentations during training?
There’s this paper:
They provide special versions of ImageNet and pretrained models to help mitigate this problem.
Where can we download the shape-biased pretrained models from?
OK, got them from their repo. Is there a way to load these models in fastai?
Any PyTorch model can be passed into the `Learner` object. Or, if you want discriminative learning rates and frozen training for transfer learning, you can pass it into `cnn_learner`, along with the splits and cuts.
There are plenty of examples of using your own model in fastai, but looking at the source code to see how fastai loads models like ResNets would be a good place to start. They are under
I need to do further discriminative training, so `Learner` won't be suitable. I wanted to create a custom head and replace the existing one with it. However, the trained model seems to be saved without cuts, so `list(children(model))` produces the entire model. Is there a way to replace the final layer by addressing the layers individually?
Added later: the model is a `torch.nn.parallel.data_parallel.DataParallel`.