Looking good @VishnuSubramanian. A few suggestions:
- The final layer of resnet is not fully connected, but is a global average pooling, so I suggest you change that comment. It’s fine to replace the global average pooling layer with a fully connected layer, as you have here. You may, however, find you get better results in transfer learning by fine-tuning all the fully connected layers of VGG. Perhaps you could try both and see which works better?
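  For readers less familiar with the distinction: global average pooling just averages each feature map over its spatial dimensions, and a fully connected head on top is a matrix multiply. A minimal numpy sketch (the shapes and the 10-class head are illustrative, not from your notebook):

  ```python
  import numpy as np

  # Illustrative shapes: batch of 2, 512 channels, 7x7 spatial feature maps
  feats = np.random.rand(2, 512, 7, 7)

  # Global average pooling: average each channel over its spatial dims
  gap = feats.mean(axis=(2, 3))        # shape (2, 512)

  # A fully connected "head" on top is just a matrix multiply plus bias
  W = np.random.rand(512, 10) * 0.01   # hypothetical 10-class classifier
  b = np.zeros(10)
  logits = gap @ W + b                 # shape (2, 10)
  ```

  Either way the conv features feed into a classifier; the difference is just how many trainable layers sit on top.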
- I think people will be most interested in seeing how well your approach works. Perhaps you could show how long it takes to train and how accurate it is, and maybe include some example images with their predicted labels.
- In the course we save a lot of training time by pre-computing the penultimate layer’s features, and then just training the fully connected layer(s). It might be worth showing how to do that too, and showing the impact on training time. Maybe that’s for a 2nd blog post!
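  The idea, in case it helps to sketch it: since the conv layers are frozen, their output for each image never changes, so you can run them once, cache the result, and train the head on the cached features for as many epochs as you like. A toy numpy illustration (the random "backbone" and data are stand-ins, not your model):

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # Toy data: 200 "images" as flat vectors
  X = rng.standard_normal((200, 100))

  # Stand-in for the frozen conv backbone (the expensive part in real life)
  W_backbone = rng.standard_normal((100, 32)) * 0.1
  def backbone(x):
      return np.tanh(x @ W_backbone)

  # Pre-compute features ONCE, instead of re-running the backbone every epoch
  feats = backbone(X)

  # Toy labels that are learnable from the features, for demonstration
  y = (feats[:, 0] > 0).astype(float)

  # Train only a small logistic-regression head on the cached features
  w = np.zeros(32)
  b = 0.0
  for _ in range(500):
      p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoid
      grad = p - y
      w -= 0.1 * feats.T @ grad / len(y)
      b -= 0.1 * grad.mean()

  acc = ((p > 0.5) == y).mean()
  ```

  Every epoch then costs only the head's forward/backward pass, which is why the speedup is so large in practice.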