I took some time off work this week to go over the contents of the class, especially the code I hadn't had a chance to seriously dive into. I decided to explore the ResNet model, and learned that to do this I first had to save some VGG weights with which to build ResNet.
So I looked back at Lesson 2. I pulled out both the Lesson2.ipynb and the redux.ipynb. Both of them have the basic VGG model.
Starting with the lesson2.ipynb, everything is going great and the model is fine. The predictions are made here and, I believe, should have shape (22500, 1000). I'm trying to return the basic ImageNet-style results, which consist of 1000 classes. But after
trn_features = model.predict(trn_data, batch_size=batch_size)
val_features = model.predict(val_data, batch_size=batch_size)
which gives me
(22500, 512, 7, 7)
Clearly, after the final conv layer, I’ve got 512 channels of 7x7. But I need to have the normal ImageNet output of 1000 classes, so I can send it into this linear model:
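To make the mismatch concrete, here's a quick shape check I did (plain Python, nothing framework-specific; the shapes are just the ones from the predict output above):

```python
# A minimal shape check to illustrate the mismatch: the conv-only output
# (22500, 512, 7, 7) flattens to 512 * 7 * 7 = 25088 features per image,
# not the 1000 ImageNet class probabilities my linear model expects.
conv_output_shape = (22500, 512, 7, 7)

n_images = conv_output_shape[0]
flat_features = 1
for dim in conv_output_shape[1:]:
    flat_features *= dim  # 512 * 7 * 7

print((n_images, flat_features))  # (22500, 25088) -- not (22500, 1000)
```

So if I'm reading this right, whatever model I have here stops at the conv layers, and it's the fully connected layers of VGG that would map those 25088 flattened features down to the 1000 classes.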
lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
Why is the vanilla VGG no longer returning 1000 classes? Has the VGG class changed that much between weeks 2 and 7? I've rewatched many parts of the videos, but most likely I've missed something (probably fundamental).
Thanks for your time.