VGG16 extract 4096 feature vector


I am trying to extract the 4096-dimensional dense-layer activations from the VGG16 model. My current understanding is that I would get a 4096-character-long string, but I can’t really find any info on this.

What I tried is:

  1. Load the VGG16 weights into the model with include_top=False.
  2. Load the VGG16 weights into the model with include_top=False and call model.layers.pop() twice.
  3. Load the VGG16 weights into the model with include_top=True and call model.layers.pop() twice.

Which gave these results:


The 3rd result looked the best to me because it looks like one long string, but it still has spaces in it, which looks a lot like the output you would get if you just used the VGG16 output as is, which I have here:

Can anyone help me get the correct 4096 features?

Note: I used the same single image to get all of these features.

Thank you !

If you use VGG16 with an input shape of (224, 224, 3) and include_top=False, then the last layer has shape 7 x 7 x 512. That means your call to model.predict(...) returns an array of that shape. You can call flatten() on this array to turn it into a vector of 7x7x512 = 25088 numbers. That doesn’t sound like it is what you want.
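To make that concrete, here’s a minimal sketch (not the exact code from the thread; weights=None is used so it runs without downloading the ImageNet weights, but the shapes are the same as with weights='imagenet'):

```python
import numpy as np
from tensorflow.keras.applications import VGG16

# include_top=False drops the dense layers, so the model ends at the
# last pooling layer, which outputs 7 x 7 x 512 for a 224x224x3 input.
model = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

img = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in image
features = model.predict(img, verbose=0)  # shape (1, 7, 7, 512)
vector = features.flatten()               # 7 * 7 * 512 = 25088 numbers
print(vector.shape)                       # (25088,)
```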

If you use include_top=True and call pop twice, then the last layer in the model is fc1, and model.predict() returns a vector with 4096 numbers in it. But these are 32-bit floating point numbers, not characters. If you print out this vector, then you’ll see 4096 numbers with spaces between them.
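One way to get those fc1 activations without popping layers (a sketch, not the thread’s exact code) is to build a second model whose output is the fc1 layer. Again weights=None keeps the sketch from downloading anything; use weights='imagenet' for real features:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

base = VGG16(weights=None, include_top=True)

# Take the output of the first dense layer ('fc1') directly instead of
# calling model.layers.pop() twice on the full model.
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer("fc1").output)

img = np.random.rand(1, 224, 224, 3).astype("float32")
features = feature_extractor.predict(img, verbose=0)
print(features.shape, features.dtype)  # (1, 4096) float32
```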

So I’m not sure what you expected to see here, or what you want to do with these 4096 numbers, but the output you’re seeing is correct.


Hey @machinethink
Thanks for answering. The last one, where it outputs the floating-point numbers, is the correct one then. What I am trying to do is use these features in a KNN classifier, but I don’t see how to use these outputs right now because of the spaces and periods in these numbers. If you could clarify that, that would be great.

Regards Gertjan

The data is inside a numpy array, which is just an array of numbers. The only reason you see the spaces is because you save these numbers as a text file.

If you use bottleneck_features_train.tofile(...) then the data gets saved as raw 32-bit floating point numbers. (It will be a file of 16384 bytes, since there are 4096 numbers and each floating point number is 4 bytes.)
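For example (a sketch with a random stand-in array; the filename is made up):

```python
import numpy as np

# Stand-in for the fc1 activations: 4096 float32 values.
bottleneck_features_train = np.random.rand(4096).astype(np.float32)

# tofile() writes the raw binary data: 4096 values x 4 bytes = 16384 bytes.
bottleneck_features_train.tofile("features.bin")

# fromfile() reads it back, but you must specify the dtype yourself,
# because the file contains no header describing it.
restored = np.fromfile("features.bin", dtype=np.float32)
print(restored.shape)  # (4096,)
```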

Hey @machinethink
I used bottleneck_features_train.tofile('test.txt') and it does indeed save, but it looks very weird when I open it. It is also only 4 KB, and you said it would be 16 KB. Do you know what is going on here?

The output if I open it with Gedit (editor in Ubuntu) :

Well, it depends on which layer you save from. The classification layer has 1024 elements, so that would give you 1024 x 4 = 4096 bytes = 4 KB. Which layer did you get the bottleneck features from?

The reason it looks weird is because it’s binary data. Gedit is trying to show it to you as text, but that doesn’t make sense.

I’m not sure what you’re expecting to see?

Hey, I used include_top=True and then called model.layers.pop() twice, so I would get a 4096-dimensional vector.
I thought I would get 4096 numbers in a row, so I can use it in my KNN.

What format does the KNN expect the numbers in? As a numpy array?

I don’t know; I am trying to replicate a paper. Last week a group of researchers published a paper claiming an AI could detect whether you were gay. They used the 4096-dimensional layer of VGG16 and then applied logistic regression. A lot of other papers also use VGG16 features with a KNN, but I can’t seem to find any explanation online of how to do this. I figured it would just be a long row of numbers.

This is the KNN classifier I want to use:
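In case it helps, feeding the extracted features into scikit-learn’s KNN classifier could look like this (a sketch with random stand-in data; in practice each row of X_train would be the 4096-number vector extracted from one image):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-in features: one 4096-dimensional vector per training image.
X_train = np.random.rand(10, 4096).astype(np.float32)
y_train = np.array([0, 1] * 5)  # one class label per image

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)  # expects a 2-D array: (n_samples, n_features)

X_test = np.random.rand(2, 4096).astype(np.float32)
predictions = knn.predict(X_test)  # one class label per test image
print(predictions.shape)           # (2,)
```

The key point is that the classifier takes the numpy array directly; there is no need to go through a text file at all.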

I am working on the same pattern as yours for feature extraction. I have a text file in which I have stored the features. Can you please verify it by comparing it with your own feature vector, to check whether those are the right features?
If you can help, please give me feedback.