How do I get a Resnet50 model?

I think this is more of a Python programming problem…

Here’s what I did:

from resnet50 import *
rn = Resnet50()
rn.model()

And the error message is:

TypeError                                 Traceback (most recent call last)
<ipython-input-19-fc432c5e1b78> in <module>()
----> 1 rn.model()

TypeError: __call__() takes at least 2 arguments (1 given)

I checked the file but am still confused about which argument is missing. It seems like both size and include_top have been given default values, so they are not required. Is that correct? If so, then what is missing here?

Thank you!

I have not checked it myself, but it looks like rn.model is the Keras Model object (like with the VGG class from the course).

So you could do something like this (refer to lessons 9/10):

from resnet50 import *

rn = Resnet50()
model = rn.model                    # the underlying Keras Model
res_output = model(inp)             # inp: a Keras Input tensor, e.g. Input((224, 224, 3))
output = custom_model(res_output)   # custom_model: your own layers on top

Or you can directly fit/finetune/predict with the resnet class from Jeremy:

from resnet50 import *
rn = Resnet50()
rn.fit(trn, labels)  # e.g.; the original line here was garbled, exact call depends on the class's fit signature

Hah, it seems that the ResNet50 from keras.applications does not have the .model attribute, but the one provided by Jeremy does. Thank you very much @j.laute!

I haven’t really started part2 yet, so it’s time!

In this GitHub link, there are no specific notebooks for each individual lesson, as part 1 has. Am I missing something?

No, there are no lesson-specific notebooks anymore.

You can see the early-release videos of part 2, along with the wiki and discussion, here


Hi @j.laute, I am mimicking the imagenet_process.ipynb to get ResNet working for me, but now I find myself totally lost. Could you please kindly give me some advice?

I think the idea is pretty much the same as with VGG: we cut the model somewhere in the middle; the first half is the convolutional part, which is computationally heavy, and the second part is some dense layers, which take a lot of memory but compute fast. Here's what I have done so far:

I am confused about how to pass the images to the model and save the intermediate results from the first (convolutional) part…
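The pattern I have in mind looks roughly like this (a sketch only; `precompute_features`, `conv_model`, and the `(images, labels)` generator protocol are placeholders I made up, not names from the course code):

```python
import numpy as np

def precompute_features(conv_model, batches, n_batches):
    """Run the convolutional half over the data once, collecting its
    outputs so the dense half can later train on them directly."""
    feats, labels = [], []
    for _ in range(n_batches):
        imgs, lbls = next(batches)              # batches yields (images, labels)
        feats.append(conv_model.predict(imgs))  # intermediate conv features
        labels.append(lbls)
    return np.concatenate(feats), np.concatenate(labels)

# With a real model you would then save the features for reuse, e.g.
#   feats, labels = precompute_features(resnet_conv, batches, n_batches)
#   np.save('conv_feats.npy', feats)
```

Is that roughly the right approach?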

Hi @shushi2000 ,

I would do something like this:

This expects the Kaggle cats-and-dogs data in ./data/dogs-cats/train

As you can see, the performance is abysmal. I think this is due to the choice of the ResNet layer (maybe it doesn't contain enough information), or I made a mistake somewhere.

Just updated the file to also include a VGG model, hope that helps. It also shows that the data is loaded correctly (I had a few doubts when I saw the accuracy with ResNet).

The file is now here


Wow, this is fantastic! Thank you SOOO much @j.laute. I will spend some good hours studying this code.

Hello @j.laute, when mimicking your method I found the dimensions don't quite match, so I am still confused.

This is what I have done so far (in Keras 2 with TF as the backend, but I guess that does not matter):

import numpy as np
from keras.layers import Input, Lambda, Flatten, AveragePooling2D
from keras.models import Model
from keras.applications.resnet50 import ResNet50

rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1,1,3))
inp_resnet = Input((224,224,3))
preproc = Lambda(lambda x: (x - rn_mean)[:, :, :, ::-1])(inp_resnet)
resnet_model = ResNet50(include_top=False, input_tensor=preproc)
res5b_branch2a = resnet_model.get_layer('res5b_branch2a')
last_conv_layer = resnet_model.layers[resnet_model.layers.index(res5b_branch2a)-1].output
resnet_model_conv = Model(inp_resnet, Flatten()(AveragePooling2D((7,7))(last_conv_layer)))
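As a sanity check on the preprocessing step: the Lambda above just subtracts the ImageNet channel means and reverses RGB to BGR, which in plain NumPy (outside Keras) looks like this:

```python
import numpy as np

# ImageNet per-channel means, broadcastable over a (n, h, w, 3) batch
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1, 1, 3))

def preprocess(batch):
    # subtract the per-channel means, then reverse RGB -> BGR on the last axis
    return (batch - rn_mean)[:, :, :, ::-1]

x = np.full((1, 2, 2, 3), 100, dtype=np.float32)  # a dummy 2x2 "image"
print(preprocess(x).shape)  # (1, 2, 2, 3) -- shape is unchanged
```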

I think this is the same as what @jeremy mentioned in the video.

When I checked the summary of this resnet_model_conv, it says the output shape should be (None, 2048), so I am expecting that if I feed 200 images to this resnet_model_conv, the output should have a shape of (200, 2048). But the actual output has a shape of (800, 2048)…

So later on, when I feed this output into the fully_connected model, it turns out that I have 800 rows of input but only 200 labels (one per image), so the fc model does not work…

I am using only 200 images since I am just testing the method. I am really confused about which part is wrong. If you could kindly give me a hint, I would highly appreciate it!
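One possible cause worth checking, assuming the features came from predict_generator: in Keras 2 the `steps` argument counts batches, not samples, so passing the sample count (the Keras 1 convention) multiplies the output rows by the batch size. That would explain getting 800 rows from 200 images at a batch size of 4 (the batch size here is my guess from the 4x inflation):

```python
# Keras 2: predict_generator(generator, steps) runs `steps` *batches*,
# so the output has steps * batch_size rows.
batch_size = 4
n_images = 200

rows_if_steps_is_sample_count = n_images * batch_size       # 200 batches of 4 -> 800 rows
correct_steps = n_images // batch_size                      # 50 batches
rows_if_steps_is_batch_count = correct_steps * batch_size   # -> 200 rows

print(rows_if_steps_is_sample_count, rows_if_steps_is_batch_count)  # 800 200
```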