For me, the key was to wrap the model I had created in a def, because the caller expects a True/False argument saying whether or not your architecture should be pretrained. So I just had to do something like this:
import torch.nn as nn
from fastai.layers import res_block   # fastai v1's residual block helper (this is where res_block lives in my setup)

def superres(pretrained=False, **kwargs):
    """
    Creates an architecture for super resolution as defined in this paper: http://arxiv.org/abs/1603.08155
    Supporting material: https://cs.stanford.edu/people/jcjohns/papers/fast-style/fast-style-supp.pdf
    This is a modification of the x4 architecture, because it is designed to go from 25 px to 100 px
    instead of 72 px to 288 px as is done in the paper.
    """
    model = nn.Sequential(
        nn.Conv2d(3, 64, 5, padding=2),
        res_block(64),
        res_block(64),
        res_block(64),
        res_block(64),
        nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1),
        nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1),
        nn.Conv2d(64, 3, 5, padding=2)
    )
    if pretrained:
        assert pretrained == False, "Pretrained not currently available"  # leaving this structure in place because this is where the pretrained weights would be loaded
    return model
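As a quick sanity check, you can instantiate the wrapped model and confirm that the two stride-2 ConvTranspose2d layers really take a 25 px input up to 100 px. Something like this should work (assuming the imports and the superres definition above have already been run):

import torch

model = superres()
x = torch.rand(1, 3, 25, 25)      # one fake 25x25 RGB image
out = model(x)
print(out.shape)                  # expect torch.Size([1, 3, 100, 100])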
I found this by checking what the existing torchvision model constructors look like (for example, models.resnet18):
Signature: models.resnet18(pretrained=False, **kwargs)
Source:
def resnet18(pretrained=False, **kwargs):
    """Constructs a ResNet-18 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model
File:      ~/anaconda3/envs/fastai/lib/python3.7/site-packages/torchvision/models/resnet.py
Type:      function
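The nice thing about matching that signature is that anything written against the torchvision-style constructors can now call superres the same way it calls models.resnet18. As a rough sketch (build here is just an illustrative helper I made up, not a fastai or torchvision function):

from torchvision import models

def build(arch_fn, pretrained=False):
    # arch_fn is any callable following the (pretrained=False, **kwargs) convention
    return arch_fn(pretrained=pretrained)

sr_model = build(superres)           # the custom architecture defined above
rn_model = build(models.resnet18)    # a stock torchvision model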