from fastai.vision.all import *
from pprint import pprint
pprint(model_meta)
Here’s a snippet of the output from above:
<function resnet18 at 0x7fa1a2104710>: {
'cut': -2,
'split': <function _resnet_split at 0x7fa1a23e77a0>,
'stats': ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
},
<function resnet34 at 0x7fa1a21047a0>: {
'cut': -2,
'split': <function _resnet_split at 0x7fa1a23e77a0>,
'stats': ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
},
<function resnet50 at 0x7fa1a2104830>: {
'cut': -2,
'split': <function _resnet_split at 0x7fa1a23e77a0>,
'stats': ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
},
<function resnet101 at 0x7fa1a21048c0>: {
'cut': -2,
'split': <function _resnet_split at 0x7fa1a23e77a0>,
'stats': ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
},
<function resnet152 at 0x7fa1a2104950>: {
'cut': -2,
'split': <function _resnet_split at 0x7fa1a23e77a0>,
'stats': ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
},
The keys of model_meta are the functions you would pass as the arch argument of cnn_learner. Not all of them, but many, are pretrained models (mostly from torchvision) with built-in support for fastai-style splitting for discriminative learning rates.
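To make the cut value concrete, here is a minimal sketch using a plain Python list in place of a torch model's child modules. The layer names mirror a ResNet's top-level children, but they are just illustrative strings here, so the snippet runs without fastai or torch installed:

```python
# Stand-in for list(resnet18().children()) -- names are illustrative only.
resnet_children = ["conv1", "bn1", "relu", "maxpool",
                   "layer1", "layer2", "layer3", "layer4",
                   "avgpool", "fc"]

cut = -2  # the value model_meta records for the resnet family
body = resnet_children[:cut]  # keep everything before the pooling + fc head

print(body[-1])  # → layer4
```

In other words, cut=-2 says "drop the last two children" (the average pool and the fully connected classifier), leaving the convolutional body that fastai then attaches a new head to.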
Note that the split and cut values you see in model_meta are what cnn_learner calls internally. If you're using a model that fastai doesn't support natively, you'd pass a custom cut and/or split into cnn_learner yourself. Here is one such example.
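As a rough sketch of what a custom split function does, the snippet below uses plain Python stand-ins (FakeModel and its attributes are hypothetical; a real fastai splitter receives an nn.Module and returns lists of its parameters). The shape of the return value is the important part, one list per parameter group:

```python
def custom_splitter(model):
    """Split a model's parameters into two groups so discriminative
    learning rates can assign a lower LR to the body than to the head."""
    return [list(model.body), list(model.head)]

# Hypothetical stand-in: in fastai these would be the parameters of
# the pretrained body and the freshly initialized head.
class FakeModel:
    body = ("w_conv1", "w_layer1")
    head = ("w_fc",)

groups = custom_splitter(FakeModel())
print(len(groups))  # → 2
```

With two groups like this, fastai's fit methods can apply a learning-rate slice across them, training the pretrained body gently while updating the new head more aggressively.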