I haven't tried to get this to work myself, so no real answer on that, but I can try to explain why your approach won't work:
create_cnn takes a given architecture, cuts off the final layer(s) (the head) and attaches a custom fastai head (concat pooling, multiple fc layers instead of one). For this it uses the functions create_body and create_head.
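Roughly, that composition looks like this (a simplified sketch of the fastai v1 flow, not the literal create_cnn source; the *2 accounts for concat pooling doubling the feature count):

```python
from fastai.vision import *

body = create_body(models.resnet34, pretrained=True)  # backbone with the original head cut off
nf = num_features_model(body) * 2                     # *2 because AdaptiveConcatPool2d doubles features
head = create_head(nf, data.c)                        # fastai head: concat pooling + fc layers
model = nn.Sequential(body, head)                     # what create_cnn wraps in a Learner
```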
The create_body function actually disassembles the model and puts it back together as an nn.Sequential module. This assumes the base model has been coded in a certain way, and it works for the ResNets and some other archs. But it loses (some of?) the original model's .forward() logic and replaces it with the forward the Sequential module auto-creates. So if there is a lot of special stuff going on in the forwards, and/or the model is not coded the way fastai assumes (i.e. nn.ReLU as a layer, not F.relu() inside forward()), the model that comes out of create_cnn will be different from the one you put in and may simply not work anymore.
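A toy example (made up just for illustration, not inception itself) of how that forward logic gets lost when you rebuild a model from its children:

```python
import torch.nn as nn
import torch.nn.functional as F

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
    def forward(self, x):
        return F.relu(self.conv(x))  # the activation only lives inside forward()

flat = nn.Sequential(*Toy().children())  # the kind of rebuild create_body does
print(flat)  # only the Conv2d survives; the F.relu call is silently dropped
```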
So I think that if it is possible to get it to work with fastai, it will be by using the model “directly” and not through create_cnn. It might work to create a DataBunch separately and then use the Learner itself (learn = Learner(data, model, metrics=metrics)). (Not tested, just a pointer…)
You can load the model with a custom number of outputs like this:
from torchvision import models as torch_models
import torch.nn as nn

model = torch_models.inception_v3()  # pass pretrained=True if you want the ImageNet weights
nf = data.c  # number of output classes in your data
model.fc = nn.Linear(in_features=2048, out_features=nf)  # replace the final fc layer
Then take a look at the code of create_cnn to see how it creates the Learner.
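Untested, but a minimal sketch of that, assuming data is your DataBunch and model is the modified inception from above. The split point is a hypothetical choice mimicking what create_cnn does for its own models. Also note that in training mode torchvision's inception_v3 returns an extra auxiliary output unless you construct it with aux_logits=False, so you may have to deal with that in the loss.

```python
from fastai.vision import *

learn = Learner(data, model, metrics=accuracy)
learn.split(lambda m: (m.fc,))  # hypothetical: split backbone vs. new fc into layer groups
learn.freeze()                  # like create_cnn: train only the new head at first
learn.fit_one_cycle(1)
```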