My main questions are:
a. Is it correct to export learn.model, or should the trained model be saved in some other format?
b. If the size of the dummy_input is not correct, what is the right size for it?
Please note that, as specified in this post (Using a fast.ai model in production), we can’t export Fast.AI models to ONNX right now because of the AdaptiveMaxPool and AdaptiveAvgPool layers. This was an issue with PyTorch 0.3; I have not tested with PyTorch 0.4.
Is it then correct to say that PyTorch 0.4 also fails to export Fast.AI models to ONNX? The main question of this topic is the error I get at the very end when I use torch.onnx.export, and all of this is running on PyTorch 0.4.
My overall intention is to create a TensorRT representation of my Fast.AI models. Other than this puzzling ONNX conversion, is there any other solution you would suggest? To my understanding, the solutions you suggested earlier are:
don’t use a pre-trained Fast.AI model, but train and export your own one, like the LeNet example you posted.
and/or wait until the issue with these adaptive layers gets resolved.
I would add these:
Find another conversion path to represent a Fast.AI model in TensorRT. Any good suggestions on this?
or, replace AdaptiveMax/AvgPool with GlobalMaxPool. If this is a solution, how can I do it? By the way, there is no global-pool-like class in torch.nn.modules.pooling.
I found this post that probably can help you to solve your problem:
“Pytorch-onnx currently doesn’t support AdaptivePooling but fast.ai is using that for training on different input image sizes (a way to prevent overfitting). But if we only care about one size, let’s say 299, we have to replace the AdaptivePooling by supported Pooling layer with fixed size…”
I don’t think he is using the latest version of fastai (v1 / PyTorch 1.0), given the date of his blog.
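A minimal sketch of the replacement that quote describes, assuming you only care about one fixed input size. The helper below (a hypothetical name, not a fastai or PyTorch API) recursively swaps the adaptive pooling layers for fixed-size ones; kernel_size must equal the spatial size of the feature map reaching the pool (e.g. 7 for a ResNet fed 224x224 input), so the fixed pool reproduces the adaptive layer’s output for that one size.

```python
import torch
import torch.nn as nn

def replace_adaptive_pools(module: nn.Module, kernel_size: int) -> None:
    """Recursively swap Adaptive{Avg,Max}Pool2d for fixed-size pools.

    kernel_size must equal the spatial size of the feature map that
    reaches the pooling layer, so the fixed pool gives the same output
    as the adaptive one for that single input size.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.AdaptiveAvgPool2d):
            setattr(module, name, nn.AvgPool2d(kernel_size))
        elif isinstance(child, nn.AdaptiveMaxPool2d):
            setattr(module, name, nn.MaxPool2d(kernel_size))
        else:
            replace_adaptive_pools(child, kernel_size)

# Sanity check: on a 7x7 feature map the two layers agree.
pool = nn.Sequential(nn.AdaptiveAvgPool2d(1))
x = torch.randn(2, 8, 7, 7)
before = pool(x)
replace_adaptive_pools(pool, kernel_size=7)
after = pool(x)
```

Because the function recurses through child modules, it should also reach the AdaptiveAvgPool2d and AdaptiveMaxPool2d children inside fastai’s AdaptiveConcatPool2d head, though I have only verified the plain-layer case above.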
This is the best one by far. I tried using this code with ResNets and MobileNets, and it works great! The only change it makes to fastai is replacing fastai.layers.Flatten with torch.nn.Flatten. Credit to @davidpfahler
I’ve also gotten @davidpfahler’s method described in @rsomani95’s post to work. If you see the `cannot resolve operator 'Shape' with opsets: ai.onnx v9` error, it means you didn’t correctly rewrite the head of the model to replace Fast.ai’s custom Flatten layer with the PyTorch one. (See @davidpfahler’s notebook for details.)
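To illustrate that head rewrite without pulling in fastai, here is a sketch. FastaiFlatten below is a hypothetical stand-in for fastai’s custom layer; its x.view(x.size(0), -1) call is what emits the 'Shape' op the exporter can’t resolve, and swapping it for torch.nn.Flatten (available since PyTorch 1.2) avoids that.

```python
import torch.nn as nn

# Hypothetical stand-in for fastai's custom Flatten layer; its view()
# call with a runtime-dependent size is what produces the 'Shape' op.
class FastaiFlatten(nn.Module):
    def forward(self, x):
        return x.view(x.size(0), -1)

def swap_flatten(module: nn.Module) -> None:
    """Recursively replace the custom Flatten with torch.nn.Flatten."""
    for name, child in module.named_children():
        if isinstance(child, FastaiFlatten):
            setattr(module, name, nn.Flatten())
        else:
            swap_flatten(child)

# Example head resembling the tail of a fastai classifier.
head = nn.Sequential(FastaiFlatten(), nn.Linear(8, 2))
swap_flatten(head)
```

With the real library you would run this over learn.model (or rebuild the head by hand, as @davidpfahler’s notebook does) before calling torch.onnx.export.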