Hi, I would like to know the most efficient way to convert a PyTorch model to either Keras or TensorFlow for deployment purposes.
Take a look at the ONNX format: https://github.com/onnx/tutorials
Currently, TF support is experimental.
Beware that this is a fairly recent initiative (about 6 months old), so it may not be fully functional. For example, fast.ai uses some newer methods like AdaptivePool layers that, I think, are not yet supported by the ONNX export format. So I would not suggest you go this route for a production environment.
Is there any workaround for that? Like having a final model with weights, saving those weights, and then loading them up with TensorFlow after defining a similar architecture, like we do for Keras? PyTorch isn't very good when it comes to deployment. All I want to do is deploy my model as a web app.
If you are not expecting hundreds of concurrent requests, then I would say deploying a Flask app with PyTorch / fastai works just fine. IMO, whether you deploy Keras, TF, or PyTorch in your own web app, you will not see any difference in performance. Most of the time will be spent in your input processing, and the DL framework is not going to matter much there. For you to see real performance gains with TF, you will need to use TF Serving and embrace all the complexity that goes along with it.
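To make the Flask suggestion concrete, here is a minimal sketch of serving a PyTorch model from a Flask endpoint. The model, route name, and JSON shape are all placeholder assumptions; swap in your own trained model and preprocessing:

```python
# Minimal sketch of serving a PyTorch model from a Flask app.
# The model here is a stand-in; replace it with your trained model
# and add whatever input preprocessing your model needs.
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
model = torch.nn.Linear(4, 2)  # stand-in for your trained model
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"inputs": [[0.1, 0.2, 0.3, 0.4], ...]}
    x = torch.tensor(request.get_json()["inputs"], dtype=torch.float32)
    with torch.no_grad():
        preds = model(x).argmax(dim=1).tolist()
    return jsonify({"predictions": preds})

# To serve locally: app.run(host="0.0.0.0", port=5000)
```

The whole request path here is plain Python, which is why the framework choice matters less than the input processing around it.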
Does anybody know if it is possible to convert models used in this course into TensorFlow/Keras format? I know there are tools like Microsoft MMdnn, but I'm not sure if they can make the transformation I need.
For example, is it possible to somehow convert the resnext101_64 pretrained weights into Keras format?
Everything is possible but you may have to write some code to make it happen. There is ONNX as an interchange format. I haven’t tried it but it should be possible to convert your PyTorch model to ONNX, and the ONNX model to Keras. It’s also possible to do it by hand. The easiest thing to do is to write the model definition in Keras itself, then load the weights from the PyTorch model into the Keras model (you do need to transpose the weights when you do this).
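As a sketch of the hand-conversion described above (NumPy only, with made-up shapes): PyTorch and Keras lay out the same weights differently, so you transpose before calling the Keras layer's set_weights:

```python
import numpy as np

# PyTorch Conv2d weights have shape (out_channels, in_channels, kH, kW);
# Keras Conv2D expects (kH, kW, in_channels, out_channels).
pt_conv = np.random.randn(8, 3, 3, 3).astype(np.float32)
keras_conv = pt_conv.transpose(2, 3, 1, 0)
assert keras_conv.shape == (3, 3, 3, 8)

# PyTorch Linear weights are (out_features, in_features);
# Keras Dense expects (in_features, out_features), so a plain transpose.
pt_dense = np.random.randn(10, 64).astype(np.float32)
keras_dense = pt_dense.T
assert keras_dense.shape == (64, 10)
```

After transposing, you would pass the arrays to the matching Keras layer via `layer.set_weights([...])`.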
Another post on same topic - Converting A PyTorch Model to Tensorflow or Keras for Production
@machinethink Right, at the end of the day, these are just tensors (NumPy arrays). But if we pick the DIY approach, do you know if there is any high-level overview of the internal model representation formats of Keras/PyTorch? Or do you think the only way to learn is to manually investigate the weight variables?
@ramesh I've seen this question, but decided to ask again because of the phrase "efficient way to convert into Keras". I would be glad to find even an inefficient way to do it =) Talking about ONNX: do you have experience with this tool/framework? Could you tell me, does it allow picking an arbitrary model (or at least one among some specific models) and converting it from one format into another? Something like this (from a logical point of view, of course):
```python
import onnx
from keras.models import load_model

pytorch_model = '/path/to/pytorch/model'
keras_output = '/path/to/converted/keras/model.hdf5'
onnx.convert(pytorch_model, keras_output)
model = load_model(keras_output)
preds = model.predict(x)
```
Though as far as I can see from the description, they are at the very beginning of developing this open-format concept.
IMO, if there are similar posts, we should add to that thread and not create a new thread.
It’s a two step process:
- Export your PyTorch model to ONNX model format
- Import ONNX model to TF
The GitHub repo https://github.com/onnx/tutorials has a bunch of tutorials for exporting and importing the ONNX format with most DL frameworks (PyTorch, MXNet, CNTK, TF, ...).
I have played around with it and it works. But there are some gotchas: it needs to be a static graph. If you have a dynamic graph in the forward method, it will not work (yet). Also, some layers like AdaptivePool are not yet supported; they need to be changed to a regular MaxPool or AvgPool.
I manually convert between different packages on a regular basis. Different packages store their weights in different ways (for example, Keras stores 4 separate weight arrays for a batch norm layer whereas another package may store this as a single tensor with 4 elements).
The main thing to watch out for is different padding strategies. TF does padding in a different way than PyTorch / Caffe etc. So the predictions can be slightly different after converting.
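To make the batch-norm storage point concrete, here is a hedged sketch (NumPy only, made-up channel count) of collecting PyTorch's four BatchNorm2d tensors in the order that Keras's BatchNormalization expects for set_weights:

```python
import numpy as np

channels = 16  # made-up channel count for illustration

# PyTorch BatchNorm2d keeps four separate per-channel tensors:
gamma = np.ones(channels, dtype=np.float32)   # bn.weight
beta = np.zeros(channels, dtype=np.float32)   # bn.bias
mean = np.zeros(channels, dtype=np.float32)   # bn.running_mean
var = np.ones(channels, dtype=np.float32)     # bn.running_var

# Keras BatchNormalization.set_weights expects this order:
# [gamma, beta, moving_mean, moving_variance]
keras_bn_weights = [gamma, beta, mean, var]
assert [w.shape for w in keras_bn_weights] == [(channels,)] * 4
```

A package that stores batch norm as one tensor would instead need slicing into these four pieces before the same call.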
@ramesh Sure, I guess this post could be deleted then, or probably somehow merged into the earlier one if possible.
@machinethink Ok, understood, thank you for response.
From what I have experienced so far, productionizing PyTorch is not mature yet.
Whenever you use a custom model (not imported from the model zoo, which for me is always), onnx has trouble tracing the non-standard layers. In my case nn.ELU causes problems, and that is even part of torch.nn.Modules. Known issue; I have not found a workaround yet.
Using onnx directly: you need to import PyTorch to build the model, but you also need to import onnx, which leads to a segmentation fault. Known issues: Issue1, Issue2. Btw, installing onnx using --no-binary as suggested did not help me.
Using MMdnn: these are command line tools, so you need to write the model to disk and use mmconvert. The tools have trouble finding the custom code I used for building the model, and I do not know how to resolve those dependencies.
I really like PyTorch, but being unable to export in an easy and transparent way is an issue for me. In keras you just save and load the model in one line and you are done.
nn.ELU: I found this bit in the ONNX tutorial on extending operator support for PyTorch, but I haven't tried it yet.
Hi all, I’m trying to converting the Unet model in fastai to other framework like ONNX. I’m new to this and just try
torch.onnx.export(model2, dummy_input, 'unet_fastai.onnx')
However this show up
RuntimeError: ONNX symbolic expected a constant value in the trace
I am continuing to solve the problem but very appreciate if someone can help me showing the way to deal with it.
Thank you in advance
Has anybody succeeded in transferring a fastai model to TensorFlow or ONNX so far?
Same here, with a resnet-based multilabel classifier.
I have just tested with a resnet model and it works. I updated the fastai library to 1.0.50 (and PyTorch as well), and it seems to solve the problem.
Did you first create the model using PyTorch, load the weights from the model trained with fastai, and then export? Or did you export the model from the learner directly?
From the learner directly
Thanks Dhoa! I finally made it work. I had a problem when updating fastai: load_learner threw an error because I had exported the learner in a previous version of fastai, where a class was called ImageItemList, whereas in the new version it is called ImageList. So I changed the name of ImageList to ImageItemList everywhere, and then I could load the learner successfully and export the model.