Loading a UNET model into C++ using the new TorchScript feature of PyTorch 1.0

#1

I am trying the new feature from PyTorch (dev version) which enables loading a PyTorch model in C++ without any Python dependencies. I am using the tracing method:

import torch
import torchvision

# An instance of your model -- here, a UNet from fastai, which registers
# forward hooks for its cross connections (placeholder: e.g. learn.model
# from a fastai Learner).
model = learn.model

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)

However, I got a value error:

ValueError: Modules that have hooks assigned can't be compiled

A brief look at the error trace:

   1118         if orig._backward_hooks or orig._forward_hooks or orig._forward_pre_hooks:
-> 1119             raise ValueError("Modules that have hooks assigned can't be compiled")
   1120 
   1121         for name, submodule in orig._modules.items():

I am using it for the UNet model from fastai. I was wondering if anyone else has tried it and would like to share their experience if it worked. Also, does anyone know if this new feature comes with a limitation of only working for very simple models?

4 Likes

(Jeremy Howard (Admin)) #2

I’m not sure about this - I’ve asked on Twitter; hopefully a PyTorch guru will be able to help here. The issue is that Unet relies on a forward hook for the cross connections, and hooks aren’t currently supported by that new PyTorch feature.
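
For readers curious what such a hook looks like, here is a minimal self-contained sketch (my own illustration, not fastai’s actual implementation) of a forward hook storing an encoder activation for later use as a cross connection -- it is exactly this kind of stored side-effect state that the tracer refuses to compile:

import torch
import torch.nn as nn

# Minimal sketch, not fastai's code: a forward hook that captures an
# encoder activation so a decoder could later concatenate it as a
# cross connection.
class Hook:
    def __init__(self, module):
        self.stored = None
        module.register_forward_hook(self.capture)

    def capture(self, module, inputs, output):
        self.stored = output

encoder_layer = nn.Conv2d(3, 16, 3, padding=1)
hook = Hook(encoder_layer)
_ = encoder_layer(torch.rand(1, 3, 64, 64))
print(hook.stored.shape)  # torch.Size([1, 16, 64, 64])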

3 Likes

(Even Oldridge) #3

Will that be an issue for the library? Isn’t a lot of the functionality of v1 built around hooks?

2 Likes

#4

I was wondering if we could try this feature with other fastai models as well. It’s a really cool feature, so it would be nice to find a workaround for it.

1 Like

(Katharina) #5

[quote=“shbkan, post:1, topic:23810”]
ValueError: Modules that have hooks assigned can't be compiled
[/quote]
Hey,
has anyone already tried to use torch.jit.trace() or something similar on a fastai model?
I am trying to wrap my fastai ResNet. I am using the new fast.ai version 1, but I am getting some errors. So if someone was already successful, I would like to know what I have to change to make it work.

0 Likes

(Katharina) #6

I just used learn.model.cpu() to convert it, and at least I did not get an error… I will try to make it work in C++ now.
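
For anyone following along, here is a hedged sketch of that kind of workflow, using a torchvision ResNet as a stand-in for learn.model (the model attribute of a fastai v1 Learner, which the post does not show):

import torch
import torchvision

# Sketch, assuming a plain ResNet classifier: move the model to the CPU,
# switch to eval mode, trace it with a dummy input, then serialize the
# resulting ScriptModule so it can be loaded from C++ with torch::jit::load.
model = torchvision.models.resnet18(pretrained=True).cpu().eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("resnet_traced.pt")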

1 Like

#7

Any update on this issue if someone else has tried it?

0 Likes

(Bob) #8

Here is a snippet of text I found in the following book that seems to provide the answer:

PyTorch Deep Learning Hands-On
By Sherin Thomas, Sudhanshu Passi
April 2019

PyTorch allows you to make a TorchScript IR through two methods. The easiest is by tracing, just like ONNX. You can pass the model (even a function) to torch.jit.trace with a dummy input. PyTorch runs the dummy input through the model/function and traces the operations while it runs the input.

The traced functions (PyTorch operations) then can be converted to the optimized IR, which is also called a static single assignment IR. Like an ONNX graph, instructions in this graph also have primitive operators that A TENsor library (ATen, the backend of PyTorch) would understand.

This is really easy but comes with a cost. Tracing-based inference has the basic problem ONNX had: it can’t handle the model structure changes that are dependent on the data, that is, an if/else condition check or a loop (sequence data). For handling such cases, PyTorch introduced scripting mode.

Scripting mode can be enabled by using the torch.jit.script decorator for normal functions and torch.jit.script_method for methods on the PyTorch model. With this decorator, the content inside a function/method will be directly converted to TorchScript. Another important thing to remember while using torch.jit.script_method for model classes is about the parent class. Normally, we inherit from torch.nn.Module, but for making TorchScript, we inherit from torch.jit.ScriptModule. This helps PyTorch to avoid using pure Python methods, which can’t be converted to TorchScript. Right now, TorchScript doesn’t support all Python features, but it has all the necessary features to support data-dependent tensor operations.
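
To make the contrast concrete, here is a minimal sketch (my own, not from the book) using the PyTorch 1.0-era scripting API: the module inherits from torch.jit.ScriptModule and the method is decorated with torch.jit.script_method, so the data-dependent branch survives compilation, whereas tracing would only have recorded the branch taken by the dummy input:

import torch

# Minimal scripting-mode sketch (PyTorch 1.0-era API): the if/else below
# is compiled into the graph rather than frozen to one branch.
class Gate(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        if bool(x.sum() > 0):
            return x * 2
        return -x

gate = Gate()
gate.save("gate.pt")  # TorchScript archive, loadable from C++ via torch::jit::load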

0 Likes

(Jeremy COCHOY) #9

Just to let you know, I managed to compile and run a fastai DynamicUnet using torch.jit.
The model uses hooks, but it seems the behavior is deterministic and induced by the shape of the dummy input. So I just commented out PyTorch’s check to force the algorithm to go on, and obtained a nice functional traced module.

It’s the line raise ValueError("Modules that have hooks assigned can't be compiled") that should be removed.

(The link to my SO question, just in case: https://stackoverflow.com/questions/56242857/how-can-i-force-torch-jit-trace-to-compule-my-module-by-ignoring-hooks )

Well, maybe it’s dirty, but at least you can compile it and run it in production :slight_smile:
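
For readers who would rather not edit the installed PyTorch source, here is a hedged, self-contained alternative sketch (an assumption on my part, version-dependent, and not necessarily equivalent to the workaround above): tracing a wrapper function that calls the model, instead of tracing the module itself, may bypass the module-conversion step where the hook check lives, while the hooks still fire during tracing and get baked into the graph:

import torch
import torch.nn as nn

# Toy module with a forward hook, standing in for DynamicUnet.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model[0].register_forward_hook(lambda m, i, o: None)

example = torch.rand(1, 3, 224, 224)

# torch.jit.trace(model, example)  # raises: Modules that have hooks ...

def run(x):
    return model(x)

traced_fn = torch.jit.trace(run, example)  # may succeed, depending on the PyTorch version
out = traced_fn(example)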

1 Like