Tracing an AWD_LSTM model for Android

I am attempting to use a new NLP model within the PyTorch Android demo app (Demo App Git); however, I am struggling to serialize the model so that it works with Android.

The demonstration given by PyTorch for a ResNet model is as follows:

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
model.eval()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("app/src/main/assets/model.pt")

However, I am not sure what to use for the ‘example’ input with my NLP model.

The model that I am using is from the IMDB tutorial, and the Python is linked here: model
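For reference, here is a minimal sketch of what a trace input for a text model usually looks like: a batch of integer token IDs rather than the float image tensor in the ResNet example. The vocabulary size and sequence length here are placeholders, not values from the tutorial:

```python
import torch

# Hypothetical sizes -- the real vocabulary size and sequence length
# come from your own tokenizer and data.
vocab_size = 10000
batch_size, seq_len = 1, 5

# Text models typically expect integer token IDs (a LongTensor),
# not the float tensor used in the ResNet example.
example = torch.randint(0, vocab_size, (batch_size, seq_len), dtype=torch.long)
```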


A bit of an update. Looking at the source code of the Fastai functions, I worked out the shape of the sample input and managed to move a bit further along.

First I tried example = ((torch.randint(1, 10, (5,)), torch.tensor([0])),)

This allowed the tracer to get into the forward function. (The reason I had to use a tuple with an empty slot is that the tracer seemed to unpack the tuple before passing it to the forward function, which was itself expecting a tuple.)
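That unpacking behaviour is easy to demonstrate in isolation. A small sketch (the module here is made up purely for illustration): torch.jit.trace treats the example as a tuple of positional arguments, so a model whose forward itself takes a single tuple needs an extra layer of wrapping.

```python
import torch

class TakesTwo(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# The example tuple is unpacked into forward(x, y) -- it is NOT passed
# through as a single tuple argument.
traced = torch.jit.trace(TakesTwo(), (torch.ones(2), torch.ones(2)))
```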

However, this left me with the following error:
/opt/conda/envs/fastai/lib/python3.7/site-packages/fastai/text/learner.py in forward(self, input)
259
260 def forward(self, input:LongTensor)->Tuple[List[Tensor],List[Tensor],Tensor]:
--> 261 bs,sl = input.size()
262 self.reset()
263 raw_outputs,outputs,masks = [],[],[]

AttributeError: 'tuple' object has no attribute 'size'

My assumption was that it wanted the input to actually be a tensor, so that it could call input.size()

I then tried a few different options and finally got the following to be accepted by both the PyTorch tracer and the forward function:

example = (torch.tensor([[1,2,3,4,5], [1,2,3,4,5]],device='cuda'),)

However, it spat out the following error:

RuntimeError: Only tensors or tuples of tensors can be output from traced functions (getOutput at /opt/conda/conda-bld/pytorch_1579022060824/work/torch/csrc/jit/tracer.cpp:212)

I take this to mean that PyTorch can’t properly handle the output of this function, and I don’t really know whether I can fix that. Therefore I am going to try a different model altogether and assume the fastai NLP model can’t be traced at the moment.
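One idea I haven’t actually tried: wrap the model so it returns a single tensor. This is only a sketch based on the forward signature in the traceback above (two lists of tensors plus a tensor); the wrapper name and the choice of which tensor to keep are my own assumptions:

```python
import torch

class TracingWrapper(torch.nn.Module):
    """Hypothetical wrapper: the tracer only accepts tensors or tuples
    of tensors as outputs, so return just the final output tensor
    instead of the (List[Tensor], List[Tensor], Tensor) the model
    produces."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input):
        raw_outputs, outputs, mask = self.model(input)
        # Keep only the last layer's output tensor; the lists are what
        # the tracer rejects.
        return outputs[-1]
```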

Unless anyone has any other ideas?

All fastai models are just PyTorch; fastai streamlines training them, among other things. What you should do is ensure your input is what the model is actually expecting.

I.e.: follow the tutorial and grab a raw batch of data to pass in (via the .one_batch() method).
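A sketch of that workflow, assuming the fastai v1 API with a trained learner named learn (untested here, the function name is my own):

```python
import torch

def trace_from_real_batch(learn, out_path="model.pt"):
    """Sketch (fastai v1 API assumed): use a real batch from
    one_batch() as the tracing example, so shapes, dtypes, and device
    match exactly what the model saw during training."""
    learn.model.eval()
    xb, yb = learn.data.one_batch()  # xb is the input batch of token IDs
    traced = torch.jit.trace(learn.model, xb)
    traced.save(out_path)
    return traced
```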

I’m aware they are just PyTorch. To begin with, I started by grabbing the raw data via .one_batch() and passing it in, but it didn’t work, hence I started trying the above.

(As an aside, I’m relatively new to this, so it did take me a little while to work out how to do what you described above, haha.)