I got this working for images using the deployment walkthrough @matt.mcclean describes, and it worked like a charm! However, I ran into a problem when trying to do the exact same thing with a tabular learner.
trace_input = torch.ones(1,3,224,224).cuda()
jit_model = torch.jit.trace(learn.model.float(), trace_input)
My tabular learner takes just three float values as input and returns one float value.
So I tried changing this line:
trace_input = torch.ones(1,3,224,224).cuda()
to this:
trace_input = torch.ones(1,3).cuda()
and got this back:
TypeError Traceback (most recent call last)
<ipython-input-27-cb886bac3aea> in <module>()
1 trace_input = torch.ones(1,3).cuda()
----> 2 jit_model = torch.jit.trace(learn.model.float(), trace_input)
3 model_file='resnet50_view_classification_83_classes_jit.pth'
4 output_path = str(path/f'models/{model_file}')
5 torch.jit.save(jit_model, output_path)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, _force_outplace)
634 var_lookup_fn = _create_interpreter_name_lookup_fn(0)
635 module._create_method_from_trace('forward', func, example_inputs,
--> 636 var_lookup_fn, _force_outplace)
637
638 # Check the trace against new traces created from user-specified inputs
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
485 hook(self, input)
486 if torch._C._get_tracing_state():
--> 487 result = self._slow_forward(*input, **kwargs)
488 else:
489 result = self.forward(*input, **kwargs)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs)
475 tracing_state._traced_module_stack.append(self)
476 try:
--> 477 result = self.forward(*input, **kwargs)
478 finally:
479 tracing_state.pop_scope()
TypeError: forward() missing 1 required positional argument: 'x_cont'
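My guess from the error is that the tabular model's forward() takes two tensors, x_cat for the categorical columns and x_cont for the continuous ones, so trace probably needs a tuple of example inputs rather than a single tensor. Here's a minimal sketch of what I think the fix looks like, using a made-up toy module in place of fastai's actual tabular model (ToyTabularModel and its layer sizes are my own placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a tabular model whose forward() takes two
# arguments: categorical codes (x_cat) and continuous values (x_cont).
class ToyTabularModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(3, 1)  # three continuous inputs -> one output

    def forward(self, x_cat, x_cont):
        # A real tabular model would embed x_cat; this toy ignores it.
        return self.lin(x_cont)

model = ToyTabularModel().eval()

# Pass a tuple of example inputs, one per forward() argument,
# instead of a single tensor.
example_cat = torch.zeros(1, 0, dtype=torch.long)  # no categorical columns
example_cont = torch.ones(1, 3)                    # three float features
jit_model = torch.jit.trace(model, (example_cat, example_cont))
```

If that's right, the traced model would then be called as jit_model(x_cat, x_cont) on the Lambda side too. Would love confirmation from someone who has done this with a tabular learner.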
I hope to share all the cool stuff I was able to do with medical data and fastai models deployed on AWS Lambda soon!
Thanks,
Bob