Hi,
I trained a ResNet-18 with 6 classes and it works really well so far.
Now I would like to export it so I can use it on an NVIDIA Jetson.
I don’t want to install fastai on the Jetson, since it is quite complex to set up, as I found out.
So I exported the model to ONNX with torch.onnx.export().
Then I tried to convert it to a TensorRT engine directly on the Jetson with onnx2trt, but I got the following error:
Input filename: opset11.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
Parsing model
[2020-10-20 09:49:21 WARNING] [TRT]/home/nvidia/git/onnx-tensorrt/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2020-10-20 09:49:21 ERROR] (Unnamed Layer* 76) [Shuffle]: at most one dimension may be inferred
While parsing node number 75 [BatchNormalization -> "210"]:
ERROR: /home/nvidia/git/onnx-tensorrt/onnx2trt_utils.cpp:1498 In function scaleHelper:
[8] Assertion failed: dims.nbDims == 4 || dims.nbDims == 5
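For completeness, the conversion step was essentially the standard onnx2trt invocation from the onnx-tensorrt repo (output file name is a placeholder):

```shell
# Parse the ONNX file and serialize a TensorRT engine on the Jetson
onnx2trt opset11.onnx -o resnet18.trt
```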
Do you have any suggestions?
Thanks!