After trying inference with many models in PyTorch, I noticed that it is slow at inference time compared to TensorFlow, for example. I always thought PyTorch was faster, so that was a shock for me. I barely get 0.1 s per image with an SSD with a MobileNet backbone on a GTX 950, while I managed to get better results with a Faster R-CNN with a ResNet-101 backbone on a much weaker GPU using the TensorFlow Object Detection (TFOD) API. Why is PyTorch so much slower, and is there a way to make inference faster?
I am also facing this problem. With ResNet-50 it takes around 0.6 s to predict one image on an RTX 3090, even with torch.backends.cudnn.enabled = False, inside torch.no_grad(), and with the model in model.eval() mode. Are there any more tips for using torch to perform inference?
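For reference, a minimal inference setup might look like the sketch below. The tiny nn.Sequential model is a stand-in for your ResNet-50 (e.g. torchvision.models.resnet50); the point is the eval/inference_mode/cudnn pattern, not the architecture. Note that disabling cuDNN, as in the post above, usually makes inference slower, not faster.

```python
import torch
import torch.nn as nn

# Stand-in model; replace with your real network (e.g. a ResNet-50)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()  # disable dropout, use running batch-norm statistics

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Let cuDNN autotune conv algorithms; helps when input shapes are fixed
torch.backends.cudnn.benchmark = True

x = torch.randn(1, 3, 224, 224, device=device)

# inference_mode() is slightly cheaper than no_grad(): it also skips
# autograd's view and version-counter bookkeeping
with torch.inference_mode():
    out = model(x)

print(out.shape)
```

With a fixed input resolution, cudnn.benchmark pays off after the first batch; with varying shapes it can hurt, since each new shape triggers a fresh algorithm search.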
The first model prediction takes extra time (CUDA context creation, cuDNN algorithm selection, memory-pool allocation). A warmup is preferred before estimating the model's inference time.