Just in case anyone missed it.
The DNN inference module in OpenCV 4.2.0 has been upgraded to run on Nvidia GPUs. Since fastai PyTorch models can be exported to ONNX, they can now be imported directly into OpenCV and run on the GPU. I ran a quick test in this notebook, exporting an xresnet50_deep network trained on the Pets dataset. The results look comparable, and inference for a 224x224 image on a laptop RTX 2080 takes ~5 ms in Python.
I know this will most likely only be of interest to C++ developers writing applications that perform local inference, but I am guessing there must be a few on here.