PyTorch on edge device for RNN inference in production?

My current VGG-like CNN model is deployed for inference on an edge device and runs on CPU. The performance is satisfactory, but the model is a bit of a memory hog (there's a spike during TF graph instantiation), and I'm also trying to improve the accuracy…
To that end I'm exploring an RNN architecture (for an OCR-type app), and switching to PyTorch too.
But here are some immediate concerns I have:

  1. Is PyTorch as performant as TF on CPU-only edge devices?
  2. Are RNNs slower than CNNs, given that an RNN implies a sliding window of sorts, while a CNN segments/classifies the objects in one go?

Thoughts?


The answer to both is “it depends”; you'd have to try it out for your particular situation. In general, PyTorch should work well on CPU. RNNs can be slower, but there are alternatives like the QRNN that address that problem.
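If you want a concrete number, a quick benchmark is easy to put together. Here's a minimal sketch for timing CPU-only LSTM inference in PyTorch; the layer sizes, sequence length, and thread count below are placeholders, so swap in values that match your OCR model and device:

```python
import time
import torch
import torch.nn as nn

torch.set_num_threads(1)  # mimic a constrained edge CPU; tune for your device

# Placeholder dimensions -- replace with your model's actual sizes
model = nn.LSTM(input_size=64, hidden_size=128, num_layers=2, batch_first=True)
model.eval()

x = torch.randn(1, 100, 64)  # (batch, sequence length, features)

with torch.no_grad():        # inference only: skip autograd bookkeeping
    model(x)                 # warm-up run
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    elapsed = (time.perf_counter() - start) / 100

print(f"avg forward pass: {elapsed * 1000:.2f} ms")
```

A number from your actual edge hardware will tell you far more than any figure measured on a dev machine.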


Right on. Every time I think of posting a question, my immediate self-answer is "why don't you go and try it yourself" - but as Rachel and Jeremy say (quoting from memory), if you can't find the answer in 30 minutes, ask it on the forum… and bingo - I was not aware of QRNN.
