My current VGG-like CNN is deployed for inference on an edge device and runs on CPU. The performance is satisfactory, but the model is a bit of a memory hog (there's a spike during TF graph instantiation), and I'm also trying to improve the accuracy…
To this end I'm exploring RNN architectures (for an OCR-type app), and switching to PyTorch too.
But here are some immediate concerns I have:
Is PyTorch as performant as TF on edge devices (CPU only)?
Is an RNN slower than a CNN, since an RNN works through an implicit sliding window of sorts, while a CNN segments/classifies the objects in one go?
The answer to both is "it depends" - you'd have to try it out for your particular situation. In general, PyTorch should work well on CPU. RNNs can be slower, but there are architectures like QRNN that address that problem.
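The "RNNs can be slower" point comes down to a data dependency: each hidden state needs the previous one, so timesteps run one after another, while a convolution's outputs depend only on local input windows and can all be computed at once (which is roughly the dependency QRNN-style architectures relax). A toy pure-Python sketch of the difference (illustrative scalar math only, not real framework code):

```python
# Toy illustration: why a vanilla RNN is inherently sequential
# while a 1-D convolution can process all positions independently.

def rnn_pass(xs, w_x=0.5, w_h=0.8):
    """Each hidden state depends on the previous one,
    so timesteps must be computed one after another."""
    h, hs = 0.0, []
    for x in xs:                # sequential: h[t] needs h[t-1]
        h = w_x * x + w_h * h
        hs.append(h)
    return hs

def conv1d_pass(xs, kernel=(0.25, 0.5, 0.25)):
    """Each output depends only on a local window of the input,
    so all positions could be computed in parallel ("in one go")."""
    k = len(kernel)
    return [
        sum(kernel[j] * xs[i + j] for j in range(k))
        for i in range(len(xs) - k + 1)   # independent per position
    ]

xs = [1.0, 2.0, 3.0, 4.0]
print(rnn_pass(xs))     # chain of hidden states, one per timestep
print(conv1d_pass(xs))  # window outputs, no cross-output dependency
```

Real frameworks vectorize both, of course, but the RNN's step-to-step dependency is what limits how much of the sequence dimension they can parallelize.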
Right on - every time I'm thinking of posting a question, my immediate self-answer is "why don't you go and try it yourself?" But as Rachel and Jeremy say (quoting from memory): if you can't find the answer in 30 minutes, ask it on the forum… and bingo - I was not aware of QRNN.