I’m curious whether anyone has experience with the NVIDIA Jetson platform. I’d eventually like to build some sort of handheld device capable of running (but not training) CNN models in near-ish realtime. Is there any sense of whether the Jetson platform would be able to do this?
In theory the answer is yes - NVIDIA folks have been demoing the TX1's inference capabilities for edge-type devices (e.g. surveillance video cameras) for a while. I for one saw the object-segmentation demo last year at the GPU conference and talked to the guys there. Have you signed up for their dev program? I think they offer an SDK and simulators which you can try before you commit. On the other hand, I stopped following the inference side closely, as we currently plan to deploy our recognition software at the edge on Windows 10 (long story) with Keras.
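One quick sanity check you can do before the hardware even arrives: time a single convolution layer in plain NumPy to get a feel for the arithmetic involved in CNN inference. This is only a rough illustrative sketch - the layer sizes here (a 3x3 conv with 16 filters on a 224x224 RGB frame) are assumptions I picked for illustration, not anything tied to a specific model or to actual Jetson performance, and a naive CPU implementation like this is orders of magnitude slower than what cuDNN-backed inference on the TX1's GPU would do.

```python
import time
import numpy as np

def conv2d(x, w):
    """Naive valid-mode 2D convolution via im2col.

    x: input image, shape (H, W, C_in)
    w: filter bank, shape (kH, kW, C_in, C_out)
    Returns: feature map, shape (H-kH+1, W-kW+1, C_out)
    """
    kH, kW, C_in, C_out = w.shape
    H, W, _ = x.shape
    oH, oW = H - kH + 1, W - kW + 1
    # Unroll every kH x kW patch into one row, then do a single matmul.
    cols = np.empty((oH * oW, kH * kW * C_in), dtype=np.float32)
    idx = 0
    for i in range(oH):
        for j in range(oW):
            cols[idx] = x[i:i + kH, j:j + kW, :].ravel()
            idx += 1
    out = cols @ w.reshape(-1, C_out)
    return out.reshape(oH, oW, C_out)

# Hypothetical workload: one small conv layer on a camera-sized frame.
x = np.random.rand(224, 224, 3).astype(np.float32)
w = np.random.rand(3, 3, 3, 16).astype(np.float32)

t0 = time.perf_counter()
y = conv2d(x, w)
elapsed = time.perf_counter() - t0
print(f"output shape {y.shape}, one layer took {elapsed * 1000:.1f} ms on CPU")
```

Multiplying a per-layer time like this by the depth of whatever network you have in mind gives a very crude lower bound on the per-frame budget, which is a useful reality check when deciding whether "near-ish realtime" is plausible on a given device.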
I have signed up for the dev program, yes. I bit the bullet this morning and bought one of the TX1 kits - they were on sale for $199, which is within my budget for testing out a new platform.
That’s so cool! I’ve been toying with the idea of a TX1 ROS-based robot to detect deer and scare them away from our backyard. Good luck and keep us posted!