As a project, I recently created a classifier that can recognize my dog and distinguish him from other dogs (including ones of his own breed) quite well.
I wish to implement this in a smart dog door that will only open when he is detected on a camera feed.
I have the classifier finished and hosted on Render, but I'm wondering how I can go about bridging the two, or whether that is even the right approach?
I want to use Python to take a frame from the camera, pass it to the model, and await the reply. Upon detecting my dog, I want to perform an action (unlock the dog flap). If the model can be run locally, that is even better.
Please excuse my lack of knowledge; I am very new to this, and this is my first post, so I'm still learning the ropes.
Any help on this matter would be greatly appreciated!
This sounds like a Raspberry Pi project! I'm not sure how fast inference would run on the Pi itself, but it could handle all the other steps if you run the model in the cloud (on Render?):
1. Detect motion (through a motion sensor or the camera)
2. Capture a frame from the camera
3. Send to classifier, wait for response
4. Activate a servo to unlock the doggy flap if it's your pooch
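Roughly, those steps could be wired together like this. This is only a sketch: `decide`'s `"my_dog"` label and threshold are assumptions about what your classifier returns, and the camera, model call, and servo driver are passed in as plain functions so the same loop works whether the model runs locally or in the cloud.

```python
import time

UNLOCK_SECONDS = 10  # assumption: how long the flap stays unlocked per visit

def decide(label, confidence, threshold=0.8):
    """Step 4's decision: unlock only for a confident match on your own dog.
    The label "my_dog" is a placeholder for whatever class name your model emits."""
    return label == "my_dog" and confidence >= threshold

def run_loop(capture, classify, unlock, lock):
    """Glue for steps 1-4. capture() returns a frame, classify(frame) returns
    a (label, confidence) pair, unlock()/lock() drive the servo."""
    while True:
        frame = capture()                    # steps 1-2: grab a frame
        label, confidence = classify(frame)  # step 3: send and wait for the reply
        if decide(label, confidence):        # step 4: open up for your pooch
            unlock()
            time.sleep(UNLOCK_SECONDS)
            lock()
```

On the Pi, `capture` might wrap `picamera2` or OpenCV, and `unlock`/`lock` a GPIO servo library; keeping them injected makes the loop easy to test without hardware.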
Yeah, I was thinking it could be done on an RPi!
The speed at which it runs isn't a huge problem, and I basically have everything set up other than your "send to classifier, wait for response" step.
This is the only step I am unsure about. How might I use Python to upload the image to the classifier and wait for the response?
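A common way is an HTTP POST with the `requests` library, which blocks until the server replies, exactly the "wait for the response" behaviour you described. A minimal sketch, assuming your Render service exposes a `/predict` route that accepts a multipart file upload and returns JSON (the URL, field name, and reply shape are all assumptions; adjust them to match your actual API):

```python
import requests

# Assumed endpoint -- replace with your real Render URL and route.
CLASSIFIER_URL = "https://your-app.onrender.com/predict"

def classify_frame(jpeg_bytes, timeout=10.0):
    """Upload one JPEG-encoded frame and block until the classifier replies."""
    files = {"file": ("frame.jpg", jpeg_bytes, "image/jpeg")}
    resp = requests.post(CLASSIFIER_URL, files=files, timeout=timeout)
    resp.raise_for_status()  # surface 4xx/5xx errors instead of parsing them
    return resp.json()       # assumed shape: {"label": "...", "confidence": 0.97}

def is_my_dog(reply, threshold=0.8):
    """Interpret the (assumed) JSON reply: only a confident match should unlock."""
    return reply.get("label") == "my_dog" and reply.get("confidence", 0.0) >= threshold
```

If you grab frames with OpenCV, you can get `jpeg_bytes` via `cv2.imencode(".jpg", frame)[1].tobytes()`, then do something like `if is_my_dog(classify_frame(jpeg_bytes)): unlock_flap()`. The `timeout` matters on a Pi: without it, a stalled cloud request would freeze the whole loop.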