Running my classifier from a live camera feed

As a project, I have recently created a classifier that can distinguish my dog from other dogs (including ones of his own breed) quite well.
I wish to implement this in a smart dog door that will only open when he is detected on a camera feed.
I have the classifier finished and hosted on Render, but I'm wondering how I can go about bridging the two. Or is that even the right approach?
I want to use Python to take a frame from the camera, pass it to the model, and await the reply. Upon detecting my dog, I want to perform an action (unlock the dog flap). If the model can be run locally, even better.
Please excuse my lack of knowledge; I am very new to this and this is my first post, so I'm still learning the ropes.

Any help on this matter would be greatly appreciated!

This sounds like a Raspberry Pi project :smiley: I’m not sure how fast inference would be on the RPi itself, but it could do all the other steps if you run the model in the cloud (on Render?):

  1. Detect motion (through a motion sensor or the camera)
  2. Take a snapshot
  3. Send it to the classifier, wait for the response (see the sketch after this list)
  4. Activate a servo to unlock the doggy flap if it’s your pooch
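Roughly like this, assuming gpiozero for the PIR sensor and servo and OpenCV for the camera (the pin numbers and the `classify` helper are placeholders for your own setup):

```python
import time

import cv2
from gpiozero import MotionSensor, Servo

pir = MotionSensor(4)         # PIR sensor on GPIO 4 (adjust to your wiring)
servo = Servo(17)             # flap servo on GPIO 17 (adjust to your wiring)
camera = cv2.VideoCapture(0)  # first attached camera

def classify(frame) -> bool:
    """Placeholder: send the frame to your model and return True if it's your dog."""
    return False  # replace with a real call to your classifier

while True:
    pir.wait_for_motion()        # 1. detect motion
    ok, frame = camera.read()    # 2. take a snapshot
    if ok and classify(frame):   # 3. send to classifier, wait for response
        servo.max()              # 4. unlock the doggy flap
        time.sleep(10)           # leave it open long enough to get through
        servo.min()              # lock it again
```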

Yeah, I was thinking it could be done on an RPi!
The speed at which it runs isn’t a huge problem.
I basically have everything else set up other than your ‘send to classifier, wait for response’ step.
This is the only step I am unsure about. How might I use Python to upload the image to the classifier and wait for the response?

The simplest thing would probably be to have your RPi send the image via a POST request to your server, which runs an API that responds with the result.
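Something like this with the `requests` library; this is a minimal sketch, assuming your Render app accepts a multipart file upload (the `/analyze` URL and the JSON response format here are made up, so swap in whatever your API actually uses):

```python
import cv2
import requests

CLASSIFIER_URL = "https://your-app.onrender.com/analyze"  # hypothetical endpoint

def classify(frame) -> bool:
    """Encode the frame as JPEG, POST it, and block until the server replies."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return False
    resp = requests.post(
        CLASSIFIER_URL,
        files={"file": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        timeout=30,
    )
    resp.raise_for_status()
    # assumes the API answers with JSON like {"prediction": "my_dog"}
    return resp.json().get("prediction") == "my_dog"
```

That `classify` would drop straight into the loop sketched above.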

There are multiple example solutions for this on the forum; search for
‘starlette’, ‘flask’, or ‘productionizing’.

One example here (2nd code box) is in the direction of what you are looking for.
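For reference, the receiving end can be quite small too. Here's a rough Flask sketch, assuming you built the classifier with fastai and exported it with `learn.export()`; the `/analyze` route and response format match the client sketch above, so adapt the loading and prediction calls if you used something else:

```python
from flask import Flask, jsonify, request
from fastai.vision.all import PILImage, load_learner

app = Flask(__name__)
learn = load_learner("export.pkl")  # hypothetical path to your exported learner

@app.route("/analyze", methods=["POST"])
def analyze():
    # read the uploaded file from the multipart form and run inference
    img = PILImage.create(request.files["file"].read())
    pred, _, probs = learn.predict(img)
    return jsonify({"prediction": str(pred), "confidence": float(probs.max())})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```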


Awesome! Thanks for the help. I will look into this now and see where I can get with it!