I recently joined the fast.ai course (part 1) - totally enjoying it and excited about it! Thanks to Jeremy and Rachel for putting this together.
In parallel with taking this course, I am also building a face recognizer based on the VGG16 codebase provided in the course material. I would very much appreciate it if folks on the forum could answer a couple of questions I have:
I am currently doing things in batch mode: I use training + validation datasets to train the classifier and save the model, then run the saved model on a test dataset to recognize faces of people it learned about during the training phase. In the real setting (an experimental deployment), I would want the model to recognize new faces in real time, i.e. faces that were unknown to the classifier when the model was created during training, and also to improve its performance as it sees more known faces. How can this be done? Is it possible to keep updating the model while it is running?
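One direction I've been considering (please correct me if this is off-base): instead of retraining the classifier for every new person, treat the network as a fixed feature extractor and keep a gallery of embeddings that can grow at runtime. New identities get enrolled by storing their embedding; repeat sightings refine a running mean. Below is a minimal numpy sketch of just the gallery part; the `FaceGallery` class, the distance threshold value, and the assumption that some function maps a face image to an L2-normalized embedding vector are all my own placeholders, not anything from the course material.

```python
import numpy as np

class FaceGallery:
    """Open-set face recognizer over fixed embeddings.

    New identities can be enrolled while the system is running,
    without retraining the network that produces the embeddings.
    """

    def __init__(self, threshold=0.6):
        self.threshold = threshold  # max distance to accept a match
        self.names = []             # identity label per centroid
        self.centroids = []         # mean embedding per identity
        self.counts = []            # samples averaged into each centroid

    def enroll(self, name, embedding):
        """Add a new identity, or refine a known one with another sample."""
        if name in self.names:
            i = self.names.index(name)
            # incremental running mean: accuracy should improve as
            # more examples of a known face are seen
            self.counts[i] += 1
            self.centroids[i] += (embedding - self.centroids[i]) / self.counts[i]
        else:
            self.names.append(name)
            self.centroids.append(embedding.astype(float).copy())
            self.counts.append(1)

    def identify(self, embedding):
        """Return the closest enrolled name, or None if nobody is close."""
        if not self.centroids:
            return None
        dists = [np.linalg.norm(embedding - c) for c in self.centroids]
        i = int(np.argmin(dists))
        return self.names[i] if dists[i] <= self.threshold else None
```

For example, with 2-D toy embeddings, enrolling `"alice"` at `[1, 0]` and `"bob"` at `[0, 1]` lets `identify([0.95, 0.05])` return `"alice"`, while a point equidistant from both falls outside the threshold and returns `None` (an unknown face). Whether this beats periodically fine-tuning the softmax layer is exactly what I'm unsure about.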
If adapting VGG16 for face recognition seems like a bad idea, I would very much appreciate if people could provide feedback on that.
I am also looking at this paper from the creators of VGG16 about Deep Face Recognition. I'll admit that I don't understand some parts of the paper, but I'm hoping to get a better understanding while working on this project:
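As far as I can tell, the part of the paper most relevant to my question is the triplet loss used to learn the embedding: pull an anchor face toward a positive example of the same identity and push it away from a negative example of a different identity by at least a margin. Here is my attempt at a numpy sketch of just that loss on precomputed embedding vectors (the network producing them, and the margin value `alpha`, are assumptions on my part, so treat this as a reading aid rather than the paper's exact formulation):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss on embedding vectors.

    Zero when the anchor is already closer to the positive than to
    the negative by at least the margin alpha; positive otherwise.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared L2 to same identity
    d_neg = np.sum((anchor - negative) ** 2)  # squared L2 to different identity
    return max(0.0, d_pos - d_neg + alpha)
```

If I'm reading it right, once such an embedding is trained, recognizing a new person reduces to comparing distances in embedding space, which is what makes the real-time enrollment scenario above seem feasible.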
Look forward to hearing back.