I want to make a real-time sign language detection system where a user makes hand gestures in front of a camera and sees the corresponding letters on the screen, which are then combined into words and sentences.
For now, I am using the American Sign Language dataset available on Kaggle. Here is the dataset link: ASL Alphabet | Kaggle
I have trained my model, but I don't have any idea how to build the real-time system around it.
Can anybody point me to resources of any kind (GitHub repos, articles, tools, frameworks, anything)? If you have built something like this before, please share your thoughts and implementation approach in this thread.
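To give an idea of what I mean, here is a rough sketch of the kind of loop I am imagining: grab webcam frames with OpenCV, run the trained classifier on a hand region, and only append a letter once the prediction has been stable for several consecutive frames (otherwise the output flickers). Everything here is hypothetical, not working code: the model path, the input size (64x64), the fixed region of interest, and the label list are all placeholders that would have to match whatever the model was actually trained on (the Kaggle ASL Alphabet set has 29 classes: A-Z plus del, nothing, and space).

```python
from collections import deque


class LetterStabilizer:
    """Emit a letter only after it has been the top prediction for
    `window` consecutive frames, to avoid flickering output."""

    def __init__(self, window=15):
        self.window = window
        self.recent = deque(maxlen=window)
        self.last_emitted = None

    def update(self, letter):
        self.recent.append(letter)
        # Emit only when the whole window agrees and it's a new letter.
        if len(self.recent) == self.window and len(set(self.recent)) == 1:
            if letter != self.last_emitted:
                self.last_emitted = letter
                return letter
        return None


def run_webcam_loop(model_path="asl_model.h5"):
    # Sketch only: assumes opencv-python and tensorflow are installed,
    # and that `model_path` points to a Keras model trained on 64x64 crops.
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # Placeholder label list; must match the class order used in training
    # (the Kaggle set also has del/nothing/space classes).
    labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

    model = load_model(model_path)
    stab = LetterStabilizer()
    text = ""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Fixed region of interest where the user holds their hand.
        roi = frame[100:300, 100:300]
        x = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
        pred = labels[int(np.argmax(model.predict(x[None, ...], verbose=0)))]
        letter = stab.update(pred)
        if letter:
            text += letter
        cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
        cv2.putText(frame, text, (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
        cv2.imshow("ASL", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Is something along these lines the right shape, or do people usually do hand detection/tracking (e.g. a landmark model) first instead of a fixed crop region?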