Hi everyone,
I was selected to build a project for my high school's workshop at an event in Prague next week, and I thought it would be cool to make something interactive.
So the idea I have is to create a model that recognizes what is drawn in a picture. I'm not really sure what's possible with a limited dataset, so I assumed it would be best to keep the images at a super small resolution, like 28x28. I would create a browser app in JavaScript where the person draws a picture (I'd hand them my phone so they draw on it with their finger), the picture would be sent to my laptop and fed into a conv net that classifies it, and then the drawing would be randomly assigned to either the train or test set and the network would retrain. That way it's fun for the user in two ways: their drawing gets recognized by the conv net, and it also contributes to the model's training.
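For the "sent to my laptop and thrown into a conv net" step, one thing to sort out is turning the phone canvas into a 28x28 MNIST-style input. Here's a minimal sketch of that preprocessing, assuming the canvas arrives as a square grayscale array (dark ink on white) whose side is a multiple of 28 — `preprocess_drawing` is a hypothetical name, not from any library:

```python
import numpy as np

def preprocess_drawing(img, size=28):
    """Downsample a square grayscale drawing (0-255, dark ink on white)
    to a size x size float array in [0, 1], inverted so ink is bright,
    matching the MNIST convention of white strokes on black."""
    h, w = img.shape
    assert h == w and h % size == 0, "expects a square canvas divisible by 28"
    block = h // size
    # block-average downsampling: mean over each block x block tile
    small = img.reshape(size, block, size, block).mean(axis=(1, 3))
    # invert and scale to [0, 1]
    return (255.0 - small) / 255.0

# e.g. a 280x280 canvas from the phone
canvas = np.full((280, 280), 255.0)   # blank white canvas
canvas[100:180, 130:150] = 0.0        # a crude vertical stroke
x = preprocess_drawing(canvas)
print(x.shape)  # (28, 28)
```

Matching MNIST's white-on-black convention matters if you start from a pretrained MNIST model; feeding it un-inverted drawings usually tanks accuracy.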
In terms of the model, I was thinking of starting from a pretrained MNIST or Fashion-MNIST model, and then drawing about 15 images per class before the event to fine-tune it into something that works at least a bit.
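The pretrain-then-fine-tune idea could look roughly like this sketch (PyTorch here, though Keras works just as well). The architecture, the weights filename, and the random tensors standing in for your 15 drawings per class are all placeholder assumptions; the point is just freezing the conv features and retraining the small head on your own data:

```python
import torch
import torch.nn as nn

# A small MNIST-style conv net: 28x28 grayscale in, 10 classes out.
class SmallConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
        )
        self.head = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SmallConvNet()
# In the real project you'd pretrain this on MNIST/Fashion-MNIST first,
# then load those weights here, e.g.:
# model.load_state_dict(torch.load("mnist_pretrained.pt"))

# Freeze the conv features; fine-tune only the head on the hand-drawn set.
for p in model.features.parameters():
    p.requires_grad = False

x = torch.rand(150, 1, 28, 28)               # 10 classes x 15 drawings (fake data)
y = torch.arange(10).repeat_interleave(15)   # labels 0..9, 15 each
opt = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):                          # a few quick fine-tuning steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(model(x).shape)  # torch.Size([150, 10])
```

Freezing the features keeps the retrain step fast enough to run live between visitors, which matters if the network is supposed to update after every drawing at the event.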
So I wanted to ask: are 28x28 images feasible, and is a dataset of 10-20 images per class workable? Or if some cool datasets or pretrained models exist for a task of this kind, I would love to hear about that too.
Thanks