I’m interested in a very particular topic that’s just for fun, but could be useful in a number of fields. I’d like to take part of the VGG model and repurpose it for an unsupervised task.
Let’s say I want to pop() off a couple of layers, put a couple of new layers on top, and then train on unlabelled data. Is there an error criterion for unsupervised learning in Keras? Maybe something involving minimizing entropy?
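Something like this is roughly what I have in mind. It's just a sketch built on assumptions of my own (cutting VGG16 at block3_pool, freezing the pretrained layers, and using plain image reconstruction as the "unsupervised" criterion, since in Keras you just pick a loss whose target comes from the data itself):

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Conv2D, UpSampling2D
from keras.models import Model

# Take VGG16 up to an intermediate conv block instead of pop()-ing layers.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
encoder_out = base.get_layer('block3_pool').output  # assumed cut point

# Freeze the pretrained encoder so only the new head trains at first.
for layer in base.layers:
    layer.trainable = False

# Small decoder head that reconstructs the input frame from the features.
x = Conv2D(128, (3, 3), activation='relu', padding='same')(encoder_out)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(inputs=base.input, outputs=decoded)
autoencoder.compile(optimizer='adam', loss='mse')  # target = the input image
# autoencoder.fit(frames, frames, ...)  # frames: gameplay screenshots in [0, 1]
```

The decoder head and the mse loss are placeholders; any self-supervised objective could sit on top of the frozen encoder instead.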
There are a lot of interesting ways of doing unsupervised learning, and they’re rather problem-dependent. Could you please tell us as much as you can about the application you’re considering, and exactly what data (in what quantity) you have (or can get)?
It’s going to be imaging a virtual environment, in this case a 2D game with cartoon-like features. Assume we’re starting with a few thousand images sampled at 10 frames per second during human gameplay. As the bot plays the game it will generate more images to train on, and it will have to learn which features are most important for understanding the environment.
I’m hoping that as a starting point it will use an unsupervised method to get some basic understanding of the environment.
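As a crude first pass at that "basic understanding", I imagine something like the following (all assumptions on my part: frames resized to 224x224, VGG16 features taken from block4_pool, an arbitrary cluster count of 8) just to check whether visually similar game situations group together before any interaction-based learning:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Model
from sklearn.cluster import KMeans

# Pretrained convolutional features as a cheap frame embedding.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
encoder = Model(inputs=base.input, outputs=base.get_layer('block4_pool').output)

# Stand-in for real gameplay screenshots resized to 224x224.
frames = (np.random.rand(256, 224, 224, 3) * 255).astype('float32')

features = encoder.predict(preprocess_input(frames), batch_size=32)
features = features.reshape(len(features), -1)  # flatten conv feature maps

# Group frames into a handful of coarse visual "situations".
kmeans = KMeans(n_clusters=8, random_state=0).fit(features)
print(np.bincount(kmeans.labels_))  # how many frames fall in each cluster
```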
I would like subsequent learning to be based on the actual interaction between the agent and the environment, which is a much more complex task, but one that I am also interested in solving. I imagine using a method similar to what you, Jeremy, discussed in class, panning the image in a semi-random way like an eye would.
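My guess at the panning idea, very roughly (the crop size, number of steps, and jump distance are all numbers I made up):

```python
import numpy as np

def glimpse_sequence(frame, crop=64, steps=8, jump=24, rng=None):
    """Yield `steps` crops of size crop x crop, each offset by a small
    random jump from the previous one (a crude saccade/pan model)."""
    if rng is None:
        rng = np.random.RandomState()
    h, w = frame.shape[:2]
    y = rng.randint(0, h - crop)
    x = rng.randint(0, w - crop)
    for _ in range(steps):
        yield frame[y:y + crop, x:x + crop]
        # Drift to a nearby location, clipped to stay inside the frame.
        y = int(np.clip(y + rng.randint(-jump, jump + 1), 0, h - crop))
        x = int(np.clip(x + rng.randint(-jump, jump + 1), 0, w - crop))

# Example: 8 glimpses from one 224x224 frame (random stand-in here).
frame = np.random.rand(224, 224, 3)
crops = list(glimpse_sequence(frame, rng=np.random.RandomState(0)))
print(len(crops), crops[0].shape)  # 8 (64, 64, 3)
```

The idea would be to feed the agent these glimpse streams rather than whole screenshots, so that what it attends to becomes part of what it learns.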
I know that this is a big task, but one I’m interested in chipping away at for a long time.