Sorry, a bit off-topic, but I just wanted to share a story and some thoughts.
I think those of you who have a baby would agree that it’s extremely interesting to see how the baby learns things. E.g., did you know that babies are born with “hardwired” reflexes like automatic walking (which fade away over time), or that babies initially don’t have a concept of self and don’t recognize themselves in the mirror until ~1.5 years?
So, at around 4-5 months our baby learned to play with toy cars - he was making a special sound (“broom-broom”) and was trying to push the car. The interesting part is that he was recognizing cars by their wheels, and he’d count anything that had something resembling a wheel as a car (so there were a lot of false positives!).
My thinking was - wow, that seems pretty close to image recognition in deep nets! I.e., recognizing a pattern in the image, but not the concept of a car.
Some time later he learned a special sound for swinging (“cuch-cuch”), and then at around 7-8 months I witnessed something remarkable: we bought a new book that had a very sketchy, small image of a seesaw (just made of several lines), which I would not even recognize as a seesaw at first. He had never seen images of a seesaw before, only seen them for real on the playground. Yet he pointed at the sketchy image and said “cuch-cuch”. I was totally blown away by that. In machine learning terms, this seesaw image came from a different data distribution - I doubt a deep net trained only on seesaw photos would ever recognize it. Which means that the baby, at such an early stage, somehow learned the concept of a seesaw, and it’s not just image pattern matching anymore.
Perhaps that’s the direction that deep learning should explore more (though, I’m not sure it’s possible with simple supervised learning).
Also, a couple of thoughts on “catastrophic forgetting”: babies forget a lot!
From my observations, throughout the first year our baby was “obsessed” with specific things that changed over time (birds, stones, trees, flowers, water, etc.). He found them everywhere and paid a lot of attention to them. He memorized a dozen bird types, for example. But after a couple of months that interest and knowledge just washed away.
I suspect that through a series of “obsessions” (or “overfittings”?) and forgettings, the baby still learns something more abstract and general - like that some things can fly, some things can fall, etc. - and it’s OK to unlearn the specifics while retaining the higher-order knowledge. Again, this does not seem to happen in deep learning models (at least simple ones?).
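For anyone who hasn’t seen catastrophic forgetting in action, here’s a deliberately contrived toy sketch (pure numpy, my own invented setup, not any standard benchmark): a logistic-regression model learns “task A”, then trains on a “task B” whose labels directly conflict with A, and its task-A accuracy collapses. Real continual-learning setups are subtler, but the weight-overwriting mechanism is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two deliberately conflicting toy "tasks": identical 1-D inputs,
# but task B flips task A's labels, so learning B overwrites A.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y_a = (x[:, 0] > 0).astype(float)   # task A: positive side -> 1
y_b = 1.0 - y_a                     # task B: the opposite rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, x, y, steps=500, lr=0.5):
    """Plain logistic-regression gradient descent (cross-entropy loss)."""
    for _ in range(steps):
        p = sigmoid(x @ w + b)
        grad = p - y                      # dLoss/dlogit for cross-entropy
        w -= lr * (x.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, x, y):
    return float(((sigmoid(x @ w + b) > 0.5) == (y > 0.5)).mean())

w, b = np.zeros(1), 0.0
w, b = train(w, b, x, y_a)           # learn task A first
acc_a_before = accuracy(w, b, x, y_a)
w, b = train(w, b, x, y_b)           # then learn task B on top
acc_a_after = accuracy(w, b, x, y_a)

print(acc_a_before, acc_a_after)     # task-A accuracy collapses
```

Nothing in the model tries to protect the old weights, so the new task simply overwrites them - which is exactly the contrast with the baby, who seems to keep the abstract lesson while dropping the specifics.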
I’d love to hear your thoughts or stories, or links to interesting research in the area of abstract concept learning in machine learning.