Two months ago I decided to go deep into AI and deep learning, and I'm really enjoying it so far. I began with Coursera, then found fast.ai, loved the fast.ai philosophy, and decided to stick with fast.ai all the way; I'm loving Part 1! At the same time I read everything I can find online about deep learning and AI, including YouTube videos, interviews, conference talks, etc.
Now, from time to time I bump into articles like this one:
That one is from yesterday, January 5, 2019.
Articles like that one appear somewhere every few weeks. I would love to hear from some of the experienced fast.ai community: what is your opinion of these kinds of warnings? Why do they appear relatively often? Are they justified? Are they exaggerated? What's triggering them?
Thank you, and looking forward to learning more and more with fast.ai.
Jeremy Howard actually had a video on this last year:
“Deep Learning is Overhyped…is Overhyped”
I fully agreed at the time I watched it, but it has been a while.
That's a brilliant video by Jeremy, thank you for sharing.
It would be very interesting to know whether today, one year later, Jeremy would have new insights to update the content of that video, or whether in his view it pretty much still holds. Most of it surely applies exactly the same a year later; I'm just wondering if some parts would benefit from an update, for example the slide on what now works commercially, what works in research, and what is not yet working well (RL, adversarial learning, anomaly detection).
What a badly written article. AI can now solve many specific everyday tasks that used to be out of its reach, and out of the reach of other methods too.
Maybe the author has the Gartner hype cycle in mind. Personally, I think that task-specific AI is through the hype phase, and that the AI winter has instilled a certain prudence.
Generalized AI is another beast, which will probably be overhyped, because so many people are thrilled by the idea. Personally, I do not care about generalized AI.
That's a good point, Kaspar, and Jeremy talks about it at the end of his video: he doesn't really care much whether AI can behave like the brain.
So that's a good point: when we focus on task-specific AI, the results keep improving; for those who choose to focus on generalized AI, it's a different story. But as you say, a lot of AI practitioners don't care much about generalized AI in practical terms (although they may of course find the topic inspiring and fascinating).
I think a lot of those "deep learning is overhyped" comments come from people who are expecting general AI. How close we'll get to that is definitely a good question. However, for speech and vision processing, the progress we've made in the last 5 years is nothing short of incredible. A modern CNN simply blows everything else out of the water. I recently worked for a German auto supplier on ADAS systems. As Jeremy describes, they used very elaborate hand-crafted features, a system that took a whole team years to build. In just a few months I was able to make a far simpler CNN that left the classical system for dead in terms of performance. We're only just starting to see uptake in industry, and chips powerful enough to run these algorithms.
Also, the market for this kind of thing is huge, and uptake will take a long time. Just look at databases and simple business apps, for example. That's simple, old technology, yet I'd say we're not even close to feeling its full impact. The number of companies and government departments out there still working with huge offices full of paper pushers and spreadsheet mongers is crazy.