Hopefully this is not too off-topic for this class forum — if it is, feel free to delete this thread. I would like to hear some opinions on this.
After reading some articles & lectures on deep learning, I feel that in general DL (along with other ML techniques) allows us to:
- Understand complex inputs like sentences, pictures, audio clips, etc., and extract labels from them.
- Predict things based on collected statistics.
- Program mechanisms that are far too complex to model explicitly, such as adaptive behaviors, without coding the instructions manually (self-driving, etc.).
Number 1 allows us to automate "sensing" processes that previously were not feasible because hand-coded rules were too complex and rigid. This is a good thing, since it opens up many new possibilities.
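To make number 1 concrete, here is a deliberately tiny sketch of "labelling": assign an input to the class whose centroid is nearest. The 2-D feature vectors and the "cat"/"dog" labels are made up purely for illustration — real perception models learn features from raw pixels or audio rather than being handed them.

```python
# Toy "sensing" example: label a feature vector by its nearest class centroid.
# All data here is invented for illustration; real DL models extract these
# features from raw inputs (images, audio) automatically.
import math

training = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog": [(3.0, 3.2), (2.8, 3.1), (3.2, 2.9)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(p):
    # Pick the label whose centroid is closest to the input point.
    return min(centroids, key=lambda lbl: math.dist(p, centroids[lbl]))

print(classify((0.9, 1.1)))  # falls in the "cat" cluster
```

The point is only that the "label extraction" step becomes a function learned from examples instead of a hand-written rule.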
Number 2 has arguably been done for many years already. We just use far more parameters nowadays, and ML/DL might surface previously unnoticed correlations.
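As a reminder of how old number 2 is, ordinary least-squares regression — prediction from collected statistics with exactly two parameters — predates deep learning by well over a century. A minimal sketch, with made-up data:

```python
# Classic statistics-based prediction: fit y = slope * x + intercept
# by least squares. The data points are invented for illustration.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]  # roughly y = 2x

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Closed-form least-squares estimates.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    return slope * x + intercept

print(predict(6))  # extrapolate one step beyond the data
```

Modern models differ mainly in scale: millions or billions of parameters instead of two, which is what lets them pick up correlations no one thought to look for.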
I would say number 3 is probably the most impactful one, since its effect feels exponential: we can automate many things that were not feasible before, and we can program behavior without laying out exact sets of instructions (which, by comparison, feels linear).
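A minimal sketch of what "programming without laying out the instructions" can look like is tabular Q-learning, assuming a toy 1-D corridor environment invented here for illustration: the agent is never told "move right", only rewarded for reaching the goal, and the policy emerges from trial and error.

```python
# Toy example of point 3: learn a behavior from reward instead of coding it.
# Tabular Q-learning on an invented 1-D corridor (cells 0..4, goal at cell 4).
import random

N_STATES = 5          # corridor cells 0..4; reward only at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The greedy policy learned for each non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Nobody wrote "walk toward cell 4" anywhere in that code — the reward signal did the programming, which is the same idea (at a vastly smaller scale) behind learned controllers for driving and robotics.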
Somehow, most of the online classes and articles I have read about DL & ML achievements focus on labelling things with better-than-human accuracy (animals, MRI scans, etc.). Kaggle competitions, too, are mostly about classification.
Why is it that we don't see much movement on number 3? Other than self-driving cars, I am not aware of other complex workflows being automated (laundry? dust cleaning? security? real-time coaching? resistance exoskeletons for sports training at home? etc.)
If it turns out that I'm just missing the information, please do let me know.