There is no doubt the course and the teacher are great (I am at the initial lessons).
As I am new to this, does this course help me use the AI tools that are in the news right now?
The analogy in my mind is: should I learn assembly today, or start with a high-level language?
IMO, if you are new to deep learning, Part 1 of the course will help you understand the basic concepts in model training and deployment for vision, text and tabular data. The NLP chapter introduces the Hugging Face ecosystem and its transformers library, which are very relevant to LLMs. Also, Part 2 dives deep into transformers along with other foundational concepts.
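To give a rough feel for what that NLP chapter introduces, the high-level transformers pipeline API looks something like this (a minimal sketch; it downloads a small default model on first use):

```python
# A taste of the Hugging Face pipeline API introduced in the NLP chapter.
# Assumes `pip install transformers` plus a backend such as PyTorch.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a small default model
print(classifier("This course makes deep learning surprisingly approachable."))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
```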
I am still doing Part 2 of the course, but as I understand it, LLM-specific topics like prompting, RAG, etc. are not covered. I started with the course first and am slowly starting to learn the LLM-specific concepts now. I believe you can go about this the other way around as well (start with the LLM-specific concepts and go deeper into the fundamentals with this course), especially if you have some prior exposure to ML/DL.
“Should I learn assembly today or start with a high-level language?”
If you are interested in the technology itself, then learn assembler. Working with a few registers, memory transfers, jumps and arithmetic will help you understand computers without the constraints of any higher-level abstractions.
You will also spend hours working out why it did not do what you thought you had coded.
Alternatively, if you just want to solve problems, then any high-level language will do, but you will often find yourself saying “All I want to do is …” while the compiler says no.
Similarly, you can learn the traditional technology such as clustering, classical machine learning and even GPU programming.
The alternative is using foundation models to solve problems.
The reality is that foundation model users still need to be subject matter experts in the subject area, beyond the foundation model itself, otherwise …
I finished Part 1 of the course and did some projects over the last couple of months. Now I have started Part 2, which dives into the foundational concepts of deep learning. I am also looking at prompting and RAG applications, which relate to LLMs but are not covered in the course.
While working with higher-level algorithms, models, etc., I find myself again and again reflecting on the core concepts that I learnt in the old fastai courses.
“if you only have a hammer everything looks like a nail”
What I want is a toolbox full of tools, not just a hammer.
I still remember how simple random forests were able to outperform overly complicated deep learning algorithms in some problems.
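(As a quick illustration of how little code a strong tabular baseline takes, here is a rough sketch with scikit-learn; the built-in breast-cancer dataset is just a stand-in for any small tabular problem.)

```python
# A strong tabular baseline in a few lines -- assumes `pip install scikit-learn`.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)      # stand-in tabular dataset
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)     # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```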
Most of all, playing around with claude.ai, HuggingChat, ChatGPT and other LLMs for various tasks and experiments helps you get clarity on prompting.
After that, you can also refer to the official docs of popular open-source RAG frameworks or vector DBs (the llamaindex / lancedb docs would be my pick), read through the core concepts, and run their end-to-end examples or tutorials.
Finally, you can modify those examples to fit other, similar use cases and/or implement each conceptual step of the RAG process from scratch (e.g. chunking, creating & saving vector embeddings, searching & retrieving them, reranking, and response generation from the LLM).
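If it helps, here is a minimal sketch of what those steps can look like in plain Python, assuming sentence-transformers and numpy are installed; the embeddings are kept in memory instead of a vector DB, reranking is skipped, and the final LLM call is just a placeholder:

```python
# Minimal RAG sketch: chunk -> embed -> search & retrieve -> build prompt.
# Assumes `pip install sentence-transformers numpy`; reranking and the LLM call are omitted.
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text, size=200, overlap=50):
    """Split text into overlapping word-level chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size - overlap)]

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Random forests are still a strong baseline for small tabular datasets.",
    "Transformers are the architecture behind most modern large language models.",
]
chunks = [c for doc in docs for c in chunk(doc)]

# Create & save vector embeddings (in memory here; a vector DB would persist them).
emb = model.encode(chunks, normalize_embeddings=True)

def retrieve(query, k=2):
    """Search & retrieve the k most similar chunks by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = emb @ q  # cosine similarity, since embeddings are normalised
    return [chunks[i] for i in np.argsort(-scores)[:k]]

question = "Why learn classical ML alongside LLMs?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = my_llm.generate(prompt)  # hypothetical call: plug in any chat API here
print(prompt)
```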
P.S.: While going through the videos and tutorials mentioned, please implement each of them on your own before moving on to the next one.
I have only looked at the basics of LCEL. I didn’t feel it was worth going deeper because the abstraction levels were a bit too much and didn’t help with my learning/understanding. The LCEL abstractions also make it difficult to have fine control over the different parts of an end-user application.
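For reference, the basics I tried look roughly like this pipe-style composition (a rough sketch, assuming langchain-openai is installed and OPENAI_API_KEY is set; the model name is only an example):

```python
# Basic LCEL: compose prompt -> model -> parser with the pipe operator.
# Assumes `pip install langchain-openai` and an OPENAI_API_KEY in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name only
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "Random forests vs deep learning on small tabular data"}))
```

Each of those piped components hides quite a bit under the hood, which is the level of abstraction I was referring to.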