How has your journey been so far, learners?


(Thalanayar Muthukumar) #285

Hi - I started this course about two months ago but then didn't continue much. I am back again and initially want to focus on identifying objects in custom datasets - for example, identifying networking equipment from different manufacturers.


(Thalanayar Muthukumar) #286

Welcome @richardreeze and @Vineeth. I hope you enjoy this group and learn quickly. Happy learning


(Thalanayar Muthukumar) #287

Welcome @teeyare. Happy learning


#288

Thanks! :grin:


(Rubén) #289

Hey everyone!

I’m Rubén, from Spain. I’ve been working for around 6 years as a software engineer and am currently trying to become an ML one - harder than expected! Hoping this course will help me with that : )

See you around!


(Dane Balia) #290

Aloha

Dane from South Africa. 10 years Systems Administration and another 10 as Software Developer. Looking to have fun and learn.

Chow!
D


(Pradeep Kumar Thiagu) #291

Pradeep from India. I have a few years' experience as a software engineer and IT analyst. Is there a study group for Part 1 that I can join?


(Stefan Langenbach) #292

Hello everybody,

Stefan from Germany here. I never formally learned to code or to apply machine learning. I enjoy self-learning and MOOCs and have done several since 2015, e.g. over at edX, Udacity, DataCamp, etc. fast.ai feels like something much more practical.

Looking forward to learning, and to using that knowledge to eventually transition to some kind of ML engineer role.


(Murali Mohana Krishna) #293

Hello everybody,

I am Murali from India. I feel this course is the perfect complement to Andrew Ng’s deep learning specialization.

I started a series, ‘Fast.AI Deep Learnings’, where I plan to implement each topic in practice and share my experiences.

Here is the first post

Please provide your feedback :slight_smile:


(ashis) #294

Hi Friends,
Ashis from India. Fastdotai inspired me to start my technical blog. Please check out the first of many posts to come: https://medium.com/@GeneAshis/fast-ai-season-1-episode-2-1-e9cc80d81a9d
Open to feedback and reviews.
Best,
Ashis


(Dana Ludwig) #295

@anandsaha and @jeremy and @rachel - I’m glad you asked. It has changed my life, which is not easy at 67 years old. Here is a perhaps overly personal post that I was thinking of posting to my blog. Please let me know if there is anything that you would rather not share:

How should I become an expert in Deep Learning
– about the fast.ai course

What is fast.ai?

Fast.ai is a research organization, self-funded by Jeremy Howard and Rachel Thomas, with two goals:

  • Create free online classes to show the world that any of us can learn the revolutionary new technology variously called AI, Machine Learning, or Deep Learning.

  • Conduct research to create the software to make that learning easier for everyone and on any budget.

In short, they want to “make neural nets uncool again”. That is, anybody can do it, not just the cool kids at Google and Facebook.

Why Deep Learning?

Humans tend to think that what they have created is special and usually cool. They like that they can go to the local grocery store to pick the food that other animals must struggle for just to stay alive. They call themselves an intelligent form of life, and they spend a lot of time looking to see if there is any other intelligent life in the universe. They constantly struggle to explore and discover the boundaries of their world and existence, both outside their bodies and inside their minds. Just because that’s what we do.

The evolution of human technology over the last few thousand years has been mostly gradual, but there have been discrete events along the way that have led to huge bursts of progress, for instance:

  • Fire
  • Farming
  • Bronze
  • Iron
  • Steel
  • The printing press
  • The general-purpose computing machine
  • The transistor
  • The microcomputer
  • The internet
  • Deep Learning

Jeremy, Rachel, and many others place “Deep Learning” among the big technical breakthroughs for mankind. Specifically, Jeremy has said that Deep Learning “changes everything” and is as important as, or more important than, the internet.

For me, the reason is basic. Before Deep Learning, computers were stupid. Since Deep Learning, computers are no longer stupid.

Why does that matter? In the big picture, maybe it doesn’t matter. We all still live and die.

But for me it is more personal. When I saw my first computer around 1971, and understood that it could do “anything”, including make decisions, learn and modify itself, my mind was changed forever. I thought, as many people did, that this machine could go beyond the human mind and teach us things that we hadn’t even imagined. I was hooked.

But I slowly came to realize that these dreams were not going to happen, at least not in my lifetime. Computers were stupid, because they could only do what we showed them how to do, in excruciating detail, and with much effort. The behavior of the computer was no more enlightened than the person who programmed it, and usually much less enlightened.

As I watched for emerging value from the ever-advancing computers, I found it disappointing. They started out keeping lists of numbers for banks, and they ended up playing movies and video games. That’s it. It took me my whole career to realize that computers were never going to impress us.

But not anymore. Everything you thought computers could never do, well, they can do now. Judgement, context, nuance, everything. Even LSD-induced hallucinations.

But how am I going to learn to program these dream machines?

I’ve spent just over a year trying to find the answer to that question. I took Andrew Ng’s Machine Learning course, and it was great. But when I read technical papers or attended conferences, I realized I knew nothing. Worse yet, the papers are almost impenetrable to the typical professional software developer. It seemed the field was just too new, and the experts were just too busy, to break it down for novices like me. There were tons of tutorials, but they all seemed to reference some obscure but critical concept. The Wiki pages were the worst.

Then came Jeremy and Rachel’s courses. Now, everything about learning Deep Learning is within reach. Finish the two fast.ai courses, learn everything that is presented, and you will be a competent if not expert Deep Learning practitioner.

But why are the Fastai Courses so Special?

Jeremy teaches most of the two Deep Learning classes; Rachel teaches the Linear Algebra class and creates the Wiki pages of references. Both of them blog about the latest amazing developments in Deep Learning. Rachel is a patient, brilliant communicator, a mathematician and “quant” as she says, who cares deeply about social justice and about making sure that you understand everything, no matter who you are. Her podcast interview (on TWIML-AI) was what brought me to fast.ai. But I know far more about Jeremy, because he presents most of the Deep Learning course material.

Jeremy is simply and indisputably a genius in both technology and communication. Think of the physicist Richard Feynman. Feynman was always the smartest guy in the room, perhaps even when Albert Einstein attended one of his presentations. But beyond that, when Feynman figured something out, he couldn’t wait to tell somebody; he got tremendous joy from communicating his exciting discoveries. He even wrote a book, “The Pleasure of Finding Things Out”. Some people say he was the only one who understood Quantum Mechanics. Yet in this one-hour video, without any math, he shows you a simple experiment with light and two slits that is the essence of why we need Quantum Mechanics to explain our world, and then adds that nobody will ever understand it, because it is different from everything you have ever experienced.

http://www.cornell.edu/video/richard-feynman-messenger-lecture-6-probability-uncertainty-quantum-mechanical-view-nature

For me, Jeremy demonstrated the same kind of genius in explaining word2vec.

Previously, the best explanation of word2vec was in a blog post by the brilliant communicator Chris Olah:

http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/

Then, in the fast.ai course, I heard Jeremy explain word2vec: how it works, why it is important, and how we can do even better, all in just 4 minutes. One amazing thing about this piece is that it wasn’t part of his planned lesson but a response to a student’s question about “skipgrams”. His answer runs from 31:29 to 35:44:

https://www.youtube.com/watch?v=sHcLkfRrgoQ&feature=youtu.be&t=1890

There just aren’t many people in the world who can do this, and this gift is the key to how the rest of us mortals can now learn what the experts know.
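To make the idea concrete, here is a toy sketch of what word2vec-style vectors buy you: analogies become vector arithmetic. The vectors below are hand-made for illustration, not learned embeddings, and the whole setup is mine, not from the lecture:

```python
# Toy illustration of the word2vec idea: words as vectors whose
# arithmetic captures analogies. Hand-made vectors, not learned ones.
import math

vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
    "apple": [0.2, 0.5, 0.2],
}

def cosine(a, b):
    # Cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# The classic analogy query: king - man + woman ≈ ?
query = [k - m + w for k, m, w in
         zip(vectors["king"], vectors["man"], vectors["woman"])]

# Nearest word that wasn't part of the query
best = max((w for w in vectors if w not in ("king", "man", "woman")),
           key=lambda w: cosine(query, vectors[w]))
print(best)  # → queen
```

With real embeddings trained on large corpora (the setting Jeremy discusses), the same arithmetic works over tens of thousands of words.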

What are the other reasons you need this fast.ai course?

  • The prerequisite is only one year of programming experience, as stated on the web site. If you don’t have a year, but you think you can pick up programming fairly quickly, I would recommend a tutorial on Python and another on PyTorch. You don’t need calculus; he will tell you the small handful of calculus rules you should know. If you don’t have any linear algebra in your memory, Rachel has a course on fast.ai that will bring you up to speed.

  • Everything you learn will be state-of-the-art. On the first day, you build a state-of-the-art visual recognition model with 3 lines of code and 5 minutes of training time. The reason you only need 3 lines of code is that Jeremy and Rachel have built a Keras-like Deep Learning library that implements all of today’s best-performing models and gives you those three lines of code to adapt them to your problem with your data. You spend the rest of the lesson learning how to adapt that model to any other visual pattern classification problem, exactly how to transfer the best training from existing models into your model, and how to train the part of the model that is specific to your problem with very few additional training labels. This technique of adapting someone else’s Deep Learning model is called Transfer Learning, and is described further below.

  • You will learn all the secrets of the masters needed to make your model perform its best. In the subsequent classes, he breaks down the models underlying the three lines of code and shows you all the unpublished tricks that make them perform at the state of the art. Jeremy led Kaggle, the organization that hosts the Deep Learning challenges, and was also its leading competitor. He studied many of the models from first-place finishers and from published journal papers, incorporated those tricks into the models he gives you, and then teaches you what they do and why they work.

  • He will explain everything. No black boxes. He will show you all the code inside those three lines, and when he shows you the code, he explains every single function and every parameter so there is no doubt about what it does and why. Several times he actually used an Excel spreadsheet to explain a critical state-of-the-art gradient descent optimization (Adam). The spreadsheet was not a metaphor; it was the actual, executable code that implemented this leading algorithm! Then he showed, on the spreadsheet, how to change Adam into an even better optimizer (AdamW) that will likely displace Adam over time. The importance of “explaining everything” in Deep Learning can’t be over-emphasized. Some people say, “Just give me the program; I want to use it, not do research on Deep Learning.” But as amazing as it is, Deep Learning is still in its infancy. We are in the early phase of an explosion of scientific progress, and new breakthroughs come out every month. Lots of problems are unsolved. For instance, in my field of medicine, where decisions determine life or death, it is critical that a computer can explain its predictions. This can be done, but the solutions are not yet mature; most researchers are focused on getting more accurate predictions. So, when you bring this technology to your field, you will need to solve some problems on your own. And for that, you need to understand the details of how the current methods work.

  • You learn all the concepts stripped of their esoteric jargon. Jeremy and Rachel hate the technical jargon of Deep Learning. Jeremy hates mathematical formulas with Greek and Latin letters; he would rather focus on the code that implements the formulas, because that is what makes it real, and that is what you will have to write. Yet Jeremy and Rachel know you need to survive in a jargon-filled world, so he will also tell you the jargon name for each technique. Then he will say: “See? That’s all it is! It didn’t need a new term for something that simple.”

  • You learn to use Deep Learning on a Shallow budget. When Jeremy and Rachel say they want to make neural nets “uncool again”, they mean they want to let all of us contribute to this field, not just the cool, mega-rich corporations like Google and Facebook. In service of that goal, they focus their research and their course on removing the cost barriers that separate Google and Facebook from the rest of us. The two factors that drive the high cost of doing Deep Learning are (1) the cost of massive arrays of compute servers and (2) the cost of paying humans to build many thousands of labeled data sets to serve as the “gold standard” with which to train the neural nets. Both of these costs can be minimized using two key technologies that Jeremy and Rachel are advancing in their own research:

  • Transfer Learning. If you can use someone else’s freely available trained network, you can apply that network to other applications. For instance, the lower layers of state-of-the-art models trained on Stanford’s ImageNet can be used, with just a small number of additional samples and a small amount of compute for fine-tuning, to create a wide range of special-purpose image applications. The use of transfer learning is the focus of Lesson #1 in fast.ai.
  • Unsupervised Learning. Recent research has shown clearly that for many tasks, your network can do most of the necessary learning without paying people to create gold-standard labels. Jeremy believes the best way to do this is to use the same algorithms as supervised deep learning, but with a variety of clever tricks to have the computer generate its own training labels. Their most prominent example so far is Universal Language Model Fine-tuning for Text Classification (ULMFiT): http://nlp.fast.ai/ This model works something like word2vec but takes the concept much further, learning the inherent knowledge and structure of entire text documents. They have published a paper showing that this model, trained with no human labels, can perform a wide variety of NLP tasks with just a few extra labeled samples per task. In Lesson 10 of the advanced class, you learn exactly how this model was built and how to use it in your own projects, on a very small budget.
  • Jeremy will never waste your time. These courses are the most information-dense I have ever seen. Nothing is filler. For me, when I attend a course, I only write down the facts that are useful but that I didn’t already know. In this class, I write down almost everything.
  • You keep the state-of-the-art code and use it in your own work, probably for the next few years. When you leave the class, you are at or just beyond the published state of the art, with high-performance, industrial-strength code, ready to change your world. It’s an open question how long this will remain the best code in a rapidly changing world. Jeremy, Rachel, and their team rewrote the entire code base between their 2017 and 2018 classes (from TensorFlow to PyTorch). They may or may not keep doing that. But today, you will be riding the crest of the wave.
  • You will use PyTorch. PyTorch is an elegant Deep Learning framework like TensorFlow, only better.
  • It is easier to use AND more powerful than TensorFlow, because models are built dynamically from your code rather than compiled into another abstraction created behind the scenes.
  • Because it is executing your code, not the abstraction, you can understand your debugger traceback: you will see your code and where it failed.
  • It’s faster than TensorFlow.
  • It’s backed by Facebook, an organization with the financial resources and longevity to support it. This doesn’t make it better than TensorFlow, but it does put PyTorch in the same league: frameworks that are not going to go away.
  • Its popularity is growing.
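The transfer-learning idea from the bullets above can be sketched numerically: keep a “pretrained” feature extractor frozen and train only a small new head on a handful of labels. Everything in this toy (the features, the data, the hyperparameters) is invented for illustration; the course itself does this with a real pretrained ImageNet network via the fastai library:

```python
# Toy sketch of transfer learning: a frozen "pretrained" feature
# extractor plus a small trainable head (logistic regression).
import math

def frozen_features(x):
    # Stand-in for the frozen lower layers of a pretrained network
    return [math.tanh(x), math.tanh(2 * x), 1.0]

# A tiny labeled set: positive when x > 0
data = [(x, 1 if x > 0 else 0) for x in [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]]

w = [0.0, 0.0, 0.0]   # the only trainable parameters: the new head
lr = 0.5

def loss():
    # Mean cross-entropy of the head over the tiny labeled set
    total = 0.0
    for x, y in data:
        z = sum(wi * fi for wi, fi in zip(w, frozen_features(x)))
        p = 1 / (1 + math.exp(-z))
        p = min(max(p, 1e-12), 1 - 1e-12)   # numerical safety
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

before = loss()
for _ in range(200):                         # train just the head
    for x, y in data:
        feats = frozen_features(x)
        z = sum(wi * fi for wi, fi in zip(w, feats))
        p = 1 / (1 + math.exp(-z))
        for i in range(3):
            w[i] -= lr * (p - y) * feats[i]  # gradient of cross-entropy
after = loss()
print(before, after)  # loss drops sharply with only the head trained
```

The point mirrors the essay: because the frozen features are already useful, a few labeled samples and a tiny amount of compute suffice to fit the new task.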

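The Adam-versus-AdamW distinction mentioned above (decoupled weight decay) fits in a few lines of plain Python. This is a single-parameter sketch with illustrative hyperparameter values, not the spreadsheet from the lecture or the fastai implementation:

```python
# Adam folds L2 weight decay into the gradient; AdamW applies the decay
# directly to the weight ("decoupled" decay). Single-parameter sketch.
import math

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    grad = grad + wd * w                    # Adam: decay enters via the gradient
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

def adamw_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    m = b1 * m + (1 - b1) * grad            # decay does NOT touch the moments
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * w)  # decoupled decay
    return w, m, v

# One step from the same starting point gives slightly different weights
w1, _, _ = adam_step(w=1.0, grad=0.5, m=0.0, v=0.0, t=1)
w2, _, _ = adamw_step(w=1.0, grad=0.5, m=0.0, v=0.0, t=1)
print(w1, w2)
```

The difference looks tiny per step, but because Adam’s decay term is rescaled by the adaptive denominator while AdamW’s is not, the two optimizers regularize quite differently over a full training run.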
You can do this, and you will be able to perform at a very high level when you are done.

Why do Jeremy and Rachel do this?

While enjoying my amazing learning in this class, my only remaining question was: why do they do this? What is their business model after investing so much energy in building something and then giving it away? What’s their angle?

In these two interviews, Jeremy laid it out pretty clearly:

https://www.smh.com.au/technology/i-wasnt-interested-in-just-following-the-rules-data-scientist-jeremy-howard-and-the-next-internet-20160419-go9rps.html

https://www.kdnuggets.com/2017/01/exclusive-interview-jeremy-howard-deep-learning-kaggle-data-science.html

After creating two successful companies, he was a wealthy man in his 30’s, and was able to retire. But he didn’t find that satisfying.

“Having this aspirational goal to achieve everything is not really very significant”

When he moved to San Francisco and discovered he could improve on medical decision makers, he founded Enlitic with a significant amount of investor capital. But even that was not completely fulfilling:

Leaving Enlitic was much harder. … it now seems to me that externally funded start-ups are not a good choice for solving problems that still need a lot of fundamental research to be done. There is too much pressure from investors and staff who wish to see their equity value rise as quickly and as much as possible. Having said that, I’m not sure that academia is much better, which is why I’ve started a self-funded research institute, fast.ai, together with Rachel Thomas.

I think I get it now. When some people have accomplished multiple tangible successes, and they have enough money to never work again and do anything they want, they often decide that they want to make the world a better place for everyone. Bill Gates discovered this, and Jeremy and Rachel seem to have discovered this sooner in their lives. Deep Learning changes everything, and with Jeremy and Rachel’s help, it will change everything for everyone.


(Jeremy Howard) #296

That’s wonderful! Thank you for sharing :slight_smile: It means a lot to read this, and of course you’re welcome to post it wherever you like.


(Eddie) #297

Hi, there! My name is Eddie and I’m from Phoenix, AZ.

I currently work as an RN Manager for a very busy ICU in the city. I have always had a knack and passion for working with computers and have played around making programs and apps for some time, including a couple of medical apps, for personal use, to help with the workflow at my job.

I don’t remember exactly how I came across AI and its usefulness, but now that I have discovered it and taken a deep dive into the subject, I feel this technology is going to change the world like we have never seen. It already is. There is huge potential in AI, specifically these neural nets, and I have several ideas on ways they could benefit the healthcare system by reducing costs and improving patient outcomes.

I get more excited the more I learn and as I create some basic sample projects, and I think there are a lot of great things coming down the road. Thank you for this course; it will help me understand things more deeply and apply them toward the greater causes that I see!


(Dana Ludwig) #298

Thank you, Jeremy; here is where it went.

ps - thank you for introducing me to the term “bike-shedding” in your post about coding style conventions - I’ve actually seen that in my meetings, but didn’t know it was a “thing”!


(Dana) #299

Hey all,
My name is Dana, and I’m a forty-year-old nontraditional student in an online software development BS program, looking to finally start a “real” career. I’ve daydreamed about better computer input techniques since I was a teen, and I recently realized that computers have probably gotten smart enough that written shorthand (the handwriting systems that secretaries, court reporters, journalists, and others used from at least the late 1800s until the early 1980s) could serve as a fast computer input method. It would certainly be faster than soft keyboards on phones and tablets, and possibly even faster than a standard computer keyboard! I’ve been researching shorthand, AI, and whatever else might help make this daydream real, which led me here. After watching the first lesson, I’m wondering if I should stop trying to learn shorthand and get a machine to master it first so I can learn from it! The only thing I’m sure of is that it’s time to get sorting cats and dogs.


(Dana) #300

@danaludwig Your post on medium is what brought me here! I’ve always said I’ve never met another man named Dana who wasn’t a jerk. Maybe at last I’ve found one. :wink:


(Krisztian Kovacs) #301

Hi Everyone!

I’m Kris. Super excited to dive into the course. I have looked at it before, but wasn’t able to give it a serious go until now.

I was considering just waiting until the next iteration (it starts in October, right?), but I was impatient, and the course seemed so good that I thought I would just jump in. Does anybody (@jeremy) know if the next round is going to be substantially different?

So far I have watched the image classification lectures and thought I would apply what I learned to some automatically downloaded Google images (of 10 different Harry Potter characters). Here is a blog post about it, and the Jupyter notebook, if anyone is interested.


(Dana Ludwig) #302

@bluepapaya, glad you made it! I’m starting session 9 (Part 2, second session) and it just gets better and better! At the beginning, Jeremy warned us that we are big kids now and will have to figure things out on our own, with many failures for each success. Then he proceeded to spoon-feed us advanced debugging techniques and state-of-the-art code editors. The explanations are even more detailed than in Part 1! For me this is just fantastic; it seems like there is nothing I can’t do with this knowledge. At the start of session 9, he even summarized the skills we should have mastered by now. This is by far my most efficient path to learning, and I’m going to stay focused on the class until I complete it. It is like a “moment in history” that we have to grab while it is still available and timely.


(Semihcan) #303

Hi, same here. I have been wanting to do this for a long time but am just starting. In fact, I just started the first lecture today, even though I worry that I am too late to the party and/or that a new version of the course will be released soon. Anyway, you are ahead of me. It would be useful to know how strongly @jeremy or others would recommend that we wait until a new iteration is released, or redo the course and rewatch the videos once the newer version comes out.


(Mizar) #304

Greetings from México :). I’m a Computer Engineering student.