About the Part 2 & Alumni (2018) category

4 tips:
(1) Write down notes while re-watching lectures.
(2) Reproduce notebooks in the style described by @radek - Jeremy mentioned it in class.
(3) Learn topics on demand - videos, MOOCs, blog posts. Each fast.ai lecture is to be treated as a springboard into all the ideas/topics covered. Don’t be afraid to scour the web.
(4) Try to teach others - this is very important as it solidifies your understanding.

Don’t give up. It’s hard…but that’s what cutting edge is meant to be.

12 Likes

When a true master of a discipline performs an action, it seems effortless. Consider great pianists - oh how easy it seems to just sit in front of a piano, close your eyes, sway a bit and the music just flows. But then we go home and it doesn’t seem to be that easy.

Same with watching Jeremy. We see those outstanding notebooks incorporating research findings from many, many years and we think: ‘oh, this is cool and Jeremy makes it seem simple, nice!’.

What we don’t see is how much effort Jeremy - being an absolutely out of this world practitioner and teacher - put into creating the notebooks. I can only guess there have been a lot of set_trace calls involved and other development practices that we don’t see. And it is not like Jeremy has been doing this for a couple of months nor that fastai is the first library that he authored.
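(For anyone unfamiliar, `set_trace` is the usual way to pause Python mid-execution and poke at variables interactively. A tiny, hypothetical sketch of the practice - my own illustration, not Jeremy's actual code:)

```python
import pdb  # Python's built-in debugger; IPython's set_trace works similarly in notebooks

def scale_and_shift(x):
    y = x * 2
    # pdb.set_trace()  # uncomment to pause here and inspect x and y interactively
    return y + 1

print(scale_and_shift(3))  # prints 7
```

While paused you can print variables, step line by line (`n`), or continue (`c`) - which is exactly the kind of invisible iteration that polished notebooks hide.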

We see notebooks shared on the forums by students but we don’t see how much has been copied over nor how long it took us to piece things together.

Just to put things in perspective, I don’t get much of this either and I have given all this my best since last October. I’ve put countless hours into the fastai courses. This is from my last coding session:

  • I had to read up on how broadcasting works in numpy, since I didn’t quite get it. BTW I think this comes from the book you shared on Twitter.
  • I noticed in a notebook I worked on some time ago that I didn’t include a relu activation after every second layer - and I completely missed it! I spent a good 20 minutes trying to figure out whether it really wasn’t there and whether it should be there, as the model was performing somewhat in line with expectations. D’oh.
  • Reading the PyTorch docs (the wrong version, on top of that) to confirm I understood what they refer to as logits.
  • For the trillionth time in my life, googling how to disable the axes when plotting images.
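For anyone else tripped up by the same things, here is a tiny sketch (my own illustration, not from the lectures) of numpy broadcasting and of what "logits" usually means - raw, unbounded scores before a sigmoid/softmax turns them into probabilities:

```python
import numpy as np

# Broadcasting: a (3, 1) column and a (4,) row stretch to a common (3, 4) shape.
col = np.arange(3).reshape(3, 1)   # shape (3, 1)
row = np.arange(4)                 # shape (4,)
grid = col * 10 + row              # shape (3, 4) - no loops needed
assert grid.shape == (3, 4)

# "Logits" are the raw scores a model outputs *before* squashing into probabilities.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

logit = 0.0
prob = sigmoid(logit)              # a logit of 0 maps to a probability of 0.5

print(grid[2, 3], prob)            # prints: 23 0.5
```

And for that trillionth google: in matplotlib, `plt.axis('off')` (or `ax.set_axis_off()` on an Axes object) hides the axes when showing images.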

My other recent claims to glory involve:

  • Having to write an article so that I could semi-comfortably figure out how to use Twitter.
  • Having to write an article to convince myself to start using samples instead of training on the whole dataset and staring like a zombie at the computer screen.

There are a couple of things that could be happening. Maybe I am just not smart enough. Could be. Maybe I spend too much time on the forums instead of doing actual work. Could be.

Or maybe part of the reason why it is so tough at times is what @sermakarevich mentioned:

and more than that. We see the results of other people’s work, but we don’t see how much they struggle. And if they struggle even 1/4 as much as I struggle, then they struggle a lot. And I don’t think even people who perform - in my eyes - on a superhuman level place such emphasis on debugging because all of this comes easy. I don’t think it does, even for them.

I think other people have given great advice here. All I wanted to say is that if you find this overwhelming, you are not alone - and to list some of the reasons why it might seem like we are doing worse than others when we actually are not.

I find that what works best for me is playing with toy problems. If I can figure out how to train a fully connected network consisting of a layer or two on MNIST, maybe I can figure out how to train a conv net. And from that, maybe I can figure out how to create a resnet.

If I can plot images of objects and bounding boxes (and I can now, because most of the time I spent on lesson 8 went into this), maybe I can get the model to output 4 numbers and see if it learns anything. And if I can do this, maybe over this week I will be able to implement something from what Jeremy covered in lecture 9. And if I don’t - oh well, maybe I will need another two months after this course to finish the material (I still haven’t finished part 1 because I got sick). As long as I have not given up and I am moving forward, even if at what seems to be a snail’s pace, I think I am doing well. (BTW, I am writing these words not only to convince you, but also to convince myself that all is well :slight_smile: )
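In that spirit, here is a from-scratch numpy sketch of the "layer or two" toy problem (my own illustration, with made-up toy data rather than MNIST) - a two-layer fully connected net with the easily-forgotten ReLU between the linear layers, trained by plain full-batch gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = x0 + x1 from a handful of points.
X = rng.normal(size=(64, 2))
y = X.sum(axis=1, keepdims=True)

# Two-layer fully connected net: linear -> ReLU -> linear.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

losses = []
lr = 0.05
for _ in range(200):
    # Forward pass - note the explicit ReLU, the bit that is easy to forget!
    h = np.maximum(X @ W1 + b1, 0)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))

    # Backprop by hand (full-batch gradient descent on MSE).
    grad_pred = 2 * err / len(X)
    gW2 = h.T @ grad_pred
    gb2 = grad_pred.sum(axis=0)
    grad_h = (grad_pred @ W2.T) * (h > 0)   # ReLU gradient mask
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Once something like this works, swapping in real data and a framework is a much smaller jump - which is exactly the point of the toy-problem approach.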

27 Likes

That’s a great motivation.

Thanks a lot, and I surely understand the pain Jeremy would have gone through to create a notebook which runs without complaining, because I am completely self-taught in this field.

I understood this pain when, for the first time in my life, I participated in a hackathon at Analytics Vidhya and won it… (it took me 4 days to build a model, i.e. a notebook which ran completely).

But it would be great if Jeremy revealed his scratch notebook…

Thank you all again - I’ve mustered some of my lost confidence back…

Enjoy Deep Learning…

2 Likes

That is quite an achievement, congrats! :slight_smile:

1 Like

Follow the 3 steps of the eternal learning approach. Here are the steps:

  1. Listening - Why? To address “I don’t know” - get all inputs without any filters.
  2. Reflection - Why? To address “I don’t understand” - at this step, try to get all your doubts cleared / understand from all aspects.
  3. Contemplation / Meditation - Why? To address “I understand but I don’t have any experience” - to clear contradictory understanding, until you have become one with the knowledge.
    If it is not yet your own understanding / experience, repeat the above steps until you and the knowledge have become one.
2 Likes

This is a great discussion. It took me months to create this notebook. Not months of progress, but months of continuous failure. The difference between those who succeed and those who don’t is the ones that succeed didn’t give up! :slight_smile:

Also, I spend at least 50% of every day learning and/or practicing new things, and have done so since I made that commitment at the age of 18. (Nowadays it’s about 90% of my day). I don’t watch TV or play computer games or get lost in social media so I maximize the time I spend on things I care about. So over time I’ve gotten faster at doing stuff, since I’ve been practicing and learning lots.

I don’t have a “from scratch” notebook to share, since I’m continuously refactoring and experimenting within one notebook, and what you see is the result. But @radek has done a good job in earlier posts documenting some of the kinds of tools and approaches I use.

One of our MOOC students, Louis Monier, is known as the “father of internet search” - he was the CTO of the first big web search company (Alta Vista). He’s a pretty smart guy! He told me he had watched the part 1 (2017) videos so many times that he knew much of them off by heart. He also practiced on a home deep learning project whilst watching the videos, and IIRC he spent something like 6 months working on that. If Louis needs to study this much, then the rest of us should probably expect to work even harder if we want to master the material.

Today, Louis is the head of the AI Lab at AirBnB. So I guess the hard work can pay off… :wink:

71 Likes

Since we are on the topic of learning in this thread, I wanted to share this with everyone here - without creating an all new thread for it. @ecdrid I think this will also provide some good source of motivation for you!

This is an interview just from yesterday with Pytorch creator Soumith Chintala - he is also a Facebook AI Researcher. If anyone here has the time, I definitely highly recommend watching it as he had some really good insights. BTW the two who interviewed before and after him (Eric Schmidt and Marc Andreessen) also had some pretty awesome things to say.

Some key takeaways:


How did he get started in Deep Learning & Neural Networks?

  • Soumith went to study at NYU because he found out about Yann Lecun (after doing a google search). He ended up meeting Yann (simply by emailing him and asking! :slight_smile: ). Soumith admitted to Yann that he knew nothing about neural networks, so Yann offered help through one of his PhD students. When Soumith graduated from NYU he couldn’t find any jobs in DL, since it wasn’t really a “thing” at the time and positions for DL engineers didn’t exist. Later he realized that Yann had co-founded a startup that was using neural networks, so Soumith ended up initially working there. Eventually, Yann was recruited to head up the FB AI Research Lab and he invited Soumith to join him (once again). - KEY TAKEAWAY: Don’t be afraid to admit when you don’t know something (be honest with yourself and others - don’t pretend to know) and, most importantly, don’t be afraid to ask for help! All it took was Soumith taking that first step of emailing Yann to kickstart his whole career path and eventually create Pytorch. Of course, that is not to downplay his effort - he has most definitely earned everything that he has gained today and must have worked extremely hard to get to the level he is currently at. :sweat_smile:

What does he recommend for people who want to start a career in this field?
You don’t need a 4-year education to learn about and become good at deep learning and neural networks - high school math is enough (as Jeremy loves to say :slight_smile:). You definitely don’t need a PhD. No one on the Pytorch team had a PhD and they still turned out fine. However, doing a course is not enough; you have to apply what you learn. That means creating GitHub projects, writing blog posts, doing Kaggle competitions, and reading & implementing research papers. Don’t just read papers or consume information - you have to put it into practice, and you have to keep your skills sharp. (Oh wait, that sounds a lot like what we are doing here at Fastai :slight_smile:) As a result of doing all this, you are sending signals to others that you have the skills they need. Other people won’t know what you know unless you show them.

What projects is he most interested in?
He likes generative models a lot - GANs

What do you think is the future of this field?
The problem right now is that we still need a lot of data for neural networks to do well in most situations. One thing we really have to figure out is how humans are able to generalize the way we do and pick up on things so quickly with so little data. Is it transfer learning - that humans can transfer all of their skills to generalize to new tasks with very little data? Is the answer that we should all just be doing unsupervised learning, where nothing is labelled and the neural net should be able to figure things out on its own? This is all research that is being worked on.

tfw you just realized you wrote a @radek post :slight_smile: (btw your posts are legendary :+1:)

30 Likes

This is just absolutely amazing. Thanks for sharing @jamesrequa.

My personal experience with PhDs is that there is some gap between academia and business here in Poland. I tried to enter a program twice, but both times, after an initial talk with the assigned professor, I realised we were not speaking the same language. Also, there was no evidence that PhD study was going to be a better experience than just actively studying here and there, participating in different competitions, and working on different projects. But maybe that was because of me, as I am a mechanical engineer and academia was looking for CS students or similar. Still, an internal feeling tells me that if I had the opportunity, I would work on a PhD.

7 Likes

Just thought I’d add a few of my thoughts after some excellent follow up posts above:

I think quite a few people are probably feeling this way at times as well (me included).

But several things keep me motivated:

I have overcome many (many) obstacles and problems that I didn’t know how to solve, and generally found that it’s just a matter of time. It may be some programming problem that takes a few hours or days to work out, or, for example, trying to work out a few years ago how html, css and javascript fit together to build web pages (that took me weeks to get comfortable with). When I get stuck I tell myself: you’ve been stuck on things before, you kept chipping away at them, and you worked them out. Keep working at it and you’ll solve this as well.

The way I have been tackling this course, as someone with a full-time job and family, is to dedicate all my free time to this course and to material related to just the part of the course we are working on. This involves: listening to relevant part 1 lectures on my ride to and from work (part 2 repeats shortly), reading the fast.ai forum in my down time after work, reading relevant papers and blogs mentioned in the forum after the family has gone to bed (I’m too tired during the week to fire up my machine to do any coding), and keeping printouts of relevant DL papers handy at home for when browsing on a device is not the right thing to do.

With regards to notebooks, the general consensus I have read here is to reproduce the nb from scratch. I have been trying at least to reproduce from notes the sections I am familiar with (e.g. model, transforms, learner) and re-typing the bits I am not as familiar with - until they do become familiar, which may take a few repeats.

I have found that the bits I got stuck on when repeating part 1 in prep for this course, and the bits that weren’t fully covered - like submitting to Kaggle - were the parts where, when I re-listened to the lecture (on my way to work), I thought: hah, I already know that.

Lesson 9 covers so many new (to me, anyway) concepts that I’m going to have to break it down into pieces and iterate over it a few times.

6 Likes

Thanks a lot!!!
The real session with Soumith starts at around 2 hr 4 min.

That’s a great share @jamesrequa. I was there at the Intersect on Tuesday and some of the talks were particularly useful. While Soumith’s was fantastic, personally I found “Competing with Skills, Winning with Confidence” super relevant and very informative. There were so many key takeaways for candidates who are job hunting, switching domains, aspiring to get better at their own niche domain, etc. I’m planning to write a post on it, but here’s a brief summary of what they mentioned.

The talk is available at: https://youtu.be/qnjnZzAegXs?t=7306

I made some notes, which follow - I apologise if they aren’t clear:

  • Whenever you don’t understand something, mapping it back to the domain you know and understanding it from first principles always helps. (Elon Musk is a big proponent of thinking in first principles).
  • Even super-confident people sometimes do not know what they’re talking about. It just means they know a few things about something, certainly not in its entirety, but most importantly they are quick to course-correct in case they see a mistake in their understanding.
  • Part of confidence is failing. A lot. (Jeremy always mentions this. 99% of the time the code doesn’t work but after a while, you’re sure of what works and what doesn’t and understand the concept well.)
  • Find self trust. It’s key to being open and gaining confidence over time.
  • Get a lot of feedback. Why do you think the greatest of the athletes / teams have coaches? Because, it just works. Feedback is the most important experience in learning something. (We Machine Learning guys surely know the value of feedback) :wink:
  • Ability and humility to ask.
  • Have a growth mindset instead of a fixed mindset. (Growth may be only a tiny dx every day, but it always amounts to something really valuable over time.)
  • Getting a job is a marketing exercise. Write blogs, GitHub is your best friend, create your own portfolio. More importantly, be authentic.
  • Be an excellent storyteller. Recruiters are not trained to connect your dots. So, figure out your story and present it in the best way possible.
  • Shorten the distance to your dream company: Figure out the engineering blogs, identify the team members on Twitter, talk to them about any interesting insights you found on their blogs, and establish meaningful conversations. Don’t blindly run behind recruiters.
  • Evaluate the work and team you are dreaming of. Be critical.
19 Likes

It was super nice to read the authenticity moment of Aditya and all your insights and reassurances. :slight_smile:

It inspired me to ask the following questions. Does anyone else struggle with this, and how do you fix it?

  • I get a better intuition/understanding of the code whenever I write it down (with pen and paper). Of course, if I have any puzzles I will try coding them, but then again I will return with the results to pen and paper.
  • The same applies when taking notes: I will write down everything that seems important.
  • I tried replacing it with typing in a doc - but it’s not the same.

All this makes me think that I’m a slow learner and that I need too much time to understand/memorize things.

I look forward to seeing if you have any tips for it :grinning:

4 Likes

IMO this is all about focus and concentration. Typing, I would think, is “easier” than handwriting (plus it is harder to draw on the fly in MS Word). I.e., it is easier to type while thinking about something else than to handwrite while not thinking about the subject.

When we think about one thing while doing something else, it is always our thoughts, which are the focus of attention. This suggests that there are at least two thresholds, the higher associated with overt movement and the lower with thought.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4879139/

But hand writing is slower obviously.

What worries me as well is that, at the current pace of the world, “old school” methods of digesting huge amounts of content are becoming too slow, and there has to be some more “modern” and revolutionary way to keep up the level of focus and comprehension.

2 Likes

I recall seeing some research recently that found that on average hand-writing notes led to better comprehension and retention, FYI.

8 Likes

Being an engineering student, if you can make your own self-written notes, you know exactly where everything is to be found (so it actually saves time). There can never be a substitute for that, because the feeling when you write something down with a pen is awesome - the same isn’t true of typing it into a doc…

It might seem a complete waste of time initially, but it isn’t… Scribbling on paper is what we learn first…

Recently I found a lot of cool things the bash shell can do for you. Every time it was usually copy and paste, but now things are completely different: if I don’t type the commands out, I will never remember them, no matter their length. I prefer typing now…

If you feel low on confidence,then

  • Just to remind you: “The pen is mightier than the sword…”
  • This Quote Block from Radek
    ---- When a true master of a discipline performs an action, it seems effortless. Consider great pianists oh how easy it seems to just sit in front of a piano, close your eyes, sway a bit and the music just flows. But then we go home and it doesn’t seem to be that easy.
4 Likes

How do I keep going? By understanding that what Jeremy teaches is the culmination of years of expertise in the field. Trying to sift through all the courses in 7-8 weeks (part 2 especially) is a bit of a stretch. It might take months or a year to get through.

Learning on a know-how basis: instead of working on the assignments/problems given in the notebooks, I try to look at various competitions to implement what has been taught. Example: to learn bounding boxes, I’m going to take an ongoing Kaggle competition to find where the clothes are.

Have a much bigger goal (important): when you have a much bigger goal that you want to achieve with deep learning, your approach towards viewing difficult (or presumed-to-be-difficult) things changes. You develop (in my opinion) a shift in perspective towards “if we cannot comprehend this, how can we achieve our bigger goal?”

3 Likes

Aside from the fantastic course materials, one of the key takeaways I :blue_heart: about this course is “learning how to learn” and improving my learning process along the way, by learning from others through their super nice posts/articles/blogs in the community and in this forum. It’s really inspiring to read, and it keeps me on course when I am struggling.

5 months ago, I bit the bullet and quit my full-time job to master the material, with the goal of getting a job in the “Software Engineering 2.0” field. Prior to that, I had studied many of Stanford’s MOOCs and the first version of the fast.ai courses, but while I did get stuff working and got results, I was left unsatisfied. During that 5-month time-frame, I stayed focused and relearned every single fast.ai lesson. I watched the videos so many times (at 1.5x to 2.0x speed, jumping around the video using the time code and text transcript). Along the journey, I worked on the assignments thoroughly, dove deeper and deeper into the theory by reading papers, read the articles highlighted in the lessons, wrote my personal notes, practiced on my own project (most important), and joined a study group mainly to teach (as a way to solidify my knowledge). I never expected that I would really need to study this much to achieve what I intended. And the learning continues. So, never give up, and don’t underestimate the time commitment.

16 Likes

Edited out of here to a different topic, as I decided the question is irrelevant for a sticky :slight_smile:

Hi all, I am unable to use the notebooks in the v0.7 virtual env (named fastai). fastai v1.x is installed in the VM. Below is the thread: