[Podcast + Writeup] Summaries + Things Jeremy Says to do + Qs

Hi Everyone!

About & Motivation behind starting this

The motivation behind this (wiki) thread is to provide a quick summary of each lecture, plus “Things Jeremy says to do”, which collects Jeremy’s advice and suggestions from the lectures in one place.

Note: since the lectures are not yet public and my hosting platform doesn’t offer a way to keep audio links unlisted, I’m not sure of the best way to share audio. For now I’ll share the YouTube links and release the audio once the MOOC goes live.

Update: While adding recaps for the lectures, I found myself going through the complete lecture threads. I really enjoyed the questions, so I’ve decided to summarise them under individual collapsible headings as well. I hope these are useful.

Lesson 1:

Podcast: To be recorded (please check again before the Lecture 2 livestream; you should see a link before then :tea:)

Summary

  • Introduction to the Top-Down Learning Methodology
  • Tips on setting up an instance: take the path of minimum friction. Learning DL is hard enough by itself; don’t add sysadmin-related tasks to your starting path.
  • Introduction to Jupyter notebooks: REPL, how notebooks work. Note: it might be worth learning notebook shortcuts.
  • What is Deep Learning?
  • Defining ML & NNs
  • Introduction to terminologies: Models, Architectures, weights, updates, SGD, NNs.
  • How a model interacting with its environment can introduce a positive feedback loop: if your data is biased, the loop adds even more bias.
  • Why fastai2 uses a `from ... import *` statement, and why that is generally considered a bad idea.
  • Using doc(): how the fastai docs are created and how to use them (a short sketch follows this list).
  • Structure/design of the API: fastai.X, where X is an application such as tabular/NLP/vision/collaborative filtering.
  • Why do we need a validation dataset?
  • Comparing the API for different applications
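
A rough sketch of the last few points, assuming the standard fastai v2 imports used in the course notebooks (the method passed to doc() is just an example):

```python
# Each application lives under its own fastai.X module:
from fastai.vision.all import *      # computer vision
# from fastai.text.all import *      # NLP
# from fastai.tabular.all import *   # tabular data
# from fastai.collab import *        # collaborative filtering

# doc() shows the documentation for any fastai object, with a link to the source:
doc(ImageDataLoaders.from_name_func)
```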

Things Jeremy says to do

  • Part of the top-down approach is tenacity: stick with the hard parts and practice.
  • Don’t just swing a bat at the ball and “muck around”: find the bits you’re least good at and work on them over time.
  • It doesn’t really matter which stack you learn; you should be able to switch stacks in under a week. The important thing is to learn the concepts, and using an API that minimizes boilerplate code lets you focus on them.
  • Make sure you can spin up a GPU server (a quick check is sketched after this list)
  • See the code and understand how it works, use the doc function
  • Do some searching in docs
  • See if you can run the docs
  • Try to get comfortable and try to find your way around
  • Don’t move on until you can run the code.
  • Read the chapter of the book
  • Solve the questionnaire! Instead of a summary, each chapter of the book ends with a questionnaire that serves the same purpose. If you don’t get a question, come back to it later. Try to answer all the parts based on what we have learned so far.
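
For the “spin up a GPU server” point above, a quick way to check that your instance actually sees a GPU (plain PyTorch, nothing fastai-specific):

```python
import torch

print(torch.cuda.is_available())          # should print True on a GPU instance
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4" on Colab
```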

Questions from Lecture 1 thread

Q: When will the book be released?
A: The official release date is July 14th, but with the pandemic it might be pushed back further.

Q: Why PyTorch?
A: https://www.fast.ai/2017/09/08/introducing-pytorch-for-fastai/

Q: Which platform to use?
A: Anything you feel comfortable with (and don’t spend more than an hour trying to set up just now). The whole point of the top-down approach is to get you to do things first, then get down the rabbit hole.

Even if you are a professional, it is going to take you quite a bit of time to set up your own GPU before you can use it, and that is time not spent learning deep learning. I’d recommend only setting one up at the end of the course, in seven weeks, as a personal side project.

Q: How to put PyTorch models into production?
A by giacomov:

  • The fastai inference system (good for light use or batch inference, not good for intense real-time work). Super easy, and covers most “side-project” kinds of things, including building web interfaces around it with Flask, for example (a minimal sketch follows this list)

  • Completely hosted solutions (there are many around, you can google it). Very easy, but tend to be $$$

  • Save the model as any other PyTorch model, then convert it to the NVIDIA TensorRT (https://developer.nvidia.com/tensorrt) system. This is good for high-performance, parallel inference. It requires a bit of an investment to get right the first time around, but then becomes pretty straightforward. This is what we use in production at my company. Despite the name, you can use it with models from PyTorch, TensorFlow, Caffe…
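
For the first option, a minimal sketch of serving a fastai model with Flask, assuming you have already called learn.export() to produce export.pkl (the route and field names below are made up for the example):

```python
from flask import Flask, request, jsonify
from fastai.vision.all import load_learner, PILImage

app = Flask(__name__)
learn = load_learner('export.pkl')   # load the exported Learner once, at startup

@app.route('/predict', methods=['POST'])
def predict():
    img = PILImage.create(request.files['image'].read())  # image uploaded in the request
    pred_class, pred_idx, probs = learn.predict(img)
    return jsonify({'prediction': str(pred_class),
                    'confidence': float(probs[pred_idx])})

if __name__ == '__main__':
    app.run()
```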

Q: What is the difference between Jupyter, Colab, Paperspace, AWS, etc.?
A: Colab, Paperspace, AWS, etc. are platforms that give you a GPU to work with. Jupyter is the programming interface where you’ll write/run the code that uses that GPU.

Q: What is the theory behind X?
A: At this point in the course, it’s recommended to focus on doing experiments and to come back to the theory later :slight_smile:

Q: Difference between Hyper-parameters and parameters?
A: “Hyperparameters” and “parameters” are often used interchangeably, but there is a difference. Something is a hyperparameter if it cannot be learned by the estimator directly (for example, the learning rate or the number of epochs), whereas the model’s parameters (its weights) are learned from the data. “Parameters” is also used as a more general term: “passing the parameters to the model” usually means a combination of hyperparameters along with other settings that are not learned by the estimator but are required by your model.
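
A tiny illustration of the distinction in plain PyTorch (all the names and numbers below are made up for the example):

```python
import torch
import torch.nn as nn

# Hyperparameters: chosen by you, never learned from the data
lr, n_hidden, n_epochs = 1e-3, 50, 5

# Parameters: the weights and biases inside the model, learned by the optimizer
model = nn.Sequential(nn.Linear(10, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 1))
opt = torch.optim.SGD(model.parameters(), lr=lr)
```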

Q: Good data labelling or annotation services?
A: There are plenty of options: Labelbox, Amazon SageMaker, v7labs.com (focused on medical imaging, but totally usable for other things).

For text/NLP, take a look at Prodigy: https://prodi.gy/

There’s also SMART, which does similar things but is open source.

Other mentions in the thread:

https://www.snorkel.org/ unsupervised labeling

https://www.makesense.ai/

https://github.com/tzutalin/labelImg is also useful for labeling quickly. If doing box labels, you can move through them VERY quickly with a keyboard and mouse.

Q: There is from_name_func for the high-level API, but the mid-level API doesn’t seem to have a matching labeller (it does have RegexLabeller). I’m wondering about the design decision, i.e. why there is no FuncLabeller, and want to understand the thought process.
A: The mid-level API works with a plain label_func; RegexLabeller is just one kind of label_func.
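
A minimal sketch of what that means in the DataBlock (mid-level) API, assuming the usual fastai v2 imports; the regex and the plain function below are hypothetical and assume labels encoded in the filename:

```python
from fastai.vision.all import *

# Any plain function can act as the labelling function...
def label_func(fname): return fname.name.split('_')[0]

# ...and RegexLabeller is just a callable that does the same job on a string
labeller = RegexLabeller(pat=r'^(.+)_\d+\.jpg$')

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=label_func,                  # or: get_y=using_attr(labeller, 'name')
    splitter=RandomSplitter(),
    item_tfms=Resize(224),
)
```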

Q: What are the differences between fastai and fastai_v2?
A: This course is designed to stand alone, so I’d rather not refer to previous versions. fastai v2 is a rewrite from scratch, so the whole API has changed.

Q: What’s the difference between validation loss and error rate in model training output?
A: The error rate is a metric (1 - accuracy), and unlike the loss (cross-entropy) it is not used for computing gradients in backpropagation.

The error rate is calculated on the validation data, not on the training data.

Also, if you optimize your model training too much to get a good score on a particular fixed validation set, even if you haven’t used the validation data to update your model weights, it still might be seen as overfitting the validation set.

The validation loss isn’t calculated with every forward pass. At the end of each epoch on the training data, the model is evaluated on the full validation set, and the validation loss is computed then.

For production, how the model performs on the test set is what ultimately matters most. If you find that your validation metrics are highly inconsistent with your test-set results, take more care in selecting the validation set so that it is a closer representation of the test set.
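
A tiny worked example of the error rate / accuracy relationship, using fastai’s built-in metrics (the tensors here are made up):

```python
import torch
from fastai.metrics import accuracy, error_rate

preds = torch.tensor([[0.9, 0.1],   # predicted class 0
                      [0.2, 0.8],   # predicted class 1
                      [0.6, 0.4]])  # predicted class 0
targs = torch.tensor([0, 1, 1])     # true labels

print(accuracy(preds, targs))    # tensor(0.6667)
print(error_rate(preds, targs))  # tensor(0.3333) == 1 - accuracy
```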

Q: Why is it called fit_one_cycle vs fit ? what is a cycle ?
A: We will learn about that in a few lessons. It’s a specific way of fitting; that’s all you need to know for now.

Q: Quick question - I am running the first notebook and wondering why I get the results of 2 epochs if I am only asking for one?
A: That is because of fine_tune. It does a fit_one_cycle before unfreezing by default.
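
A simplified sketch of what fine_tune is doing under the hood (condensed from the idea above; the real fastai method also handles learning rates and a few other arguments):

```python
def fine_tune(learn, epochs, freeze_epochs=1):
    learn.freeze()                      # train only the new head first
    learn.fit_one_cycle(freeze_epochs)  # <- this is the "extra" epoch you saw
    learn.unfreeze()
    learn.fit_one_cycle(epochs)         # then train the whole model
```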

Important Footnote: I’m trying to beat @muellerzr in terms of “likes” :stuck_out_tongue: Please make sure you leave a like if you find this useful :slight_smile:


Placeholder for Lesson 2:

Summary

Things Jeremy says to do

Questions from Lecture thread


Placeholder for Lesson 3:

Summary

Things Jeremy says to do

Questions from Lecture thread


Placeholder for Lesson 4:

Summary

Things Jeremy says to do

Questions from Lecture thread


Placeholder for Lesson 5:

Summary

Things Jeremy says to do

Questions from Lecture thread


Placeholder for Lesson 6:

Summary

Things Jeremy says to do

Questions from Lecture thread


Placeholder for Lesson 7:

Summary

Things Jeremy says to do

Questions from Lecture thread
