Help wanted: YouTube chapter markers

Folks, if you’ve got a few moments I’d really appreciate some help adding YouTube chapter markers for each lesson. Would you be able to reply with the time stamps for as much or as little of one of the lessons as you have time for? Have a look at previous replies to see what time-ranges are already covered, so that you don’t double-up on other people’s work. See the first reply for an example of what these look like.

Adding timestamps to a YouTube video lets viewers easily jump to sections of the video, which is very helpful! For example, this is what the 2020 course looks like when I mouse over the time bar of the video - it shows the name of the chapter just above the bar:
image

It also shows clickable links for each “chapter” in the description:

image

I’ll add credits to the video description on YouTube for everyone that helps.

4 Likes

Lesson 1 Practical Deep Learning for Coders, Lesson 1 - YouTube

00:00 - Introduction
00:25 - What has changed since 2015
01:20 - Is it a bird
02:09 - Images are made of numbers
03:29 - Downloading images
04:25 - Creating a DataBlock and Learner
05:18 - Training the model and making a prediction
07:13 - Watch on course.fast.ai
09:20 - What can deep learning do now

09:35 - Dall-e 2
12:33 - Pathways Language Model (PaLM)
17:40 - How the course will be taught. Top down learning
21:25 - Jeremy Howard’s qualifications
24:38 - Comparison between modern deep learning and 2012 machine learning practices
26:31 - Visualizing layers of a trained neural network
29:40 - Image classification applied to audio
30:08 - Image classification applied to time series
30:19 - Image classification applied to user input for fraud detection
32:16 - Pytorch vs Tensorflow
33:43 - Example of how Fastai builds off Pytorch (AdamW optimizer)
37:18 - Using cloud servers to run your notebooks (Kaggle)
40:45 - Bird or not bird? & explaining some Kaggle features
42:15 - How to import libraries like Fastai in Python
42:42 - Best practice - viewing your data between steps
44:00 - Datablocks API overarching explanation
46:40 - Datablocks API parameters explanation
50:40 - Where to find fastai documentation
51:54 - Fastai’s learner (combines model & data)
52:40 - Fastai’s available pretrained models
54:02 - What’s a pretrained model?
54:50 - fine_tune method and how it applies to your data
55:48 - Testing your model with predict method
57:08 - Other applications of computer vision. Segmentation
58:48 - Segmentation code explanation
1:00:32 - Tabular analysis with fastai
1:01:42 - show_batch method explanation
1:03:25 - Collaborative filtering (recommendation system) example
1:07:08 - How to turn your notebooks into a presentation tool (RISE Jupyter IPython Slideshow Extension)
1:07:45 - What else can you make with notebooks?
1:10:06 - What can deep learning do presently?
1:12:33 - The first neural network - Mark I Perceptron (1957)
1:14:38 - Machine learning models at a high level
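(The last two chapters above cover the Mark I Perceptron and machine learning models at a high level. As a rough illustration of that idea only - this is plain Python of my own, not code from the lecture - a single perceptron just takes a weighted sum of its inputs, thresholds it, and nudges its weights whenever it gets an answer wrong:)

```python
# A minimal perceptron sketch (illustrative only -- not fastai code).
# It learns the logical AND function with the classic perceptron update rule.

def predict(weights, bias, x):
    # Weighted sum of inputs, thresholded at zero.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Nudge weights toward the correct answer.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # [0, 0, 0, 1]
```

Modern deep learning stacks many such units with smoother activations and gradient-based updates, but the weighted-sum-plus-update loop is recognisably the same idea.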

7 Likes

Lesson 2 Practical Deep Learning for Coders, Lesson 2 - YouTube
00:00 - Introduction
00:55 - Reminder to use the fastai book as a companion to the course
02:06 - aiquizzes.com for quizzes on the book
02:36 - Reminder to use fastai forums for links, notebooks, questions, etc.
03:42 - How to efficiently read the forum with summarizations
04:13 - Showing what students have made since last week
06:45 - Putting models into production
08:10 - Jupyter Notebook extensions
09:49 - Gathering images with Bing/DuckDuckGo
11:10 - How to find information & source code on Python/fastai functions
12:45 - Cleaning the data that we gathered by training a model
13:37 - Explaining various resizing methods
14:50 - RandomResizedCrop explanation
15:50 - Data augmentation
16:57 - Question: Does fastai’s data augmentation copy the image multiple times?
18:30 - Training a model so you can clean your data
19:00 - Confusion matrix explanation
20:33 - plot_top_losses explanation
22:10 - ImageClassifierCleaner demonstration
25:28 - CPU RAM vs GPU RAM (VRAM)
27:18 - Putting your model into production
30:20 - Git & Github desktop
31:30 - For Windows users
37:00 - Deploying your deep learning model
37:38 - Dog/cat classifier on Kaggle
38:55 - Exporting your model with learn.export
39:40 - Downloading your model on Kaggle
41:30 - How to take a model you trained to make predictions
43:30 - learn.predict and timing
44:22 - Shaping the data to deploy to Gradio
45:47 - Creating a Gradio interface
48:25 - Creating a Python script from your notebook with #|export
50:47 - Hugging Face deployed model
52:12 - How many epochs do you train for?
53:16 - How to export and download your model in Google Colab
54:25 - Getting Python, Jupyter notebooks, and fastai running on your local machine
1:00:50 - Comparing deployment platforms: Hugging Face, Gradio, Streamlit
1:02:13 - Hugging Face API
1:05:00 - Jeremy’s deployed website example - tinypets
1:08:23 - Get to know your pet example by aabdalla
1:09:44 - Source code explanation
1:11:08 - Github Pages

Note: This is the last lesson I’ll be doing chapter markers for. Jeremy mentioned last night that these can be done in about 20 minutes. For me, each lesson takes 3-4 hours to mark up to the level demonstrated in the second post (even watching videos back at 2x speed), so I may be doing something wrong. I’ll let someone else take a stab at them from now on. Perhaps they can develop more efficient processes!

3 Likes

Awesome! I’ve added those to the lesson now :slight_smile:

1 Like

I love the detailed resolution of your timestamps, so the time you spent on it was well worth it for the countless users who will watch this video! Many thanks!

I’ve been trying to do the transcript corrections and I also find that I’m a bit slow - I was actually listening to the videos at 0.75x and still backtracking a bit. (It’s a little different from timestamps, but I can see how it can take some time to do it right.)

Cheers.

2 Likes

Thank you for all the hard work you put into this. You’ve certainly done it at a high level of resolution.

For folks working on chapter markers in the future, I’ve found it helpful to keep hitting the right-arrow key until the video hits a new slide or part of a notebook. That’s generally when I’m discussing a new topic, so is a good place to add a chapter marker.

I suspect going thru lessons carefully enough to add chapter markers will help your brain remember what’s in them and how they’re organised, so hopefully it’s time that’s quite well spent!..

6 Likes

I am with @Raymond-Wu that it takes hours rather than 20 minutes to go through a single lecture and get the markers done. In my case, when I did the markers for lecture 1, it sometimes took me more than 2 hours for just 20 minutes of lecture content. So it seems something is terribly wrong when I compare my hours with Jeremy’s 20 minutes. Fortunately, it is not.

Looking back on the hours I spent watching lecture 1, I was trying to fully appreciate the 1h 25m lecture, which distills Jeremy’s decades of research and hard work. In fact, it helped me recall much of the learning I had done in the 2018 and 2019 versions. Like Jeremy said, the time is well spent, and I actually find myself enjoying pouring even more hours into each lecture.

Of course, if we separate the time spent doing markers from the time spent understanding and recalling, the actual marker work may only take 20 minutes like Jeremy said. But it is no fun to watch lectures just for markers.

Recently I developed a fun way to watch lectures, and this is why I very much liked what Jeremy said about questionnaires in lecture 1. When I watch lectures, I like to add markers to separate the lecture into interesting segments, and to help me appreciate the content of each segment, I reframe it into multiple questions of my own. This not only enhances my understanding but also helps me recall better. It is not without problems for the markers themselves, though.

Doing markers this way, I realize they become very personal: they largely reflect the particular details and insights I found enjoyable and valuable, and they may simply be too detailed. And when I come back to my markers and the lecture again later, I may make further changes. Is this kind of marker too detailed for what you had in mind, @jeremy? (see my Lecture 1 markers below)

  • 00:00 Welcome
    Welcome to Part 1 2022 course
  • start=00:25&end=01:20 Computers can tell birds was a joke
    Were computers smart enough to determine photos of birds before 2015?
  • start=01:20&end=02:09 Download and display an image
    How to download and display a photo of a bird from DuckDuckGo using simple codes?
  • start=02:09&end=03:20 Images are numbers
    What photos/images are actually made of, at least for computers?
  • start=03:20&end=04:10 Download and resize images
    How to create two folders named ‘bird’ and ‘forest’ respectively under a larger folder ‘dest’? How to download 200 images for each category? How to resize and save those images in respective folders?
  • start=04:10&end=04:25 Remove broken images
    How to find broken images and then remove or unlink them from their folders?
  • start=04:25&end=05:10 Prepare the data
    How to create a DataBlock which prepares all the data for building models? How to display the images in a batch?
  • start=05:10&end=06:07 Build and Train a model
    How to build a model and train/finetune it on your local computer?
  • start=05:55&end=07:09 Within 2 minutes you made the joke true
    How to predict or classify a photo of bird with a model?
  • start=07:09&end=07:56 Colab and others
    How to get started running and playing around the codes and models immediately and effortlessly?
  • start=07:56&end=08:37 Questionnaires first
    Why should you read lecture questionnaires before studying the lecture?
  • start=08:37&end=09:22 Searchable Lecture videos
    How do you search and locate a particular moment inside a lecture video?
  • start=09:22&end=12:33 Models turn your words into masterpieces
    Can you create an original masterpiece painting by simply uttering some artistic words?
  • start=12:33&end=13:49 Models can explain
    Can you believe that models today can explain your math problems not just give you a correct answer? Can you believe that models today can help you get a joke?
  • start=13:49&end=14:20 Data Ethics
    Do you know Rachel Thomas has taught a course on practical data ethics?
  • start=14:20&end=16:33 Hey, how are you?
    Jeremy and the fastai community make a serious, continuous effort to help beginners.
  • start=16:33&end=17:41 Superstar alumni with tips
    Do you want to know how to make the most out of fastai?
  • start=17:41&end=20:01 Learn naturally
    Do you know people learn naturally (better) with context rather than from a theoretical curriculum? Do you want this course to make you a competent deep learning practitioner through context and practical knowledge? If you want theory from the ground up, go to part 2 of fastai 2019
  • start=20:01&end=21:25 Course textbook
    Do you know that learning the same thing in different ways betters understanding?
  • start=21:25&end=24:38 Take it seriously
    Why you must take this course very seriously? (Personally, I think it’s truly a privilege to be taught by Jeremy and to be part of the fastai family. I didn’t appreciate it enough as I should 4 years ago.)
  • start=24:38&end=26:19 Create features manually
    Why did we need so many scientists from different disciplines to collaborate for many years in order to design a successful model before deep learning?
  • start=26:19&end=29:14 Create superior features automatically
    Why can deep learning create a model to tell bird from forest photos in 2 minutes, which was impossible before 2015? Would you like to see how much more advanced/complex the features discovered by deep learning are than those designed by groups of interdisciplinary scientists?
  • start=29:14&end=30:45 Turn sound, time series, movement into images
    Aren’t all things data - sound, time (series), movement? Aren’t images just one way of expressing data? Why not store or express data (of sound, time, movement) in the form of images? Can image-based algos learn on those images no matter how weird they appear to humans?
  • start=30:45&end=32:16 Transfer learning liberate DL for everyone
    Can I do DL with no math (I mean with high school math)? Can I train DL models with hand-made data (<50 samples)? Can I train state of art models for free (literally)?
  • start=32:16&end=33:43 Hi, Pytorch! (Farewell, Tensorflow)
    Which should I invest my life in DL software field, Pytorch or Tensorflow?
  • start=33:43&end=35:50 Fastai = Pytorch + best practice
    Why should you use fastai over pure pytorch? Don’t you want to write less code, make fewer errors, and achieve better results? Don’t you want a robust and simple tool used by your future colleagues and bosses?
  • start=35:50&end=40:35 Jupyter Notebook = Code + Write + Run on cloud
    Why is jupyter notebook the most loved and tested coding tool for DL? Do you want Jeremy to show you how to use Jupyter notebook hand by hand?
  • start=40:35&end=41:22 Jupyter on cloud: first best practices
    How to make sure your notebook is connected in the cloud? How to make sure you are using the latest updated fastai? #best-practice
  • start=41:22&end=43:56 Get started with the bird/forest notebook
    Doesn’t fastai feel like python with best practices too? How to import libraries to download images? How to create and display a thumbnail image? Always view your data at every step of building a model #best-practice How to download and resize images? Why do we resize images? #best-practice
  • start=43:56&end=45:32 Myth - data massaging vs model tweaking
    Why a real world DL practitioner spend most of the valuable/productive time preparing data rather than tweaking models? Can super tiny amount of models solve super majority of practical problems in the world? Have fastai selected and prepared the best models for us already?
  • start=45:32&end=46:10 Best practices from other languages added
    Does Jeremy add best practices of other programming languages into fastai? Jeremy loves functional programming
  • start=46:10&end=50:31 DataBlock: what does it do inside #best-practice
    How fastai design team decide what tasks should DataBlock do? task 1: Which blocks of data do DataBlock need to prepare for training? task 2: How should DataBlock get those data, or by what function/tool? task 3: Should we always ask DataBlock to keep a section of data for validation? task 4: Which function or method should DataBlock use to get label for y? task 5: Which transformation should DataBlock apply to each data sample? task 6: Does dataloader do the above tasks efficiently by doing them in thousands of batches at the same time with the help of GPUs?
  • start=50:29&end=51:27 Docs: Tutorials and API
    What is the most efficient way of finding out how to use e.g., DataBlock properly? How to learn DataBlock thoroughly?
  • start=51:27&end=52:33 What is a Learner
    What do you give to a learner, e.g., vision_learner?
  • start=52:33&end=53:46 TIMM: largest collection of CV models
    Is fastai the first and only framework to integrate TIMM? Can you use any model from TIMM in your project? Where can you learn more about TIMM?
  • start=53:46&end=54:51 Resnet 18: a pretrained model
    What is a pretrained model like Resnet18? What did this model learn from? What comes out of this model’s learning - i.e., what is Kaggle downloading exactly?
  • start=54:51&end=55:34 Fine tuning
    What exactly does fine tuning do to the pretrained model? What does fine-tuning want the model learn from your dataset compared with the pretrained dataset?
  • start=55:34&end=56:48 Prediction
    How to use the fine tuned model to make predictions?
  • start=56:48&end=58:49 Other CV model uses: Segmentation
    Can we fine tune pretrained CV models to tell us the object each and every pixel on a photo belong to?
  • start=58:49&end=01:00:31 Specialized DataLoaders
    Why do we need specialized DataLoaders like SegmentationDataLoaders given DataBlock?
  • start=01:00:31&end=01:03:22 Non-CV: Tabular analysis
    What can tabular analysis do? Can we use a bunch of columns to predict another column of a table? How do you easily download all kinds of datasets for training with fastai? untar_data What are the parameters for TabularDataLoaders? What best practice (show_batch) did fastai learn from Julia (another popular language)? Why use fit_one_cycle instead of fine_tune for tabular datasets?
  • start=01:03:22&end=01:06:54 Non-CV: Collaborative filtering
    Can we use collaborative filtering to make movie recommendations for users? How does recommendation system work? Can collaborative filtering models learn from data of similar music users and recommend/predict music for new users based on how similar they are to existing users?
  • start=01:04:34&end=01:06:53 Recommending with collaborative filtering
    How to download a dataset for collaborative filtering models? How to use CollabDataLoaders? How to build a collaborative filtering model with collab_learner? What is the best practice for setting y_range for collab_learner? #best-practice If in theory there is no reason to use pretrained collab models, yet fine_tune works as well as fit or fit_one_cycle, is there a good explanation for it? #question How to show results of this recommendation model using show_results?
  • start=01:06:53&end=01:10:06 Jupyter Notebook: everything you need
  • start=01:10:06&end=01:12:33 Deep learning: its present capacity span
    What can Deep Learning do at the present? What are the tasks that deep learning may not be good at?
  • start=01:12:33&end=01:13:21 First neuralnet model in 1959
    Has the basic idea of deep learning changed much since 1959?
  • start=01:13:21&end=01:14:37 Programs before machine/deep learning
    What did we write into programs/models before deep learning? How to draw chart in jupyter notebook?
  • start=01:14:37&end=01:20:25 Deep learning theory in 5 minute
    What is a model? What are weights? How do data, weights and model work together to produce a result? Why are the initial results no good at all? Can we design a function to tell the model how well it is doing? (a loss function) Then can we find a way to update/improve the weights based on how badly/well the model is learning each time from the data? If we can iterate this cycle multiple times, can we build a powerful model?
  • start=01:20:28&end=01:25:08.70 Homework
    Run notebooks, especially the bird notebook. Create something interesting to you based on the bird notebook. Read the first chapter of the book. Be inspired by all the amazing student projects.
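(As a side note on the “Deep learning theory in 5 minutes” segment: the loop it describes - a model with weights, a loss saying how badly it’s doing, and repeated weight updates - can be sketched in a few lines of plain Python. This is my own toy illustration, not lecture code:)

```python
# A toy version of the training loop: a "model" is just a weight, a loss
# measures how bad the outputs are, and we repeatedly nudge the weight
# in the direction that reduces the loss.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

def model(w, x):
    return w * x

def loss(w):
    # Mean squared error over the dataset.
    return sum((model(w, x) - y) ** 2 for x, y in data) / len(data)

w = 0.0   # starting weight: the initial results are no good at all
lr = 0.01
for _ in range(200):
    # Gradient of the MSE loss with respect to w.
    grad = sum(2 * (model(w, x) - y) * x for x, y in data) / len(data)
    w -= lr * grad  # update the weight to make the loss smaller

print(round(w, 3))  # close to 2.0
```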
2 Likes

Yeah sounds like my bad for not quite realising that I’m obviously going to do this much faster than other people because as soon as I see a frame from the video I know exactly what it’s about!

IMO, yes it is – I don’t think it’ll be so easy to use in the YouTube time selector, because there are so many markers and the sections get really small. I think generally people want to jump to a more general topic, rather than a very specific bit of a topic. Especially since on the course website viewer people can use the transcript search to jump to a timestamp. Also, long chapter marker text won’t be visible in the YouTube viewer.

(Having said all that, I think your detailed map of the lesson is really helpful for folks wanting to remember what was covered, and make sure they didn’t miss anything. So if you’re planning to write these detailed notes each week for your own purpose, please do pop them here in this thread as well so that students can benefit from them!)

BTW, you don’t need the “start=” … “end=” bits - just the mm:ss timestamp itself is needed.
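(For anyone converting notes already written in the start=…&end=… style, a hypothetical throwaway script - the function name is my own - can strip them down to the bare mm:ss form:)

```python
import re

# Hypothetical helper: turn lines like
#   "start=00:25&end=01:20 Computers can tell birds was a joke"
# into the plain "mm:ss Title" form YouTube expects.

def to_youtube_chapter(line):
    match = re.match(r"start=([\d:.]+)&end=[\d:.]+\s+(.*)", line.strip())
    if not match:
        return line.strip()  # already plain, leave it alone
    start, title = match.groups()
    return f"{start} {title}"

print(to_youtube_chapter("start=00:25&end=01:20 Computers can tell birds was a joke"))
# 00:25 Computers can tell birds was a joke
```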

2 Likes

Of course, I will keep posting these long detailed notes. And I will use the plain timestamp format you suggested next time.

1 Like

You should probably do the first 10 minutes or so of Lesson 3 to show what you mean. Your original example was incredibly detailed, containing markers every 30 seconds to 2 minutes.

Good point. I guess the start of lesson 1 moved pretty fast! Here’s the first ~20 mins of lesson 3:

00:00 Introduction and survey
01:36 “Lesson 0” How to fast.ai
05:28 Highest voted student work
09:00 Paperspace and JupyterLab
12:15 Making a better pet detector
13:52 Which image models are best
19:35 Creating the application
21:24 What’s in a model

1 Like

Lecture 2 (revised on May 23rd)

Let’s turn your model into a web app

00:00 New exciting content to come

  • Can there be substantial new content given we have already 4 versions and a book?

00:57 Ways of reading the book

  • How many channels available for us to read the book? (physical, github, colab and others)

01:28 Extra sweets from the book

  • Are there interesting materials/stories covered by the book not the lecture?

  • Where can you find questionnaires and quizzes of the lectures?

02:06 aiquizzes.com

  • Where can you get more quizzes of fastai and memorize them forever?

02:38 Introducing the forum

  • How to make the most out of fastai forum?

04:12 Students’ works after week 1

06:08 A Wow moment

  • Will we learn to put model in production today?

06:46 Find a problem and some data

  • What is the first step before building a model?

07:07 Access to the magics of Jupyter notebook

  • Do you want to navigate the notebook with a TOC? #jupyter

  • How about collapsable sections?

  • How about moving between start and end of sections fast?

  • How to install jupyter extensions

09:48 Download and clean your data

  • Why use ggd rather than bing for searching and downloading images? #code

  • How to clean/remove broken images?

11:06 Get to docs quickly

  • How to get basic info, source code, full docs on fastai codes quickly?

12:40 Resize your data before training

  • How can you specify the resize options to your data? #code

  • Why should we always use RandomResizedCrop and aug_transforms together? #best-practice

  • How do RandomResizedCrop and aug_transforms differ?

16:56 Data images instantly transformed not copied

  • When resized, are we making many copies of the image? #best-practice

17:54 More epochs for fancy resize

  • How many epochs do we usually go when using RandomResizedCrop and aug_transforms? #best-practice

18:58 Confusion matrix: where do models get wrong the most?

  • How to create confusion matrix on your model performance? #code

  • When to use confusion matrix? (category) #best-practice

  • How to interpret confusion matrix?

  • What is the most obvious thing does it tell us? #question

  • How hard is it to tell grizzly and black bears apart?
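(If you haven’t met a confusion matrix before, here’s a tiny hand-rolled version in plain Python - fastai’s interp.plot_confusion_matrix draws the same idea as a heatmap. Rows are actual classes, columns are predicted classes, so off-diagonal cells are the mistakes; the bear labels below are just made-up sample data:)

```python
# Minimal confusion-matrix sketch (plain Python, illustrative only).

def confusion_matrix(actual, predicted, labels):
    # matrix[i][j] counts samples whose actual class is labels[i]
    # and whose predicted class is labels[j].
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for a, p in zip(actual, predicted):
        matrix[index[a]][index[p]] += 1
    return matrix

labels    = ["grizzly", "black", "teddy"]
actual    = ["grizzly", "grizzly", "black", "black", "teddy", "teddy"]
predicted = ["grizzly", "black",   "black", "grizzly", "teddy", "teddy"]

for label, row in zip(labels, confusion_matrix(actual, predicted, labels)):
    print(label, row)
# grizzly [1, 1, 0]
# black [1, 1, 0]
# teddy [0, 0, 2]
```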

20:22 Check out images with worse predictions

  • Does plot_top_losses give us the images with the highest losses? #code

  • Are those images merely ones where the model made a confidently wrong prediction? #best-practice

  • Do those images also include ones where the model made a right prediction unconfidently?

  • What does looking at those high-loss images help with? (getting expert examination, or simple data cleaning)

22:08 What if you want to clean the data a little

  • How to display and make cleaning choices on each of those top loss images in each data folder? #best-practice

  • Without the expert knowledge to tell grizzly and black bears apart, we can at least clean the images that mess up the teddy bears.

24:44 Myth breaker: train model and then clean data

  • How can training the model help us see the problem of dataset? #best-practice

  • Won’t we have more ideas for improving the dataset once we spot its problems? #surprise

25:23 Turn off GPU when not using

  • How to use GPU RAM locally without much trouble?

26:17 Watch first, then watch and code along

  • What is the preferred way of watching lectures and coding along, according to the majority of students?

27:19 A Gradio + hugging face tutorial

30:19 Git and GitHub Desktop

  • Is GitHub Desktop a less cool but easier and more robust way to do version control than git?

31:31 Terminal for windows

  • How to set up terminal for windows?

  • Why does Jeremy prefer Windows to Mac? #surprise

29:00 Get started with Hugging Face Spaces

33:45 Get the default App up and running

  • How to use git to download your space folder?

  • How to open vscode to add app.py file?

  • How to use vscode to push your space folder up to hugging face spaces online?

  • then go back to your space on Hugging Face to see the app running

37:10 Train and download your model

  • Where is the model we are going to train and download from Kaggle notebook?

  • How to export your model after trained it on Kaggle? #code

  • Where do you download the model?

  • How to open a folder in terminal? open .

  • Make sure the model is downloaded into its own Hugging Face Space folder

41:15 Predict with loaded model

  • How to load downloaded model to make prediction? #code

  • How to make prediction with the loaded model?

  • How to export selected cells of a jupyter notebook into a python file?

  • How to see how long a code runs in a jupyter cell?

44:22 Turn your model into Gradio App locally

  • How to prepare your prediction result into a form gradio prefers? #gradio #code

  • How to build a gradio interface for your model?

  • How to launch your app with the model locally?

  • Not in video: run the code on Kaggle in cloud

48:25 Push this app onto Hugging Face Spaces

  • Make sure to create a new space first, e.g., testing

  • How to turn the notebook into a python script?

  • How to push the folder up to github and run app in cloud?

  • Not in Video: if stuck, check out Tanishq tutorial #trouble-shooting

51:46 How many epochs are ideal for fine tuning?

#best-practice #fine-tuning

53:15 How to save model from colab?

54:24 How to install fastai properly

  • #installation #trouble-shooting #code

  • How to download github/fastai/fastsetup using git? git clone https://github.com/fastai/fastsetup.git

  • How to download and install mamba? ./setup_conda.sh

  • Not in Video: problem of running ./setup_conda.sh

  • How to download and install fastai? mamba install -c fastchan fastai

  • How to install nbdev? mamba install -c fastchan nbdev

  • How to start to use jupyter notebook? jupyter notebook --no-browser

  • Not in Video: other problem related to xcode

59:48 The workflow summary

01:01:04 HuggingFace API + gradio + Javascript = real APP

01:02:42 How easy does HuggingFace API work

01:04:43 How easy it is to get started with JS + HF API + gradio

01:07:20 App example of having multiple inputs and outputs

01:08:09 App example of combining two models

01:09:28 How to turn your model into your own web App with fastpages

01:14:09 How to fork a public fastpages for your own use

Common problems (not in video)

3 Likes

This is great @Daniel – if you do this for each lesson, I can easily modify them into Youtube-ready format! :slight_smile:

1 Like

Sure - this way of watching videos helps me a lot, so I will keep posting more notes for future lectures.

Lecture 0 How to do fastai

The best approaches to do the course

00:00 Two groups of students in general

02:01 The fastai book

02:55 The course part 1 + part 2 = the book

03:22 Finish the course!

  • Finish at least part 1 of the course

  • Set the goal to finish the course

04:27 Finish a project!

  • focus on one project

  • keep polishing it until it is perfect

05:49 What can a project be?

06:53 Be tenacious!

08:27 Radek Osmulski story

10:15 Stop endlessly preparing for doing deep learning

  • What do ‘hard-working’ people do to prevent themselves from actually doing the course?

11:16 What will fastai teach you

  • Does fastai teach everything (from practice to theory to source code) you need to succeed in AI?

12:25 How to get started with coding

  • CS50 from Harvard is a good one

  • but more important than a coding course is the approach and attitude you take

14:00 The missing semester of your CS education

4 pillars to build your coding skill

15:30 Share your work or learning

  • What to share?

  • Why to share?

  • e.g., document your learning for newcomers and share it; people from a similar background may find it very useful, and in the end it will benefit you tremendously

17:25 four steps to do fastai lessons

  • watch video

  • experiment notebooks (do DL to your own brain)

  • reproduce the notebook from scratch (or from notes to codes, from codes to notes)

  • apply what you learnt to your own dataset

  • you don’t have to do the 4 steps in one go - go at your own pace

20:32 Notebook Server vs Linux Server

  • Why should you start the course with notebook server now?

  • Why should you learn to use a linux server sometime in the future?

23:43 Get started with Colab

  • How easy is it to start with Colab?

  • What is one of the biggest problems with colab? 24:24

  • Before running anything, what should you do first? 25:03

  • How to write code and run it in a terminal from a Jupyter cell? 25:50

  • How easy is it to set up everything for this course in one cell?

  • How to connect your colab with your google drive account? 26:17

  • Run the first model in colab 26:46

29:37 Github with Colab

  • How to open notebooks in colab from github?

  • How to search repositories and files to find the clean version of notebooks?

  • How is the clean version different from the original version? 30:04

30:37 Clean version of notebook

  • What exactly is this clean version for?

  • How should we use this clean version of notebooks?

31:26 Questionnaires

  • How to use the questionnaires at the end of each notebook?

  • Where can you find all the answers to the questionnaires?

32:32 Share your model on your dataset

  • Where to go to share your work?

  • How positive and friendly are the people on the forum?

34:29 Wrong ways to do fastai

  • How much math do numerous so-called experts online warn you is needed to study DL?

  • How much math do you actually need to study DL, according to Jeremy? 35:19

  • Start the 4 steps now and you will learn more deep math as needed along the way 35:56

36:41 Start positive learning feedback

  • What can you and your model do now? (mostly surprisingly better than you expected)

  • What can’t you and your model do now? (so that you know where you head to)

  • Document it and see your progress, make it a positive feedback loop for yourself.

37:27 Read and Write code

  • Learn to enjoy reading code from notebooks and the fastai library

  • Try to spend as much time as you can reading and writing good deep learning code

38:07 Immerse yourself in DL world through twitter

  • How to find out enough good stuff to read about DL on twitter?

  • How to get yourself noticed in DL world some day?

40:39 Go blogging

  • What to blog about?

  • Why should you blog?

  • How easy can it be to build your own blog with fastpages?

42:03 A great thing to blog

  • Why is it such a great thing to blog about great DL videos, e.g., Jeremy’s talk on AI?

44:03 How ML differs from other coding

  • Isn’t generalization what sets ML apart from other forms of coding?

  • How to make sure ML models can generalize well on new datasets?

45:15 Why and How to create a good validation set

  • Where is the famous Rachel’s blog post on validation set?

  • Where to search for all fastai blog posts?

46:20 Coding DL is harder than other forms of coding

  • How easy is it for something to go wrong in your DL code?

47:16 Baseline for your project

  • Why and how to build a baseline model for your project?

  • How to build a project for failure?

  • How to build a project for success?

49:51 Kaggle competitions as best projects

  • Why should beginners do Kaggle competitions at some point during the course?

  • Why can Kaggle competitions be your best projects?

  • Can a competition test your end-to-end understanding of DL?

  • How to approach Kaggle competitions so they benefit you in the right way?

52:33 Build your portfolio for jobs

  • posts in the forums

  • GitHub repos

  • Kaggle competitions

  • your own projects

  • Where are you more likely to get appreciated?

55:30 Be among the first to do part 2

  • Who gets to be among the first to do part 2 online?

56:08 How to get started with AWS EC2

Thanks - I’ve added that to the video now! :smiley: (BTW, IIRC the first time-stamp has to be 00:00 for the markers to work.)

Got it! Thanks

Let me try to do Lecture 4.

Lecture 3

  • The theoretical foundations of deep learning

00:00 Introduction and survey

01:36 “Lesson 0” How to fast.ai #learning-tips

  • Where is Lesson 0 video?

  • What does it have to do with the book ‘Meta Learning’ and the fastai course?

02:25 How to do a fastai lesson? #learning-tips

  • Watch while taking notes

  • Run the notebook and experiment

  • Reproduce the notes from the code

  • Repeat with a different dataset

04:28 How not to self-study #learning-tips

  • physical and virtual study groups

  • study with people on the forum

  • Learning with social interaction is better than studying alone

05:28 Highest voted student work

  • Many interesting projects to check out

07:56 Jeremy’s Pets breeds detector

  • Jeremy’s Pets repository

  • What should you do with this app example?

08:52 Paperspace: your DL workstation in the cloud! #great-tools

  • Does Jeremy speak highly of it, and why?

10:16 JupyterLab: truly beginner-friendly #great-tools

  • Why is JupyterLab so good for beginners to take advantage of?

12:11 Make a better pet detector

  • After training, we should think about how to improve it

13:47 Comparison of all (image) models

  • Has anyone compared most of the image models and shared the findings?

  • Where to find the notebook for comparison?

  • Which 3 criteria are used for comparison?

15:49 Try out new models

  • How to select and try out models with high scores

  • Where is the train.ipynb file?

  • How to try models on TIMM?

  • How to compare them by loss?

  • Why is this model actually impressive?

  • What can the name of a model tell us?

  • Why does Jeremy train for only 3 epochs? 18:58

19:22 Get the categories of a model

  • How to get labels or categories info from the model?

  • The rest is what we learned in the last lecture.

20:40 What’s in the model

  • What two things are stored in the model?

21:23 What does model architecture look like?

22:15 Parameters of a model

  • How to zoom in on a layer of a model?

  • How to check out the parameters of a layer?

  • What do a layer’s parameters look like?

23:15 The investigating questions

  • What are the weights/numbers?

  • How can they figure out something important?

  • Where is the notebook on how neural nets work?

23:36 Create a general quadratic function #best-practice

  • How to create a general function that can produce any specific quadratic function by changing 3 parameters?

  • How to generate result from a specific quadratic function by changing 1 parameter?

  • Why do we create such a general (quadratic) function with multiple unknown parameters rather than directly writing a particular quadratic function with specific coefficients?
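The idea in this chapter can be sketched in a few lines: write one general function, then pin its coefficients with `partial` to get a specific quadratic (the name `quad` and the values 3, 2, 1 are just illustrative, not necessarily the ones used on screen):

```python
from functools import partial

def quad(a, b, c, x):
    "A general quadratic: choosing a, b, c pins down one specific quadratic."
    return a * x**2 + b * x + c

# Fix the three parameters to get a specific function of x alone
f = partial(quad, 3, 2, 1)   # f(x) = 3x^2 + 2x + 1
print(f(2.0))                # 3*4 + 2*2 + 1 = 17.0
```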

27:20 Fit a function by good hands and eyes

  • What does fitting a function mean? (searching for better parameters based on the dataset)

  • How to create a random dataset?

  • How to fit a general quadratic function to the dataset by changing 3 parameters with jupyter widgets by hand?

  • What is the limitation of this manual/visual approach?

  • Where is this notebook?

30:58 Loss: fit a function better without good eyes #concept

  • Why do we need loss or loss function?

  • What is mean squared error?

  • How does a loss make the manual/visual approach more accurate and robust?
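As a minimal sketch of the concept (the tensor values are made up for illustration), mean squared error is just the average of the squared differences between predictions and targets:

```python
import torch

def mse(preds, targets):
    "Mean squared error: average of the squared differences."
    return ((preds - targets)**2).mean()

preds   = torch.tensor([2.0, 4.0, 6.0])
targets = torch.tensor([1.0, 4.0, 8.0])
print(mse(preds, targets))   # (1 + 0 + 4) / 3 ≈ 1.667
```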

33:39 Automate the search of parameters for better loss

  • How do we know which way and by how much to update parameters in order to improve on loss?

  • Can you find enough material on derivatives on Khan Academy?

  • What exactly do you need to know about derivatives for now, according to Jeremy? 34:26

  • What is the slope or gradient?

  • Does PyTorch compute the derivative, i.e. the slope/gradient, for us?

  • How to create a function that outputs the MSE loss of a general quadratic function? 35:02

  • What do you need to know about tensors in relation to derivatives for now, according to Jeremy? 36:02

  • How to create a rank-1 tensor (a list of numbers) to store the parameters of the quadratic function? 36:49

  • How to ask PyTorch to prepare to calculate gradients for these parameters? 37:10

  • How to actually calculate the gradient for each parameter, based on the loss this specific function (i.e. 3 specific parameter values) achieves on the whole dataset? 37:38

  • In other words, whenever we calculate the loss we can easily get the gradient for each parameter as well.

  • What does the gradient value mean for each parameter? 38:34

  • How to update the parameters to new values using the gradients produced from the loss? 39:18

  • How to automate the process above to find better parameters that achieve a better loss? 41:05

  • Why is this automation called gradient descent?

  • notebook
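The whole loop described in these bullets can be sketched as follows. The dataset, starting values, learning rate, and step count are made up, but the pattern (`requires_grad_`, `backward`, update, `grad.zero_`) is the standard PyTorch one the lesson walks through:

```python
import torch

def quad(params, x):
    a, b, c = params
    return a * x**2 + b * x + c

def mse(preds, targets):
    return ((preds - targets)**2).mean()

# Toy dataset generated from a known quadratic (a=3, b=2, c=1), no noise
x = torch.linspace(-2, 2, 20)
y = 3 * x**2 + 2 * x + 1

# Rank-1 tensor of parameters; requires_grad_ tells PyTorch to track gradients
params = torch.tensor([1.0, 1.0, 1.0]).requires_grad_()

lr = 0.1   # learning rate: a small multiplier on each step
for step in range(300):
    loss = mse(quad(params, x), y)
    loss.backward()                    # compute gradients of loss w.r.t. params
    with torch.no_grad():
        params -= lr * params.grad     # step each parameter downhill
        params.grad.zero_()            # reset gradients for the next step

print(params)   # should approach [3., 2., 1.]
```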

42:45 The mathematical functions #concept

  • Besides the dataset, loss function, and derivatives, what else is crucial for finding/calculating those parameters?

  • Why can’t we simply use quadratic functions for it?

43:18 ReLu: Rectified linear function #concept

  • Powerful real-world models demand many parameters and complex functions; how complex a function can we come up with?

  • Is it possible to come up with an infinitely complex function simply by adding extremely simple functions together?

  • What could such an extremely simple function look like?

  • What is a rectified linear function? How simple is it? Which part is linear, and which part is rectified?

  • What does a rectified linear function look like in a plot?

  • How to adjust the function’s 2 parameters by hand with a widget?

  • What can the function look like under different parameters? 44:46

45:17 Infinitely complex function #concept

  • How powerful can the addition of extremely simple functions be?

  • How to create a double rectified linear function (double ReLU) and adjust its 4 parameters by hand with a widget?

  • How much more flexible does this double ReLU look compared to a single rectified linear function?

  • Can you imagine how complex a function can be when millions of rectified linear functions are added together?
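A minimal sketch of the two functions in question (the parameter values are chosen arbitrarily for illustration):

```python
import torch

def rectified_linear(m, b, x):
    "A linear function m*x + b whose negative outputs are rectified (clipped) to zero."
    return torch.clip(m * x + b, 0.)

def double_relu(m1, b1, m2, b2, x):
    "Adding just two rectified linear functions already gives a more flexible shape."
    return rectified_linear(m1, b1, x) + rectified_linear(m2, b2, x)

x = torch.linspace(-2, 2, 5)
print(rectified_linear(1.0, 0.5, x))          # negatives clipped to 0
print(double_relu(1.0, 0.5, -1.0, 0.5, x))    # a V-ish shape from two ReLUs
```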

47:36 2 circles to an owl

  • a very concise summary stitching the fundamental ideas of deep learning together

49:21 A chart of all image models compared #best-practice

  • Can it be done by brute-force computation with simple code?

  • Does Jeremy look at the model-comparison chart to find the best models?

  • What is the wrong way for students to use the comparison chart? 50:45

  • How does Jeremy use the chart?

  • How does Jeremy decide which models to try out, step by step?

52:11 Do I have enough data? #best-practice

  • Have you already built a model and trained it on your own dataset?

  • Is the result good enough for you?

  • What mistake does the DL industry often make on this issue? 52:55

  • What is Jeremy’s suggestion?

  • How and where could semi-supervised learning and data augmentation be helpful?

  • What about labeled versus unlabeled data?

54:56 Interpreting gradients in units

  • How much does the loss go down when parameter a increases by a unit of 1? 55:24

56:23 Learning rate

  • Why don’t we update parameter values in large steps?

  • Why does Jeremy draw a quadratic function to stand for the model when zooming in very close on the complex function?

  • What would happen if we updated the parameters by large values? 57:19

  • Does a large drop in loss necessarily demand a large increase in a parameter’s value, given the quadratic nature?

  • What is the learning rate? Why do we need it to be small? How to pick a good value for it? 58:07 #best-practice

  • What would happen if your learning rate is too big?

  • What would happen if it is too small?

59:45 break

1:00:14 Matrix multiplication

  • When the model requires millions of rectified linear functions, how do we calculate fast enough?

  • What do you actually need from linear algebra to do DL? 1:01:33

  • How easy is it to do matrix multiplication? 1:01:51

  • What are the dataset and the parameters in the matrix multiplication?

  • Does matrix multiplication do the rectified part for you?

  • What are GPUs good at? 1:03:49
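A tiny sketch with made-up numbers: each row of the data matrix is one sample, each column of the parameter matrix is one set of coefficients, and a single `@` computes every row's multiply-and-add at once, which is exactly the kind of work GPUs excel at. Note that the matmul itself is purely linear; the rectifying is a separate step:

```python
import torch

data   = torch.tensor([[1.0, 2.0],
                       [3.0, 4.0]])
params = torch.tensor([[10.0],
                       [1.0]])
out = data @ params          # same as multiplying each row by params and summing
print(out)                   # [[12.], [34.]]

# The "rectified" part is not done by the matmul; apply it afterwards:
print(torch.clip(out - 20, 0.))
```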

1:04:22 Build a regression model in spreadsheet

  • Intro to the Titanic competition on Kaggle 1:05:01

  • What is the dataset? 1:05:18

  • What to do with the train.csv file?

  • How to clean the dataset a little bit?

  • How to transform the dataset for matrix multiplication? 1:07:17

  • How to prepare the parameters for matrix multiplication? 1:08:50

  • What’s wrong with the much larger values of the ‘Fare’ column compared to the other columns? 1:09:35

  • What to do with the values of ‘Fare’ and, similarly, the values of ‘Age’?

  • What is normalizing the data?

  • Does fastai do all this normalization for us? Will we learn how fastai does it later?

  • Why apply log to the values of ‘Fare’? 1:10:59

  • Why do we need the values to be evenly distributed? #question

  • How to do MMULT on the dataset and parameters in the spreadsheet? 1:11:56

  • How to use MMULT instead of addition to add a constant?

  • What does the result of our model look like? 1:13:41

  • Does Jeremy simply use a linear regression for the model, without even a ReLU?

  • Can we solve the regression with gradient descent? How do we do it?
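The spreadsheet steps above can be sketched in code. The rows and the coefficient values here are made up (only the column names 'Age' and 'Fare' come from the actual train.csv), and `log1p` stands in for the log trick applied to 'Fare':

```python
import torch

# Toy stand-in for a few cleaned Titanic rows: [Age, Fare] per passenger
raw = torch.tensor([[22.0,   7.25],
                    [38.0,  71.28],
                    [26.0, 512.33]])

# 'Fare' spans a huge range, so take log(1 + Fare) to even it out,
# then divide each column by its max so every value lands in [0, 1]
raw[:, 1] = torch.log1p(raw[:, 1])
normed = raw / raw.max(dim=0).values

# One parameter per column plus a constant; the constant can be folded into
# the matmul by adding an all-ones column (the spreadsheet trick)
ones = torch.ones(len(normed), 1)
X = torch.cat([normed, ones], dim=1)
coeffs = torch.tensor([[0.5], [-0.2], [0.1]])   # illustrative values, not trained
preds = X @ coeffs
print(preds)
```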

1:16:18 Build a neuralnet by adding two regression models

  • What does it take to turn a regression model into a neural net?

  • Why don’t we just add up the results of the two linear functions?

  • Why do we only add the results together after they are rectified?

  • What does the model prediction look like?

  • Now we need to update the parameters of two linear functions, not just one.

1:18:31 Matrix multiplication makes training faster

  • How to make the training use MMULT rather than adding up individual linear multiplications in the spreadsheet?

1:21:01 Watch out! it’s chapter 4

  • Please do try out the Titanic competition

  • Why did chapter 4 drive away most people?

  • Ways to work through the spreadsheet yourself

1:22:31 Create dummy variables of 3 classes

  • Do we only need 2 columns/classes for a dummy variable with 3 classes?
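A small illustration of the point, using pandas' `get_dummies` (the lecture builds the dummy columns in a spreadsheet; the 'Embarked' column name is from the Titanic data, the rows are made up):

```python
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})

# One column per class: each row's class is marked by which column holds a 1
full = pd.get_dummies(df, columns=["Embarked"])
print(full)

# drop_first=True keeps only 2 columns for the 3 classes: the dropped class
# is implied whenever both remaining columns are 0
reduced = pd.get_dummies(df, columns=["Embarked"], drop_first=True)
print(reduced)
```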

1:23:34 Taste NLP

  • What do Natural Language Processing models do?

  • What project opportunities do students who are not native English speakers have?

  • What tasks can NLP do? 1:25:57

1:27:29 fastai NLP library vs Hugging Face library

  • How do these two libraries differ?

  • Why do we use the Transformers library in this lecture?

1:28:54 Homework to prepare you for the next lesson

3 Likes