Wiki: Lesson 1

Fastai Part 1 v2 Lesson 1 Timelines, done.
@jeremy @hiromi

Video timelines for Lesson 1

  • 00:00:01 Welcome to Part 1, Version 2 of “Practical Deep Learning for Coders”.
    Check the fast.ai community forums for help setting up your system.

  • 00:02:11 The “Top-Down” approach to study vs the “Bottom-Up”.
    Why you want an NVIDIA GPU (Graphics Processing Unit, i.e. a video card) for Deep Learning.

  • 00:04:11 Use Crestle if you don’t have a PC with a GPU.

  • 00:06:11 Use Paperspace instead of Crestle for faster and cheaper GPU computing. Technical hints to make it work with a Jupyter Notebook.

  • 00:12:30 Start with Jupyter Notebook lesson1.ipynb ‘Dogs vs Cats’

  • 00:20:20 Our first model: quick start.
    Running our first Deep Learning model with the ‘resnet34’ architecture; epochs and accuracy on the validation set.

  • 00:24:11 “Analyzing results: looking at pictures” in lesson1.ipynb

  • 00:30:45 Revisiting Jeremy & Rachel’s “Top-Down vs Bottom-Up” teaching philosophy, in detail.

  • 00:33:45 Explaining the “Course Structure” of Fastai, with a slide showing its 8 steps.
    Looking at Computer Vision, then Structured Data (or Time Series) with the Kaggle Rossmann Grocery Sales competition, then NLP (Natural Language Processing), then Collaborative Filtering for Recommendation Systems, then Computer Vision again with ResNet.

  • 00:44:11 What is Deep Learning? A kind of Machine Learning.

  • 00:49:11 The Universal Approximation Theorem, and examples of its use at Google.

  • 00:58:11 More examples using Deep Learning, as shown in the PowerPoint from Jeremy’s ML1 (Machine Learning 1) course.
    What is actually going on in a Deep Learning model, with convolutional networks.

  • 01:02:11 Adding a non-linear layer to our model: sigmoid or ReLU (Rectified Linear Unit); SGD (Stochastic Gradient Descent).

  • 01:08:20 The paper “Visualizing and Understanding Convolutional Networks”; its use in ‘lesson1.ipynb’; ‘cyclical learning rates’ in the fastai library via “lr_find”, the learning rate finder.
    Why it starts training a model but stops before reaching 100%: that is the learning rate schedule finder at work.

  • 01:21:30 Why you need the Numpy and Pandas libraries with Jupyter Notebook: hit ‘Tab’ for auto-completion, or ‘Shift-Tab’ once, twice, or three times to bring up the documentation for the code.
    Enter ‘?’ before a function to see its documentation, or ‘??’ to look at its source code in detail.

  • 01:24:40 Using the ‘H’ shortcut in Jupyter Notebook, to see the Keyboard Shortcuts.

  • 01:25:40 Don’t forget to turn off your session in Crestle or Paperspace, or you end up being charged.
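As a rough illustration of the 01:02:11 item above (a linear layer, a ReLU non-linearity, and SGD updates), here is a minimal pure-Python sketch; the toy one-parameter "network", the fake dataset, and the learning rate are all made up purely for illustration, and this is nothing like the real resnet34 training loop:

```python
import random

random.seed(0)

def relu(x):
    # The non-linearity: zero for negative inputs, identity otherwise.
    return max(0.0, x)

# Fake dataset whose target function is y = 3 * relu(x) + 1.
data = [(x / 10.0, 3 * relu(x / 10.0) + 1) for x in range(-10, 11)]

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    random.shuffle(data)              # "stochastic": visit samples in random order
    for x, y in data:
        pred = w * relu(x) + b        # linear layer on top of the non-linearity
        err = pred - y                # d(loss)/d(pred) for loss = 0.5 * err**2
        w -= lr * err * relu(x)       # gradient step on each parameter
        b -= lr * err

print(round(w, 2), round(b, 2))       # should recover roughly w=3, b=1
```

The same three ingredients (linear layers, ReLU between them, SGD on a loss) are what the full-size model in the lesson is built from, just repeated many times over.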

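The ‘?’ and ‘??’ shortcuts from the 01:21:30 item have plain-Python equivalents in the standard inspect module, which can help outside a notebook too. A small sketch; the plots function below is just a stand-in defined here, not the real course helper:

```python
import inspect

def plots(ims, figsize=(12, 6)):
    """Plot a list of images in a grid (stand-in for the course helper)."""
    return len(ims)

# In a Jupyter notebook, `plots?` shows the signature and docstring,
# and `plots??` shows the full source; these are the script equivalents:
print(inspect.signature(plots))   # what `?` shows at the top
print(inspect.getdoc(plots))      # the docstring shown by `?`
print(inspect.getsource(plots))   # the source shown by `??`
```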

Yay! Perfect timing :slight_smile:

ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 import utils; reload(utils)
      2 from utils import plots

D:\courses-master\deeplearning1\nbs\ in <module>()
     16 from numpy import newaxis
     17 import scipy
---> 18 from scipy import misc, ndimage
     19 from scipy.ndimage.interpolation import zoom
     20 from scipy.ndimage import imread

~\Anaconda3\envs\kj\lib\site-packages\scipy\ in <module>()
     51 from .common import *
     52 from numpy import who, source, info as _info
---> 53 from scipy.interpolate._pade import pade
     54 from scipy.special import comb, factorial, factorial2, factorialk, logsumexp

~\Anaconda3\envs\kj\lib\site-packages\scipy\ in <module>()
    174 from __future__ import division, print_function, absolute_import
--> 176 from .interpolate import *
    177 from .fitpack import *

~\Anaconda3\envs\kj\lib\site-packages\scipy\interpolate\ in <module>()
     20 import scipy.linalg
---> 21 import scipy.special as spec
     22 from scipy.special import comb

~\Anaconda3\envs\kj\lib\site-packages\scipy\ in <module>()
    638 from .sf_error import SpecialFunctionWarning, SpecialFunctionError
--> 640 from ._ufuncs import *
    642 from .basic import *

ImportError: DLL load failed: The specified module could not be found.

@jkashish18 that’s not enough information for anyone to be able to help you. In fact, spending no time at all telling us what you’re doing, what you’ve tried, etc makes it look like you’re investing far less time on your problem than you’re asking other people to spend helping you. You may find these tips useful:


Just wondering: I noticed there is content in the Lesson 1 IPython Notebook that was not covered in the video, like fine-tuning, analyzing results, the confusion matrix, etc. Are we expected to cover this on our own, or will it be covered later?


I started DL Part 1 a few weeks ago on the 2017 version based on Theano, and had the setup running well on a P2 instance on AWS. I am now excited to move to PyTorch to continue this class, and I am restarting from the beginning with lessons 1 & 2.

I have tried running the notebook for lesson 1, and I have noticed that the performance seems to be lower than when using Theano/Keras. I first thought this was because the model is more complex, but running the updated class’s notebook with VGG16 still gives me performance approximately 50% lower than with Theano.

PyTorch + fastai wrapper, VGG16:
nvidia-smi results show only 1 GB / 11 GB of GPU memory used

Theano + Keras, VGG16:
Completes in 7 min vs 14 min for PyTorch

They are all running on the same machine (a P2 instance). Is this kind of performance drop expected?
I installed everything myself, running CUDA 8 / cuDNN 6.

I ran the code until the end, and I may be confused because it seems to behave differently from Keras, which I am used to. While the first phase is very slow, the second phase runs very fast when it goes through the epochs.

The GPU memory allocated remains very low through the whole process (1 GB / 11 GB). What does the preliminary step do? Does it pre-process the images? This would be, I believe, very different from the Keras approach, which pre-processes batch by batch and therefore gives slower epochs but no pre-processing time.

Or is my configuration too slow for a typical P2 instance?

Many thanks!


It will be covered later.

The first phase is precomputing the activations. Once that’s done, it’s just training the last layer, which is fast.
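To make the two phases concrete, here is a toy sketch of the idea (hypothetical code, not the fastai implementation): the frozen backbone is run over every image once, and the training epochs afterwards only touch the cached activations:

```python
# `backbone` stands in for the frozen resnet34 body; the "features"
# it produces are deliberately trivial, just to show the data flow.
def backbone(image):
    return [sum(image), max(image)]   # tiny stand-in feature vector

images = [[0.1, 0.9, 0.4], [0.8, 0.2, 0.7]]

# Phase 1 (slow, done once): push every image through the backbone and cache it.
cache = [backbone(im) for im in images]

# Phase 2 (fast, every epoch): train only the last layer on the cache,
# never touching the backbone or the raw images again.
for epoch in range(3):
    for feats in cache:
        pass  # last-layer weight update would go here

print(len(cache), cache[0][1])
```

This also explains the low GPU memory use mentioned above: after the one-off pass, only the small cached activations and the final layer are involved.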

Hi @jkashish18,
I think you could try to uninstall and reinstall the scipy library.
Indeed, installing different apps can leave dependencies in a state where some libraries work badly,
and a re-installation often solves the problem.

Hi, I was working through the original version of the course, and now see the 2018 version posted. I’m finishing up lesson 3 in the original course. Would you recommend I switch to the new course to learn the latest techniques, or finish the old course first?

I’m at the exact same stage as you are and just learned about the 2018 version with PyTorch. I’m starting over from Lesson 1, since I expect to catch up in no time with the core concepts already familiar. And in order to truly learn something, it is good to keep re-learning :slight_smile:

Also, at the pace the entire field is progressing, I suggest being very quick to change learning direction when new technologies emerge in ML.


So I started the course a few weeks back… and landed on the first set of videos, where Lessons 1/2/3 + notes were using VGG16… I decided to get Paperspace going, and to my surprise Lesson 1’s content now uses ResNet, and the vgg and utils libs I was going over are very different. Then I checked the site and realised the vids have also been updated. haha.

Do I just restart from scratch, or is the content pretty much the same with just more state-of-the-art algorithms?

Like, VGG16 was used in lessons 1 and 2 in the old vids, which was helping my understanding, but now for 2018 we’re using ResNet.

The main difference is probably the switch from Keras+Theano to PyTorch and the new fastai library on top of it.
The new version uses state of the art models and techniques, but it also offers more interesting tips and tricks.
I watched the first 3 videos of the previous version of the course before going through the new version (I attended the course offline at USF). I would recommend using the new version of the course.


You’re welcome to continue the old course at


Can you define ‘from’ and ‘to’ here, in the create-sample-folder function?

I encountered the same problem. The fix is to run “jupyter nbextension enable --py widgetsnbextension --sys-prefix” as explained in this installation guide.

In the Lesson 1 Jupyter Notebook, in the “Choosing a learning rate” section, I think there is an error in this sentence: “where we simply keep increasing the learning rate from a very small value, until the loss starts decreasing”. Shouldn’t it be ‘increasing’, not ‘decreasing’? @jeremy
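For what it’s worth, the intended behaviour can be sketched in a few lines. This is a toy re-implementation of the idea only, not the fastai lr_find code, and the loss curve is made up: the learning rate grows multiplicatively each step, and the sweep stops once the loss starts increasing sharply:

```python
# Toy learning-rate-finder sketch: grow the learning rate and stop
# once the loss starts blowing up relative to the best seen so far.
def lr_finder(loss_at, start=1e-5, factor=2.0, explode=4.0):
    lr, history, best = start, [], float("inf")
    while True:
        loss = loss_at(lr)            # stands in for one mini-batch of training
        history.append((lr, loss))
        best = min(best, loss)
        if loss > explode * best:     # loss started increasing sharply: stop
            break
        lr *= factor
    return history

# Made-up loss curve that improves until lr ~ 0.1, then diverges.
history = lr_finder(lambda lr: 100 * (lr - 0.1) ** 2 + 0.1)
best_lr = min(history, key=lambda t: t[1])[0]
print(best_lr)
```

So the sweep runs while the loss keeps decreasing and terminates when it starts increasing, which is why the notebook sentence reads backwards.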


A post with one of Favorita’s 1st Place kernels using Keras:
