๐Ÿ“ Deep Learning Lesson 2 Notes

(Cedric Chee) #5

Hi Poonam, thank you again for starting this. While we wait for Jeremy to turn this post into a wiki, could you please add the common header section to your notes and link back to the lesson 1 notes?

I have linked to these notes from the lesson 1 notes and from the "lesson 2 official resources and updates". Thanks.

0 Likes

(Poonam Ligade) #6

Thanks Cedric. Will surely do that. :slight_smile:

0 Likes

(Andrea de Luca) #7

Great notes, thank you for sharing! :slight_smile:

1 Like

(Vikas) #8

Pasting the link here so that if the thread is converted into a wiki, it is not lost:

1 Like

(Jeremy Howard (Admin)) #9

Done.

1 Like

(Robert Salita) #10

If you find that the JavaScript code, which collects URLs into a download file, isn't producing the file, or the file is zero length, you may have an ad blocker or popup blocker that's interfering. Turn off the blocker, refresh the page, and try the JavaScript again. In my case, uBlock Origin was causing the problem.

Also, the JavaScript doesn't seem to work on Windows in Microsoft Edge, Firefox, or a private/incognito window; it only works in Chrome. Can anyone verify?

1 Like

(Lankinen) #11

I got my personal notes ready, and as soon as I have time I will copy some parts from there to here. If someone is interested in reading my notes and finds something interesting, you have my permission to just copy-paste it into these notes. And again, if there is too much information to be shared publicly, reply to this message and I will delete that post. Thanks again to Jeremy for this amazing lesson.

1 Like

(Cedric Chee) #12

Detailed lesson 2 notes by Hiromi.

2 Likes

#13

While picking a learning rate, when do we pick 3e-5 vs. 1e-5? There was a part of the lesson 2 video where Jeremy saw the plot and chose 3e-5, and I wasn't sure when 1e-5 would be used instead of 3e-5.

1 Like

(Ashwani Tiwari) #14

After downloading the URL file locally, I have to upload it to Kaggle. The directory created by the Python code is "data/beers/…", so how do I upload the file from my local machine to this directory location? The "Add Dataset" option on Kaggle doesn't show me this location. Please help me.

0 Likes

(Christo MIrchev) #15

Here is the JavaScript to download the links in case you are using DuckDuckGo instead of Google to search for images:
urls = Array.from(document.querySelectorAll('img.tile--img__img.js-lazyload'))
            .map(el => decodeURIComponent(el.src.split('=')[1]));
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
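Once you have the urls file, the next step is the download step, roughly as in the lesson notebook (a hedged sketch assuming fastai v1; the folder and file names below are placeholders, not anything from this thread):

from fastai.vision import *

path = Path('data/bears')                # hypothetical dataset root
dest = path/'black'                      # hypothetical class folder
dest.mkdir(parents=True, exist_ok=True)

# download_images reads one URL per line from the file and fetches each image
download_images(path/'urls_black.csv', dest, max_pics=200)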

1 Like

(matej) #16

I created SGD for square functions. I hope this is the right place to post it:
https://www.kaggle.com/matejthetree/sgd-2?scriptVersionId=15140809
I am bad with plots, so I don't know why it pops up two images at the end :slight_smile:

0 Likes

(Michael Li) #17

I managed to download some images of different Chinese calligraphy artworks. The plan is to build a Chinese calligraphy style classifier. I did all the work in a Kaggle Kernel and uploaded the images as a private dataset to be used by the kernel. My question may be quite dumb, but what kind of license should I use for the dataset? Or should I upload to Kaggle at all without breaking some license? I want to share the notebook on Kaggle once I'm finished, but if the license isn't clear, I don't think I can do that. Could anyone here shed some light on this topic? Much appreciated!

0 Likes

#18

Does anyone know why my epochs always start from zero instead of one (when I run learn.fit_one_cycle)?
Also, why did Jeremy use max_lr=slice(3e-5, 3e-4) instead of slice(1e-5, 1e-4) (after running learn.recorder.plot())?

0 Likes

(Amit JS) #19

Can someone please turn this into a set of equations? Especially this part:

loss.backward()
with torch.no_grad():
    a.sub_(lr * a.grad)
    a.grad.zero_()

I'm having a hard time understanding it.

1 Like

(Farid Hassainia) #21

a is the weight tensor that our model learns during training (y = x @ a); it also stores its gradient in a.grad.
a.grad is an attribute of the a tensor where the gradient of a is stored.

a.grad is computed on each call to loss.backward(). Then a is updated like this:
a = a - lr * a.grad
which can be written like this:
a -= lr * a.grad
and in PyTorch it is written like this:
a.sub_(lr * a.grad)
It's called in-place sub() because it directly updates the a tensor in place.

(By the way, if you see a function whose name ends with _ , like sub_(), it is the in-place version of its corresponding function, like add and add_. It's a PyTorch convention.)

Once we finish updating the a tensor, we have to reset a.grad to zero (a.grad.zero_()) before calling the next loss.backward(); otherwise PyTorch would accumulate the new gradients on top of the old ones.

As for with torch.no_grad():, we use it to ask PyTorch to stop tracking operations for gradient computation (a.grad is already calculated by the loss.backward() call) while we are updating the a tensor.
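To make this concrete, here is a minimal runnable sketch of the whole loop described above (a hedged illustration: the data generation mirrors the lesson's y = x @ a example, while a_true, the mse helper, and the lr value are names and choices made up for this sketch):

import torch

# Toy linear data in the spirit of the lesson's y = x @ a example
n = 100
x = torch.ones(n, 2)
x[:, 0].uniform_(-1., 1.)
a_true = torch.tensor([3., 2.])          # hypothetical "true" parameters
y = x @ a_true + torch.randn(n) * 0.1    # noisy targets

a = torch.randn(2, requires_grad=True)   # the weight tensor being learned
lr = 1e-1                                # assumed learning rate

def mse(y_hat, y):
    return ((y_hat - y) ** 2).mean()

for t in range(100):
    loss = mse(x @ a, y)      # forward pass: compute the loss
    loss.backward()           # backward pass: fills a.grad
    with torch.no_grad():     # do not track the update itself
        a.sub_(lr * a.grad)   # in-place SGD step: a <- a - lr * a.grad
        a.grad.zero_()        # reset the gradient before the next backward()

After enough iterations, a should end up close to a_true.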

1 Like

(Vatsal) #23

Thanks a lot, I spent like 30 minutes trying to figure out why I couldn't download. Appreciate it!

0 Likes

(Max) #24

Hi all,

I have one quick question regarding the update() function. I've been trying to wrap my head around this concept.

Question:

How do we know that if we move the whole thing downwards, the loss goes up and vice versa?

Appreciate any insights. Thank you.

0 Likes

#25

I believe the learned coefficients in the linear regression example will tend to (3, 2.5), presumably because the torch.rand(n) noise is uniform with mean 0.5 and gets absorbed into the intercept coefficient (2 + 0.5 = 2.5).
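For anyone who wants to verify this, here is a quick sanity check (a sketch assuming the lesson's data generation; n, a_true, and the use of torch.linalg.lstsq are choices made here):

import torch

n = 10000
x = torch.ones(n, 2)
x[:, 0].uniform_(-1., 1.)
a_true = torch.tensor([3., 2.])
y = x @ a_true + torch.rand(n)   # uniform noise in [0, 1), mean 0.5

# Closed-form least-squares fit; the solution should be close to (3, 2.5)
a_hat = torch.linalg.lstsq(x, y.unsqueeze(1)).solution.squeeze()
print(a_hat)   # roughly tensor([3.0000, 2.5000])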

0 Likes

(Milad Dakka) #26

So, I am using Mozilla Firefox, and it seems I can download the links using the code in the tutorial, but am I supposed to save the file in a particular directory? Or is there a different JavaScript snippet for Firefox to download the links?

0 Likes