Another treat! Early access to Intro To Machine Learning videos


(Aditya) #291

Thanks a lot for the Tutorial Videos on Machine Learning…
Haven’t watched them all (all 8) but they are really cool…

Will watch them all together as a stack this Christmas, at least twice, to make sense of them…


(Vijay Narayanan Parakimeethal) #292

Absolutely great! I have just started watching them now. Will be keen to complete this and get started with Part 2 version of both DL and ML this coming February / March 2018. Hopeful that the international fellowships will still be available for part 2 of DL.


(Miguel Perez Michaus) #293

The ethics part is so important. In my experience it all usually boils down to finding the inner strength to fight and do the correct thing. Unfortunately, this search for inner strength happens at an unconscious level, and if fear or group bias wins the battle… then intellectual questioning will not even happen.

Anyway, I like to keep in mind Albert Camus’s quote from “The Plague”: “There are more things to admire in humans than to despise.”

Thank you @jeremy for all this and for making that “worth being admired” part of the world evident!


(Jeremy Howard) #294

Absolutely!


(sergii makarevych) #295

Thanks Jeremy for these amazing courses. They completely blew my mind and changed my attitude to ML/DL tasks at work and to competitions on Kaggle. And it’s not even about PyTorch in the first place; it’s about making things interesting and having fun in everything you do. Thanks master. :man_student:


(Nimish Sule) #296

Just started this today and it is just great! I only regret not starting it earlier. It would have answered many of the questions I had to search for on my own. This is best done in parallel with the deep learning course. Thank you so much.


(Jeremy Howard) #297

Thanks for the feedback - I’d be interested to hear when you’re done, what you think a good recommended order of lessons is. It would be nice for us to create some kind of “lesson plan” for students.


(Karthik Ramesh) #298

@jeremy Thank you for one of the best courses ever.
I have been lucky to have some of the best teachers in the world over the years, and you are right up there amongst them.
You have helped shape my entire outlook on ML/DL; I see it in a totally different light now, and the best part… you made it so much fun!
Thank you :slight_smile:


#299

Amazing material!
Everything is useful, from background stuff like AWS, tmux, vim, and Jupyter notebooks to the fastai library.
@jeremy Thank you very much.


(Tuatini GODARD) #300

Just as a little feedback, I found lesson 11 to be a bit more difficult than all the rest, when you explain the maths behind what you did for text classification. Maybe it’s only me, but it felt as if we went a step further into the theoretical side of why it works, whereas all the other lessons were, imo, more “practical and straightforward”. Hope it helps, somehow :slight_smile:


(Nimish Sule) #301

Sure :+1: . It would be my pleasure to contribute to the lesson plan.


(Jeremy Howard) #302

Yes that makes sense. We can work on creating written materials to accompany this that make it more accessible…


(Alex) #303

Thanks @jeremy. Those were absolutely amazing lessons. I couldn’t even have imagined getting this treat when I started.
For me these lessons turned out to be an essential complementary part to the DL lessons, helping me understand things from top to bottom and vice versa.

I was following most of them the same weeks they were released, and I also believe they are best taken in parallel with the DL lessons.


(Aditya) #304

For pandas plotting, here is the documentation (quite good):

https://pandas.pydata.org/pandas-docs/stable/visualization.html
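For instance, a minimal sketch using that API (the DataFrame and column names here are made up for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display

import pandas as pd

# Tiny made-up DataFrame; pandas wraps matplotlib for quick plots
df = pd.DataFrame({"x": range(5), "y": [v ** 2 for v in range(5)]})

ax = df.plot(x="x", y="y", kind="line", title="y = x^2")
ax.figure.savefig("plot.png")  # write the figure to a file
```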


(Aditya) #305

  • Can someone explain the idea of why, after creating a suitable validation set, plotting the test set’s and validation set’s scores will lie on/around a straight line, and why, if the validation set is bad, the points will be away from the line?
    Couldn’t understand this idea/concept…
  • Also, can someone point to a good resource for interpreting univariate relationships, just like Jeremy did using ggplot (lesson 4)? (In my understanding we can only tell whether something is going up or down in terms of value based on one other feature, and it might be affected by other correlated features/columns.)
  • Is it worth looking at an image dataset (say, from medical imaging) even though I don’t have any knowledge of the medical field? Do we have something similar to what Jeremy did in Machine Learning (lesson 4) in Deep Learning too? Because building an efficient model using fast.ai is very easy (thanks for that), but how do we do the analysis in the case of images?

Thanks in advance…


#306

You want your validation set to be reflective of the data your model will run on in production (the test set, in the case of Kaggle comps), so that when you monitor your progress during training you can gauge whether your model is learning things that apply to scenarios outside your train set, ideally the ones you would encounter in production.

Rachel wrote a great post on this, linked below, that goes into many very useful details not covered anywhere else that I am aware of:
http://www.fast.ai/2017/11/13/validation-sets/
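The “straight line” idea asked about above can be sketched like this: train a handful of models, score each on both the validation set and the test set, and check that the two move together. All the numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical scores of five models on our validation set and on the
# test set (e.g. the Kaggle public leaderboard)
valid_scores = np.array([0.80, 0.83, 0.85, 0.88, 0.90])
test_scores = np.array([0.78, 0.81, 0.84, 0.86, 0.89])

# If the validation set is representative, the points (valid, test)
# fall near a straight line, i.e. the correlation is close to 1
r = np.corrcoef(valid_scores, test_scores)[0, 1]
print(round(r, 3))  # high correlation -> the validation set is trustworthy
```

If instead the points scatter far from a line (low correlation), improvements on the validation set do not track improvements on the test set, which is the sign that the validation set was built badly.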


(Aditya) #307

Are the applications for fast.ai version 2 available now?
@jeremy (sorry)

http://www.fast.ai/2018/01/02/diversity-2018/

Edit - It’s not for International fellows…


(Aditya) #308

Thanks for your response, Radek.
But can you shed some light on the third bullet?
Thanks in advance…


#309

With regards to analyzing the predictions of conv nets, there are some nice examples of how to go about this in the lesson 1 notebook (where we look at the images that our algorithm struggles with, the confusion matrix, etc.). I also remember seeing a nice visual analysis of model performance somewhere, done by @sermakarevich for one of the Kaggle comps, I think.

Ah here it is: Kaggle Comp: Plant Seedlings Classification
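As one concrete instance of the confusion-matrix approach mentioned above, here is a minimal sketch with made-up labels using scikit-learn:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth and predicted labels for a 2-class problem
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# Rows are true classes, columns are predicted classes; the
# off-diagonal cells show exactly which classes the model confuses
cm = confusion_matrix(y_true, y_pred)
print(cm)
```

From there you can pull out the worst-confused pairs of classes and look at those images directly, which is what the lesson 1 notebook does visually.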

There is also the really cool VGG CAM sort of thing that I believe is covered in the last lecture of p1 v2, but I do not know for sure as I have not gotten to that lecture yet (a similar approach is outlined in the last lecture of p1 v1, and it is quite impressive to be able to do this!).

In general, there are many things one can do to understand the performance of conv nets; however, the techniques Jeremy outlines specific to random forests will not translate 1-to-1 to analyzing CNNs.

Not sure if this answers your question though - if not, please let me know.


(Aditya) #311

What’s this AUC–ROC metric?

I tried to read a few posts but couldn’t understand it properly…

It’s for a hackathon competition, so I need the answer as fast as possible.
Thanks…