Ask Jeremy anything

(Xu Fei) #62

Will you implement a version of the AutoML method in the future?

(Bharadwaj Srigiriraju) #63

here’s something interesting: Train DNN to play Atari on a personal computer

(Gerardo Garcia) #64

When are @Rachel and/or @Jeremy coming to Houston?
I would like to invite you to our Data Science Meetup.

(Sanyam Bhutani) #65

Best advice for a 20-year-old undergrad aspiring to be a DL Researcher?

(rkj) #66

Are there any projects you need help with?

(Sarada Lee) #67

Just wondering how quickly you can train on ImageNet and CIFAR10 on a Google TPU? :thinking:

(Mirodil) #68


What is the difference between Cyclical Learning Rates and Stochastic Gradient Descent with Restarts, if both of them adjust the learning rate? When would you use one over the other?
Thank you.
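For anyone comparing the two, here is a minimal sketch of the schedules themselves (my own illustration, not fastai's implementation): CLR (Smith) varies the LR linearly up and down within a cycle, while SGDR (Loshchilov & Hutter) anneals it with a cosine curve and then jumps it back up at each restart. The step sizes and LR bounds below are arbitrary example values.

```python
import math

def clr_triangular(step, step_size, base_lr=1e-4, max_lr=1e-2):
    # Cyclical Learning Rate, "triangular" policy: the LR ramps linearly
    # from base_lr up to max_lr and back down over 2*step_size steps.
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

def sgdr(step, cycle_len, max_lr=1e-2, min_lr=1e-4):
    # SGDR: cosine annealing from max_lr down to min_lr within each cycle,
    # then an abrupt "warm restart" back to max_lr at the cycle boundary.
    t = (step % cycle_len) / cycle_len
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * t))

# CLR peaks mid-cycle; SGDR starts each cycle at its peak and decays.
print(clr_triangular(100, 100))  # top of the first CLR cycle: 0.01
print(sgdr(0, 100))              # start of an SGDR cycle: 0.01
```

The practical difference is the shape: SGDR spends most of each cycle at a low LR (good for settling into a minimum, and the restart can kick the optimizer into a flatter one), while triangular CLR spends equal time rising and falling.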

(Nick) #69

I see that there is a new repo containing notebooks which cover some of the library concepts from scratch. Is it for an upcoming course? :slight_smile:

(Junxian) #70

I just checked the link, quite interesting! Maybe you should start a new thread, so it becomes more visible. Maybe we can all help.


(Nick) #71

@sgugger could you tell us, please?

(Ronaldo da Silva Alves Batista) #72

I think it is. In the fastai GitHub thread, someone from Jeremy’s team mentioned that they are planning a new version of the course starting next October, with a library rewritten from scratch and a changed API. I don’t know the details, but as I remember they are still using PyTorch.


Yes, it’s the beginning of the new version of the library, though it’s at a very early stage for now.
I think Jeremy will create a dedicated topic for this once we have something a bit more consistent to show, to ask for feedback and help.

(Jeremy Howard) #75

There’s a forum for it: #fastai-dev

(Jason Antic) #76

What are the future plans for this course? Will there be a Part 3 :smiley: ?

If there is a part 3 I would really really love to take it.

I just saw this question being addressed on the lesson 14 video so I just wanted to chime in with enthusiastic support for more! @jeremy, @rachel - Just in case you haven’t heard it enough yet.

Though, perhaps, this doesn’t necessarily have to be literally in the same format, or even a class at all. Part 2 is very different from Part 1 in that it leaves a lot more up to the students and seems to take more of a “teach how to fish” approach (read the papers!).

What has occurred to me is that this community is the perfect platform to run with new papers, discuss them, and get them implemented. There are talented students, an established forum, a great code base with actual users, and great leadership. But Jeremy and Rachel can’t do it all. And there are already great contributors doing this; it just seems rather informal at this point.

I wonder if there’s any benefit to having a dedicated space that tackles new papers (and maybe overlooked old ones) on an ongoing basis, with the useful results perhaps making it into the codebase. To me this would be a great “continuing education” (a Part 3, essentially), and would almost certainly benefit the community in many ways. I was planning to go out on my own after finishing Parts 1 and 2 and implement papers in isolation, with a blog to go along with it, but it seems it’d be much more effective to have a dedicated effort here.

(Jeremy Howard) #77

@jsa169 the #deep-learning forum is a great place to have this continuing discussion. Or you can join the #fastai-dev effort, which naturally incorporates a lot of research etc. Happy to hear other specific suggestions, or feel free to try things yourself that you think might help.