AMA Interview with Jeremy | Chai Time Data Science Podcast

Great questions so far! My small contribution:

You’ve said it often takes 50 tries to get your model to work. How do you maintain the will to keep going after 49 failures?

3 Likes

I know I’m not Jeremy, but as a student trying to do this (where all I know are Jeremy’s lessons), I experience this often when trying something new or an experimental architecture. I get myself into a challenge mindset, where knowing I’ve solved the problem (even if it’s two months later) is much better than not doing it at all. I do often take mental breaks if I notice the roadblock is too big, and revisit it a week or two later. But being challenge-oriented and getting fueled by it is how I’ve learned to cope with that. Hope it helps, and I’m also curious to hear Jeremy’s insight on this :slight_smile:

Also, remembering all the little challenges I faced along the way, and how far I’ve come, is helpful for keeping that positive mindset.

3 Likes

@muellerzr, thank you for that great answer! Once I get enough successes that I can “know” I will succeed eventually, that strategy will work for me. Andrew Shaw also mentioned the method of taking breaks. I also love a challenge, especially if the technique is novel.

One of the things that is a bit spooky about DL is that you can’t really know with any certainty what will work. That’s both threatening and stimulating about DL. In all the other software projects I’ve done before, the relation between the code and the final outcome was obvious, and the computer only made it go faster.

1 Like

I have a few questions for now:

  1. What is your opinion on specialized hardware like TPUs for both training and inference? Do you think that this is the future of hardware for deep learning?

  2. What do you think are some fundamental research questions that still haven’t been addressed in the field of deep learning applied to computer vision?

  3. IIRC you have mentioned that there is still a lot of work that needs to be done regarding transfer learning, and that this is something the DL community has not focused on. You also had similar views regarding data augmentation. Given the many recent papers focusing on training smarter, not longer, how have your views regarding the under-appreciation of these areas changed?

3 Likes

Thought of another question:
With the incremental improvements in optimizers (RAdam, Lookahead, Ranger, PAdam, Yogi, AdaBound, LAMB, SAdam, etc.), do you think that at some point we will replace Adam and SGD with a new optimizer as the default? Do you think it will be based on a combination of recent advances, or will it require a new way of thinking about optimizers?
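
(For anyone unfamiliar with the names above, several of these advances compose nicely. Below is a minimal, unofficial sketch of the Lookahead idea wrapping any base PyTorch optimizer, which is roughly how Ranger combines RAdam with Lookahead. The wrapper class, hyperparameters, and toy model are just for illustration; this is not fastai’s or the paper’s reference implementation.)

```python
import torch

class Lookahead:
    """Rough sketch of the Lookahead idea: keep 'slow' copies of the weights
    and, every k inner-optimizer steps, move them a fraction alpha toward the
    'fast' weights, then reset the fast weights to the slow ones."""
    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base, self.k, self.alpha = base_optimizer, k, alpha
        self.step_count = 0
        # one slow copy per parameter, grouped like the base optimizer's groups
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in base_optimizer.param_groups]

    def zero_grad(self):
        self.base.zero_grad()

    def step(self):
        self.base.step()                    # normal "fast" update (e.g. Adam/RAdam)
        self.step_count += 1
        if self.step_count % self.k == 0:   # every k steps, sync slow and fast
            for group, slow_group in zip(self.base.param_groups, self.slow):
                for p, slow_p in zip(group["params"], slow_group):
                    slow_p += self.alpha * (p.detach() - slow_p)
                    p.data.copy_(slow_p)

# Toy usage with a hypothetical model, just to show the wrapping:
model = torch.nn.Linear(10, 1)
opt = Lookahead(torch.optim.Adam(model.parameters(), lr=1e-3), k=5, alpha=0.5)
for _ in range(20):
    opt.zero_grad()
    loss = model(torch.randn(32, 10)).pow(2).mean()
    loss.backward()
    opt.step()
```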

@init_27 Good luck for the interview tomorrow! I am keen to hear how it went! Please let us all know ASAP! :slight_smile:

Also, will you be posting the questions here today? :slight_smile:

1 Like

Thanks so much Aman! I doubt if I’ll be able to sleep today :grinning:

Yes, I’m working on the questions right now :slight_smile: I’ll post them in a few hours.

3 Likes

Haha! I am having some sleep issues too because I am so excited, so I can only imagine what you must be going through!

You deserve this, go in there and I am sure all of us will have one of the greatest interviews/podcasts to listen to!! :slight_smile:

Hi Everyone!
I really wanted to thank everyone for the great questions, and a huge thanks to Jeremy for doing the interview, of course :slight_smile:

Editing, much like any task that runs on a GPU, might take me a little while. But I’ll post here as soon as the episode is released. Thanks for all the amazing support :smiley:

10 Likes

@init_27 how’s the editing going?..

Hi Jeremy, it’s all set. I’m releasing the vid + audio today. (Sun-9AM PT)

7 Likes

Thanks so much, everyone, for the huge support! And of course a huge thanks to Jeremy.

This interview is really a dream come true for the series, so I’m still pinching myself and really excited to finally publish it.

I’ve been sitting on the interview for a few days now since I wanted to release it as my 27th interview. I think everyone here will recognise why the number is so dear to me :slight_smile:

I need to point out that, in the interest of time, and as much as I wanted to include everything, I skipped a few questions, along with a few that weren’t really suitable to be asked on the podcast.

Even so, it’s really a dream fulfilled for me. Thanks again to the fast.ai family, where my interview series was so warmly received that I somehow managed to keep doing it; I owe it to you all. (The next dream would be to take the course in person and meet all of my amazing friends from the forums in person, but we’ll see :slight_smile: )

Here are the interview links:

Audio: https://anchor.fm/chaitimedatascience/episodes/Interview-with-Jeremy-Howard--fast-ai--Kaggle--Machine-Learning-Research-e9ddbh

Notes: https://sanyambhutani.com/interview-with-jeremy-howard/

Video: https://www.youtube.com/watch?v=205j37G1cxw&feature=emb_title

18 Likes

Great interview! I feel left out; I know 27 is in your username, but other than that, what is the significance of 27?

2 Likes

Thank you!
It’s my username and my birthday :slight_smile:

3 Likes

I also need to ask everyone a question: as you might know, the interview series is really a passion project for me.

So I’d be really thankful for any criticism or feedback. I hope to continue doing these interviews for a while longer, so I’d rather they keep getting better and more enjoyable (not implying that I don’t try my best to make them so, but I know that I can always keep improving) :slight_smile:

Thanks in advance!

2 Likes

Yes, I have some feedback! I hope transcript-like notes of the interview will be provided. They’re really helpful for people who aren’t good at listening, for example because, like me, they’re not native speakers.

1 Like

Hi Sanyam, I thought you did an outstanding job with the interview! I really enjoyed it. For me, a good interviewer is someone who knows enough to ask a good question and then gets out of the way to let the interviewee respond. That is exactly what you did.

Jeremy has a fantastic genesis story and I think I would have been even more likely to sign up for the course had I heard that first.

The only thing I can think of that you could do to make the interview better is to record in person. I realise that’s often not practical, but if you do get the chance to record in person, take it.

1 Like

Hey @dsfsffefsdfdfdsdfd!
Thanks for the suggestion. The first two episodes in the series now have subtitles that have been completely fixed and uploaded.

You can expect some blog activity starting soon, with everything getting in sync within 1-2 months (blogs will eventually, I hope, be released in sync with the videos).

Hope that helps with the watching experience. :tea:

2 Likes

Thanks so much for the kind words, Chris!

Yes, I think I can speak for the community in saying that we are all fans of Jeremy’s genius :slight_smile: I really respect his honest sharing that even for him, Kaggle was intimidating to start with.

I’d love to meet all of the amazing people I’ve interviewed in person. Right now, I’ll continue doing it from Chennai but I really wish to do a few interviews in person someday.
Thanks for listening :slight_smile:

1 Like

Yes, it’s a very good interview. I liked it more than the one I have seen with Lex Fridman. Lots of great information to take in. I’m also very excited to meet @init_27 and all the other super beings in person at the upcoming course in San Francisco.

1 Like