AMA Interview with Jeremy | Chai Time Data Science Podcast

Thanks so much everyone for the amazing questions.
I think, in the interest of everyone (since the idea is to make the interview informative and enjoyable for all), I'll post my draft of the questions (including credited questions from the AMA) here on the 17th (one day before the interview).

I'll continue collecting questions until 18th Nov, 8 AM PT. Everyone can decide and let me know if they would want me to skip or add any questions from the draft posted here on the 17th. :slight_smile: :tea:


Right now I am using fastai in a lot of experiments in NLP, computer vision, and tabular data. I am a little hesitant to use it in production, given the changing APIs and re-writes. Would the "March 2020" version be production-ready, and will fastai provide commercial support?


I just want to know Jeremy’s views on this one:


Thank you for the great initiative @init_27. Given the rise of chatbots and conversational agents, I want to know what methods Jeremy recommends for doing slot extraction from user-typed text. Are there any new techniques involving transfer learning being used in this field?


This is great! I have tons of questions :sweat_smile:, but I’ll state one that might be most relevant to the community:

A lot of folks out here are not from traditional Data Science or AI/ML backgrounds. However the vast majority of research positions as well as jobs still require formal training or pedigree. Does he have any thoughts on how, as a community, we can push for change? I understand that exceptional work speaks for itself, as it has for many in the community; but I’m curious if he thinks more could be done.


Thanks a lot @init_27. Can you ask for Jeremy's opinion on unsupervised learning when it comes to time series data? There is a lot of industrial equipment that could benefit from unsupervised learning applied to sensor data, but I haven't noticed much machine learning progress happening in this sector. Jeremy might have some thoughts on this.

Thanks to Sanyam and Jeremy for doing this!

What are Jeremy’s thoughts on AGI?
Will you start teaching Reinforcement Learning at some point?
What do I do if I don’t have ideas for interesting projects?

Question: what does your typical working day look like?


After posting the questions here, the USF website released some info about the upcoming Part 1 course. It's still nice to ask Jeremy about this anyway; this is just pre-work, I guess. :slight_smile:

Great questions so far! My small contribution:

You’ve said it often takes 50 tries to get your model to work. How do you maintain the will to keep going after 49 failures?


I know I am not Jeremy, but as a student trying to do this (where all that I know are Jeremy's lessons), I experience this often when trying something new or an experimental architecture. I get myself into a challenge mindset, where knowing I've solved the problem (even if it's two months later) is much better than not doing it at all. I do often take mental breaks if I notice the roadblock is too big, and revisit it sometimes a week or two later. But being challenge-oriented and getting fueled by it is how I've learned to cope with that. Hope it helps, and I'm also curious about Jeremy's insight on this. :slight_smile:

Also, remembering all the little challenges I faced along the way, as a reminder of how far I've come, helps with that positive mindset.


@muellerzr, thank you for that great answer! Once I get enough successes that I can “know” I will succeed eventually, that strategy will work for me. Andrew Shaw also mentioned the method of taking breaks. I also love a challenge, especially if the technique is novel.

One of the things that is a bit spooky about DL is that you can't really know what will work with any certainty. That's both threatening and stimulating. In all the other software projects I've done before, the relation between the code and the final outcome was obvious, and the computer only made it go faster.


I have a few questions for now:

  1. What is your opinion on specialized hardware like TPUs for both training and inference? Do you think that this is the future of hardware for deep learning?

  2. What do you think are some fundamental research questions that still haven’t been addressed in the field of deep learning applied to computer vision?

  3. IIRC you have mentioned that there is still a lot of work that needs to be done regarding transfer learning, and that this is something the DL community has not focused on. You also had similar views regarding data augmentation. Given many recent papers focusing on training smarter, not longer, how have your views regarding the underappreciation of these fields changed?


Thought of another question:
With the incremental improvements in optimizers (RAdam, Lookahead, Ranger, PAdam, Yogi, AdaBound, LAMB, SAdam, etc.), do you think that at some point we will replace Adam and SGD with a new optimizer as the default? Do you think it will be based on a combination of recent advances, or will it require a new way to think about optimizers?
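For readers unfamiliar with the Lookahead idea mentioned in the question above, here is a minimal, purely illustrative pure-Python sketch on a one-dimensional toy problem. All names and hyperparameter values here are my own assumptions, not anything from the interview or the original papers' reference code: an inner "fast" optimizer (plain SGD) takes `k` steps, then the "slow" weights are pulled toward the fast ones.

```python
# Toy objective: f(w) = (w - 3)^2, whose minimum is at w = 3.
def grad(w):
    """Gradient of f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def lookahead_sgd(w0, lr=0.1, k=5, alpha=0.5, outer_steps=30):
    """Illustrative Lookahead wrapper around scalar SGD.

    lr, k, alpha, and outer_steps are arbitrary illustrative choices.
    """
    slow = fast = w0
    for _ in range(outer_steps):
        for _ in range(k):                  # k "fast" SGD steps
            fast -= lr * grad(fast)
        slow += alpha * (fast - slow)       # pull slow weights toward fast
        fast = slow                         # reset fast weights to slow
    return slow

w = lookahead_sgd(w0=0.0)                   # converges toward 3.0
```

The appeal, as I understand it, is that the slow-weight averaging smooths out the variance of the inner optimizer's trajectory, which is why combinations like Ranger (RAdam + Lookahead) pair it with an adaptive inner step.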

@init_27 Good luck for the interview tomorrow! I am keen to hear how it went! Please let us all know ASAP! :slight_smile:

Also, will you be posting the questions here today? :slight_smile:


Thanks so much Aman! I doubt if I’ll be able to sleep today :grinning:

Yes, I’m working on the questions rn. :slight_smile: will post them in a few hours.


Haha! I am having some sleep issues too cause I am so excited so I can only imagine what you must be going through!

You deserve this, go in there and I am sure all of us will have one of the greatest interviews/podcasts to listen to!! :slight_smile:

Hi Everyone!
I really wanted to thank everyone for the great questions, and a huge thanks to Jeremy for doing the interview, of course. :slight_smile:

Editing, like any task that runs on a GPU, might take me a little while. But I'll post here as soon as the episode is released. Thanks for all the amazing support! :smiley:


@init_27 how’s the editing going?..

Hi Jeremy, it’s all set. I’m releasing the vid + audio today. (Sun-9AM PT)