AMA Interview with Jeremy | Chai Time Data Science Podcast

Right now I am using fastai in a lot of experiments in NLP, computer vision, and tabular data. I am a little hesitant to use it in production, given the changing APIs and rewrites. Will the “March 2020” version be production-ready, and will they provide commercial support?

Awesome!!

I just want to know Jeremy’s views on this one: https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

Thank you for the great initiative @init_27. Given the rise of chatbots and conversational agents, I want to know what methods Jeremy recommends for slot extraction from user-typed text. Are there any new techniques involving transfer learning being used in this field?

Thanks

This is great! I have tons of questions :sweat_smile:, but I’ll state one that might be most relevant to the fast.ai community:

A lot of folks out here are not from traditional Data Science or AI/ML backgrounds. However, the vast majority of research positions, as well as jobs, still require formal training or pedigree. Does he have any thoughts on how, as a community, we can push for change? I understand that exceptional work speaks for itself, as it has for many in the community, but I’m curious whether he thinks more could be done.

Thanks a lot @init_27. Can you ask for Jeremy’s opinion on unsupervised learning when it comes to time series data? There is a lot of industrial equipment that could benefit from unsupervised learning applied to sensor data, but I haven’t noticed much machine learning progress in this sector. Jeremy might have some thoughts on this.

Thanks to Sanyam and Jeremy for doing this!

What are Jeremy’s thoughts on AGI?
Will fast.ai start teaching Reinforcement Learning at some point?
What do I do if I don’t have ideas for interesting projects?

Question: what does your typical working day look like?

After posting the questions here, the USF website released some info about the upcoming Part 1 course. It’s still nice to ask Jeremy about this anyway; I guess this is just pre-work: https://www.usfca.edu/data-institute/certificates/deep-learning-part-one :slight_smile:

Great questions so far! My small contribution:

You’ve said it often takes 50 tries to get your model to work. How do you maintain the will to keep going after 49 failures?

I know I am not Jeremy, but as a student trying to do this (where all that I know comes from Jeremy’s lessons), I experience this often when trying something new or an experimental architecture. I get myself into a challenge mindset, where knowing I’ve solved the problem (even if it’s two months later) is much better than not doing it at all. I often take mental breaks if I notice the roadblock is too big, and revisit it a week or two later. But being challenge-oriented and getting fueled by it is how I’ve learned to cope with that. Hope it helps; I’m also curious about Jeremy’s insight on this :slight_smile:

Also, remembering all the little challenges I’ve already faced and overcome, to see how far I’ve come, is helpful for that positive mindset.

@muellerzr, thank you for that great answer! Once I get enough successes that I can “know” I will succeed eventually, that strategy will work for me. Andrew Shaw also mentioned the method of taking breaks. I also love a challenge, especially if the technique is novel.

One of the things that is a bit spooky about DL is that you can’t really know with any certainty what will work. That is both threatening and stimulating. In all the other software projects I’ve done before, the relation between the code and the final outcome was obvious, and the computer only made it go faster.

I have a few questions for now:

  1. What is your opinion on specialized hardware like TPUs for both training and inference? Do you think that this is the future of hardware for deep learning?

  2. What do you think are some fundamental research questions that still haven’t been addressed in the field of deep learning applied to computer vision?

  3. IIRC you have mentioned that there is still a lot of work to be done on transfer learning, and that this is something the DL community has not focused on. You also had similar views regarding data augmentation. Given many recent papers focusing on training smarter, not longer, how have your views on the underappreciation of these areas changed?

Thought of another question:
With the incremental improvements in optimizers (RAdam, Lookahead, Ranger, PAdam, Yogi, AdaBound, LAMB, SAdam, etc.), do you think we will at some point replace Adam and SGD with a new default optimizer? Do you think it will be based on a combination of recent advances, or will it require a new way of thinking about optimizers?
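For readers curious what “a combination of recent advances” can look like in practice, here is a minimal sketch of the Ranger-style pairing: a Lookahead wrapper (Zhang et al., 2019) around an inner RAdam optimizer. It assumes a recent PyTorch that ships `torch.optim.RAdam`; the `Lookahead` class below is a simplified illustration, not the actual Ranger implementation.

```python
import torch
from torch import nn
from torch.optim import RAdam


class Lookahead:
    """Simplified Lookahead (Zhang et al., 2019): slow weights periodically
    interpolate toward the fast weights updated by an inner optimizer."""

    def __init__(self, base_optimizer, alpha=0.5, k=6):
        self.base = base_optimizer
        self.alpha = alpha          # slow-weight interpolation step size
        self.k = k                  # sync slow weights every k fast steps
        self.counter = 0
        # Snapshot the current parameters as the initial slow weights.
        self.slow = [
            [p.detach().clone() for p in group["params"]]
            for group in base_optimizer.param_groups
        ]

    def zero_grad(self):
        self.base.zero_grad()

    def step(self):
        self.base.step()            # one fast step with the inner optimizer
        self.counter += 1
        if self.counter % self.k == 0:
            with torch.no_grad():
                for group, slow_group in zip(self.base.param_groups, self.slow):
                    for p, s in zip(group["params"], slow_group):
                        s.add_(p - s, alpha=self.alpha)  # slow += alpha * (fast - slow)
                        p.copy_(s)                       # reset fast weights to slow


# "Ranger"-style combination: Lookahead wrapped around RAdam.
model = nn.Linear(10, 2)
opt = Lookahead(RAdam(model.parameters(), lr=1e-3))

x, y = torch.randn(32, 10), torch.randn(32, 2)
for _ in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

Part of the appeal of this style of combination is that Lookahead is orthogonal to the inner update rule, so it can wrap RAdam, plain SGD, or whatever becomes the next default.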

@init_27 Good luck for the interview tomorrow! I am keen to hear how it went! Please let us all know ASAP! :slight_smile:

Also, will you be posting the questions here today? :slight_smile:

Thanks so much Aman! I doubt I’ll be able to sleep tonight :grinning:

Yes, I’m working on the questions right now. :slight_smile: I will post them in a few hours.

Haha! I am having some sleep issues too because I am so excited, so I can only imagine what you must be going through!

You deserve this. Go in there, and I am sure all of us will get one of the greatest interviews/podcasts to listen to!! :slight_smile:

Hi Everyone!
I really wanted to thank everyone for the great questions, and of course a huge thanks to Jeremy for doing the interview :slight_smile:

Editing, like any task that runs on a GPU, might take me a little while. But I’ll post here as soon as the episode is released. Thanks for all the amazing support :smiley:

@init_27 how’s the editing going?

Hi Jeremy, it’s all set. I’m releasing the vid + audio today. (Sun-9AM PT)

Thanks so much Everyone for the huge support! And of course a huge thanks to Jeremy.

This interview is a dream accomplished for the series, so I’m still pinching myself and really excited to finally publish it.

I’ve been sitting on the interview for a few days now since I wanted to release it as my 27th interview. I think everyone here will recognise why the number is so dear to me :slight_smile:

I need to point out that, in the interest of time, and as much as I wanted to include everything, I skipped a few questions, along with a few that weren’t really suitable to be asked on the podcast.

Even so, it’s really a dream fulfilled for me. Thanks again to the fast.ai family, where my interview series was so nicely received that I somehow managed to keep doing it; I owe it to you all. (The next dream would be to take the course in person and meet all of my amazing friends from the forums, but we’ll see :slight_smile: )

Here are the interview links:

Audio: https://anchor.fm/chaitimedatascience/episodes/Interview-with-Jeremy-Howard--fast-ai--Kaggle--Machine-Learning-Research-e9ddbh

Notes: https://sanyambhutani.com/interview-with-jeremy-howard/

Video: https://www.youtube.com/watch?v=205j37G1cxw&feature=emb_title
