Live stream of Leslie Smith interview tonight

Good news everyone - my interview tonight with researcher Leslie Smith will be streamed live at 6:30pm Pacific time, and also recorded.

You can use the YouTube Live comments to ask questions during the discussion.

9 Likes

Sorry, new URL just posted: https://youtu.be/fvMfaVLYIzw

2 Likes

Hi Jeremy, can you explain how you used pretrained networks on 3D lung scans? Thanks :slight_smile:

I did a quick write-up of my notes during the interview and posted it on Medium: https://medium.com/@tank671/notes-from-the-conversation-between-jeremy-howard-and-leslie-smith-7ad7ce732c4b

Comments welcome! It’s the first piece of writing I’ve put out on the internet, so it was posted amidst a lot of blogging anxiety. :zipper_mouth_face:

6 Likes

Kudos for getting over it :+1:

In case anyone missed it, James Dellinger wrote about the highlights of this interview and what Leslie had to say:

If you are on Twitter, I recommend you tag Leslie so he is aware of your blog post.


Finally, I had a chance to revisit this interview. The following are my short notes:

I watched the interview through the live stream and got a chance to ask questions and have them answered! :slightly_smiling_face:

Hiromi: Thank you so much for broadcasting this. Opportunities like this are hard to come by for some of us.

Abhishek Sharma​: If there is one researcher everyone knows in FastAI community, it’s Leslie Smith.

:heavy_plus_sign:100

Comments and questions that I asked:

  1. Regarding Leslie’s few-shot learning research direction:
    Some of us did a literature search around this “incremental learning” problem. The goal is to be able to learn as you go - online learning of new classes. For Leslie’s idea, I don’t think we have yet seen anything where the same classes are used in each epoch but with a stage-wise increase in batch size (see the sketch after this list). I believe Jeremy’s idea is related to curriculum learning. We couldn’t find work on class-incremental learning (adding new classes as we run more batches).
  2. The “Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour” paper from Priya Goyal, et al. (FAIR) is full of little tricks. What are your favorites? What is something we should know, but many are still not aware of, that really makes training easier?
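To make point 1 concrete, here is a minimal PyTorch sketch of what “same classes in each epoch, stage-wise increase in batch size” could look like. This is just my own illustration of the idea - the toy dataset, model, and schedule are all made up, not anything Leslie or Jeremy actually described:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for a real one; shapes and class count are arbitrary.
xs = torch.randn(1024, 20)
ys = torch.randint(0, 4, (1024,))
dataset = TensorDataset(xs, ys)

model = torch.nn.Linear(20, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

# Stage-wise schedule: identical data/classes in every stage, only the batch size grows.
for batch_size in [32, 64, 128, 256]:
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for xb, yb in loader:  # one epoch at this batch size
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```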

Papers added to my arXiv Sanity reading list:

  1. Minnorm training: an algorithm for training over-parameterized deep neural networks by Yamini Bansal et al.

That’s all for now. Happy Thanksgiving!

2 Likes

Congrats on getting it out there! :smiley: I think it’s a great summary, and I also appreciated the personal notes.

3 Likes

Thank you for the very interesting interview! :smiley:

I compiled a short list of the mentioned tricks:

If I missed another good trick or misinterpreted something, please reply! :slight_smile:

Kind regards
Michael

4 Likes