Ask Jeremy anything

(Zachary Thomas) #42

This is a great deep learning course! How about a data engineering course?

(Aseem Bansal) #43

Yes. Unless the data is programmatically generated, in which case the chances of having bad data are low.

(Wayne Nixalo) #44

What will fastai projects, experiments, and collaboration look like going forward, for former students?

e.g. a lot of work was done in the Part 2 forum (DAWNBench, cyclical momentum, etc.), but this is inaccessible to non-current students while the course is live.

Maybe the Alumni section will become a community research / library contribution area?

(Vinod) #45

What is the state of the bleeding edge of deep learning (and AI more generally)? What's the near-term trend of the field?

(Mohan Dorairaj) #46

Is deep learning overkill to use on tabular data? When is it better to use DL instead of classical ML on tabular data?

(Kevin Bird) #47

Have you considered having a vlog series to talk about what you are working on currently or interesting papers that you come across?

(Ravi Jain) #48

Any words on building game agents (reinforcement learning)? Any suggestions on how and where to start?

(Sophia Wang) #49

What suggestions would you give us to continue learning and using deep learning? What's the best way to collaborate with other fast.ai students moving forward?

(Bharadwaj Srigiriraju) #50

There seem to be a lot of interesting developments in the area of Quantum Machine Learning lately. What's your general perspective on the QML side of things, and how do you see it impacting us in the future?

Depending on your answer: any chance you could cover this in next year's fast.ai course, or possibly as a separate course?

(Ananda Seelan) #51

Just last month, I've seen a bunch of new methods for sentence embeddings popping up from the big players like Google, MILA, Facebook, etc. And I remember that a couple of lectures ago you mentioned moving past them and embracing task/domain-based fine-tuning of backbones rather than universal embeddings. Wouldn't a universal encoder still be more efficient, since it is a single training effort and makes transfer learning easier, whereas fine-tuning has to be repeated for every task and every new data domain? What would be the single biggest reason for universal embeddings to fail? I would like you to give more insight on this point.

(Aditya) #52

Hi Jeremy,

Will ML2 be open sourced if it's recorded?
(It would be a blessing for me, as because of this course I now understand a lot of things that I didn't previously…)

(Junxian) #53

Thanks Aseem!

(karenerobinson) #54

What is a “learnable convolution” and what is an example of a convolution that isn’t learnable?
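For context on what the question is asking, here is a minimal pure-Python sketch (hypothetical helper names, 1-D for simplicity, not from the course materials): a convolution with a fixed, hand-designed kernel is not learnable, while a "learnable" convolution treats the kernel weights as parameters fitted by gradient descent.

```python
# Minimal 1-D convolution, no padding, stride 1 (illustrative sketch).
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Non-learnable convolution: a fixed finite-difference (edge-detector)
# kernel, chosen by hand and never updated.
fixed_kernel = [1.0, -1.0]
signal = [0.0, 0.0, 1.0, 1.0, 0.0]
edges = conv1d(signal, fixed_kernel)  # spikes where the signal changes

# Learnable convolution: start from an arbitrary kernel and fit its
# weights by gradient descent so the output matches a target.
target = edges          # here we try to recover the edge detector
kernel = [0.5, 0.5]     # initial (untrained) weights
lr = 0.1
for _ in range(200):
    out = conv1d(signal, kernel)
    # gradient of mean squared error w.r.t. each kernel weight
    grads = [0.0, 0.0]
    for i, (o, t) in enumerate(zip(out, target)):
        for j in range(2):
            grads[j] += 2 * (o - t) * signal[i + j] / len(out)
    kernel = [w - lr * g for w, g in zip(kernel, grads)]
# kernel has now converged close to the fixed edge detector [1.0, -1.0]
```

The same distinction holds in 2-D: a Sobel filter is a fixed convolution, while the filters in a CNN layer are learnable because training updates them.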

(Manishankar) #55

Could you please speak about Deeplearning in Healthcare?

(Kam) #56

tabs or spaces?

(Erin Pangilinan) #57

What advice does Jeremy have on how one goes about selecting which machine learning publications, conferences, or practical workshops to apply to first, and why? NIPS, CVPR, etc.?

(WG) #58

Hold on. Did I just hear Jeremy say he didn’t go to university?

If I heard right, I would be interested in how you learned what you teach today.

(Hamel Husain) #59

What are you going to do next? Are you going to start another company? Are you going to keep focusing on teaching deep learning and developing the library?

(Aseem Bansal) #60

@rachel Missed this one too.

(Aditya) #61

Study as much as you want, since more knowledge is always beneficial…
Plus, doing mini-projects and adapting them your own way is the general notion on these forums…