What kinds of topics can we expect in part two of the course next spring? How will it be different than Part 2 version 2?
How did you learn to be such a wonderful public speaker? How did you learn to educate in the way that you do?
Do you have a favorite architecture for resource-constrained environments?
Would you just use a smaller ResNet? Or is there another architecture you prefer when you need to do inference really quickly?
How would you teach deep learning to a 13-year-old?
How do I retrain a model, after creating it from one dataset, on a small amount of new data and new classes?
For example, say I created a model with 5 classes and 50 data instances per class. Now I want to retrain it with new data and a new class, say 5 data points of a 6th class, without training from scratch (i.e., without building a whole new DataBunch with all the data/classes)?
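One common approach to the question above, sketched in plain PyTorch (not fastai-specific; the model, layer sizes, and helper name here are illustrative assumptions): keep the trained body of the network, replace the final linear layer with one that has an extra output, copy the already-learned class weights into it, and fine-tune on the small new dataset.

```python
import torch
import torch.nn as nn

def expand_classifier(old_head: nn.Linear, n_new: int) -> nn.Linear:
    """Return a new final layer with n_new extra classes, keeping the
    weights already learned for the old classes."""
    new_head = nn.Linear(old_head.in_features, old_head.out_features + n_new)
    with torch.no_grad():
        # First rows/entries correspond to the original classes.
        new_head.weight[: old_head.out_features] = old_head.weight
        new_head.bias[: old_head.out_features] = old_head.bias
    return new_head

# Illustrative toy model "trained" on 5 classes, extended to 6.
body = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 5)            # pretend this is already trained
head = expand_classifier(head, 1)  # now 6 outputs, old weights preserved
model = nn.Sequential(body, head)

# Fine-tune on the new (small) dataset; optionally freeze the body
# so the few new data points only adjust the classifier head.
for p in body.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
```

With very few examples of the new class, heavy augmentation and a low learning rate usually matter more than the exact surgery on the head.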
What’s your typical day like? How do you manage your time across so many things that you do?
You mentioned you used to spend 50% of your time learning new things every day since you were 18. Now you spend 90% of your time doing that. My brain gets toasted after 2 hours of focusing with intensity. Any tips on how you keep it up for so many hours?
Can you tell us about the superbike in your Twitter cover photo?
(Is that you? :D)
The library and course are already amazing. What further developments is the fastai team working towards? Can you give us a little teaser?
What would be an ideal learning path for a person like me who aspires to be a great DL practitioner but has a day job and only 2-3 hours per day to invest in learning/practising? How can I take what I have learnt in fast.ai v3 Part 1 to the next level?
How do you see Google’s AutoML and similar services affecting DL practitioners in the future?
Thank you so much for doing an AMA. What are the greatest influences on your thinking? Where do you find your inspiration, and what are some important principles that guide you in work and in life?
When will the international fellowship for Part 2 be announced?
When will the international fellowship for the machine learning course be announced?
Isn’t DL more of a plug-and-play affair (adding tricks, changing a few things, substituting one component for another), with people trying to reason it out afterwards?
Isn’t that the reverse of how science actually works (theory first)?
If I may speak, @ecdrid, I think scientific laws are determined by extensive experimentation as well.
So in both cases it’s experimentation, “plug and play” driven by intuitive or counter-intuitive approaches.
In Part 1 v2 there were many datasets from Kaggle, and Kaggle was recommended as a great way to learn DL.
In Part 1 v3, only one(?) dataset was from Kaggle, and I don’t recall hearing a recommendation to do Kaggle competitions. What’s the reason behind that?
There is an opposing view: Nassim Taleb’s book Antifragile argues that science often happens the other way around, with the practice coming before the theory.
- How to design custom loss functions.
- How to validate whether a new loss function is suitable.
- How to combine multiple loss functions and validate that they work well together.
- After understanding the problem, how to tell that a given loss function won’t work on its own and should be combined with another.
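The questions above can be made concrete with a minimal PyTorch sketch of one common pattern: combining loss functions as a weighted sum, logging each component separately so you can check empirically that neither term dominates or stalls. The class name, weights, and choice of component losses are illustrative assumptions, not from any particular library.

```python
import torch
import torch.nn as nn

class CombinedLoss(nn.Module):
    """Weighted sum of several loss functions.

    Logging the individual components (not just the total) is a simple
    way to validate a combination: watch that each term keeps decreasing
    and that one weight doesn't make a term negligible.
    """
    def __init__(self, losses, weights):
        super().__init__()
        self.losses = list(losses)
        self.weights = list(weights)

    def forward(self, pred, target):
        parts = [w * loss(pred, target)
                 for w, loss in zip(self.weights, self.losses)]
        return sum(parts)

# Illustrative example: blend L1 and MSE for a regression target.
criterion = CombinedLoss([nn.L1Loss(), nn.MSELoss()], weights=[0.5, 0.5])
pred = torch.tensor([1.0, 2.0])
target = torch.tensor([1.5, 2.0])
loss = criterion(pred, target)
# L1 = 0.25, MSE = 0.125, so the weighted sum is 0.1875.
```

The weights are hyperparameters; a rough heuristic is to scale them so each component contributes a comparable magnitude at the start of training, then tune from there.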