Update: this lesson is now complete, so please don’t add more AMA questions to the thread.
I’ll leave some time free in the lesson for you folks to ask me about anything at all! Preferably, something I may be competent to answer… Please ask your questions in this thread, and if you see a question you’d like answered, up-vote it. I will endeavor to answer the highest-voted questions.
How do you stay motivated?
I often find myself overwhelmed in this field; there are so many new things coming out that I feel I have to put in so much energy just to keep my head above the water line.
Another thing that constantly bothers me is that I feel the field is getting more and more skewed towards bigger and more computationally expensive models and huge amounts of data. I keep wondering whether, some years from now, I will still be able to train reasonable models with a single GPU, or if everything is going to require a compute cluster. In NLP we clearly see that the tendency is to keep adding more parameters and more data. In computer vision the situation seems a bit better, but some tasks, like image generation, are already out of my reach.
What are your thoughts on Minsky’s Society of Mind (quick summary of the book) and Jeff Hawkins’ ideas? Do you feel it would be beneficial for Deep Learning (as it exists today) practitioners to spend time on the Neuroscience side of the fence? For example the research done by Nancy Kanwisher and other folks in that field.
How do you homeschool young children in science in general, or math in particular? Would you share your experiences of teaching math/science to young children by blogging, or maybe even in lectures someday?
Yes, I have read Paul Lockhart’s first book several times. I really love his way of bringing math concepts to life for children. What is your practical philosophy for introducing math to kids? How do you do it from scratch for a young child like yours? I imagine it must take a lot more effort and creativity to introduce math to kids than to teach DL to adults.
One more thing: why and how do you teach young children math concepts with APL? Should APL be a tool of thought for children, and for anyone who feared math back in school but wants to learn it now out of curiosity?
On the topic of “Top-down Learning”, are there other subjects where this can be applicable? If yes, do you have any pointers regarding them? For example, how can one use this approach to learn math, physics, languages, etc. (or maybe even theoretical computer science)?
What are your thoughts on causal inference and causal modelling, e.g. DoWhy? Should it be incorporated into deep learning?
Since the course is wrapping up, what do you feel are the next steps for any student who has completed it?
The walk-thrus have been a game changer for me. The knowledge and tips you shared in those sessions are exactly the skills required to become an effective ML practitioner and to use fastai more effectively.
Have you considered making the walk-thrus a more formal part of the course, doing a separate software engineering course, or continuing the live-coding sessions between part 1 and part 2?
How do you turn a model into a business?
Specifically, how does a coder with little to no startup experience turn an ML-based Gradio prototype into a legitimate business venture?
Have you set the date(s) for the fast.ai unconference in Brisbane, so interstate/international attendees can start planning for it?
Jeremy mentioned that regex is a notation worth mastering, and that it can either be studied intensively or learned as you go. What does studying it intensively look like? How would Jeremy propose doing that?
No, not yet. Waiting to hear back from @jwuq about options for part 2 of the course…
When we learn from you, everything makes perfect sense and is enjoyable. What does your preparation pipeline look like when teaching online? What advice/tips would you give to someone who wants to explain things much more clearly?
Also as someone who’s been teaching for nearly 30 years, what are your thoughts about the future of education?
Jeremy, can you share some of your productivity hacks?
From the content you produce, it seems like you work 24 hours per day!
Similar to starting with a Random Forest and then moving on to other ML techniques when tackling tabular applications, would it be possible to share tips for tackling different AI applications? Do you have a preference for what to do first, second, third, etc., based on the application? (RecSys, Computer Vision, NLP, GANs, Reinforcement Learning, Robotics, etc.)
What are your thoughts on the ‘sentient’ AI claimed by a Google employee?
In your APL explorations have you looked into working with geometric algebra?
Would it be feasible to cover some of the latest cool computer vision techniques, like neural rendering, text-based image generation, 3D reconstruction, etc., in part 2, or are those too advanced? If the latter, can you recommend any accessible and practical learning resources for coders, similar to your course, to learn about these techniques?
I would love to get the timm models working with unet, but I’m a bit confused why it seems to be using up all the GPU memory weirdly. I opened a PR, but I don’t know what the next steps are. Where is the best place to discuss these kinds of things, and can you recommend any guides on how best to debug the models to figure out what’s wrong?