Lesson 14 AMA (Ask Jeremy Anything)


(Rachel Thomas) #1

This week we’ll have (time permitting) some ‘Ask Me Anything’ questions. Your question doesn’t have to be related to this lesson, although it should be something that you think Jeremy or Rachel might be reasonably well qualified to answer! :slight_smile: e.g. Ask about deep learning, building data-oriented startups, data-driven medicine, Kaggle, MOOC development, etc…

We’ll primarily use ‘likes’ to prioritize questions, so please vote for questions you’re interested in.


(Vishnu Subramanian) #2

Is there an opportunity to work as an intern with fast.ai remotely from India?


(Even Oldridge) #3

Will there be follow up lectures on individual topics?

I was hoping we’d cover deep learning for structured data and putting models into production but I’m guessing there’s not time to cover it.


(Rachel Thomas) #4

@even we will be covering structured data tonight


(Sam Witteveen) #5

Have you given any thought to doing a regular online broadcast (maybe monthly or biweekly) to go through new papers and new learnings? I know many people would appreciate it.


(Even Oldridge) #6

Awesome! :smiley:


(Otto Stegmaier) #7

I would pay a subscription fee for a weekly/monthly review of a new paper. Take my $$$


(Brendan Fortuner) #8

Yep this is a brilliant idea. Monthly works.


(Cody) #9

Do you have any advice for people looking to transition into working as a deep learning practitioner if they have a machine learning background (and, say, have taken a course like this) but don’t have a PhD? I feel like I’m constantly hearing both that companies want new people with competence or knowledge in DL, and also seeing companies set strict PhD requirements for those positions.


(thejaswi.hr) #10

How do we better utilize multi-GPU systems in Keras with either the Theano or TensorFlow backend? From what I could gather, you may need to split the data into batches and then train them on different GPUs, and it is not straightforward. Having a multi-GPU system makes it much easier to iterate and try different models.


(nima) #11

Are there any plans to formalize the utilities and models that have been built for this class, so practitioners can start their work faster and make their lives easier?


(alenavkruchkova) #12

there is a really cool meetup at UCSF where they get together every week and discuss recent paper(s): https://www.meetup.com/deep-learning-sf/


(David Woo) #13

What are interesting startups in san francisco that you have seen that are applying deep learning?


(Thundering Typhoons) #14

I have learnt a lot from Jeremy’s notebooks. Can Jeremy share some interesting implementations in the future?


(David Gutman) #15

@thejaswi.hr

Check this out for multiple gpu utilization in Keras:

https://medium.com/@kuza55/transparent-multi-gpu-training-on-tensorflow-with-keras-8b0016fd9012

It makes data parallelism very easy. Essentially that code just runs a copy of the model separately on each GPU, then concatenates the outputs on the CPU (so you can effectively run batch sizes that are #GPUs × larger).

Haven’t tried it on Keras 2 yet but that script works well for Keras 1 on TensorFlow.
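The data-parallel pattern described above (same weights on every device, each device gets a slice of the batch, outputs concatenated) can be sketched without any GPUs at all. Here is a minimal NumPy simulation: the "devices" run sequentially on the CPU and the model is just a matrix multiply, but the split/replicate/gather structure is the same one the linked script uses.

```python
import numpy as np

def data_parallel_forward(batch, weights, n_devices=2):
    """Simulate data parallelism: split the batch into n_devices
    slices, run the SAME weights on each slice (sequentially here;
    on real hardware each slice would go to its own GPU), then
    concatenate the per-device outputs back into one batch."""
    slices = np.array_split(batch, n_devices, axis=0)
    # Each "device" applies an identical copy of the model.
    outputs = [s @ weights for s in slices]
    # Gather step: concatenate device outputs on the "CPU".
    return np.concatenate(outputs, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))   # a batch of 8 examples
w = rng.normal(size=(4, 3))   # shared model weights
parallel = data_parallel_forward(x, w, n_devices=2)
serial = x @ w                # single-device reference
print(np.allclose(parallel, serial))  # True: results are identical
```

Because the forward pass is identical per example, the parallel and single-device results match exactly; the only thing that changes is the effective batch size you can fit.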


(Constantin) #16

As there are many topics covered in this course crying out for a mobile app, can you recommend good resources on how to deploy DL in mobile apps? More general pointers on DL deployment in production are also highly welcome.


(Dennis O'Brien) #17

How do we know when the model is doing as well as possible given the noisiness of the data? Is there a way to get an idea of the upper limit of performance?

In some domains, inter-rater agreement is used to get an estimate of “good enough”. For other problems, like predicting click-through rate, most of the (human) factors are unknowable and you just hope your model is well calibrated. Are there rules of thumb for getting an idea of the size of the signal and the noise?
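The inter-rater-agreement idea above can be made concrete with Cohen's kappa, which corrects raw agreement for chance. A minimal pure-Python sketch, using hypothetical labels from two annotators (low kappa suggests noisy labels and hence a low ceiling for any model):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters,
    corrected for the agreement expected by chance given each
    rater's label frequencies. 1.0 = perfect, 0.0 = chance level."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    # Chance agreement: probability both raters pick the same label
    # independently, summed over labels.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two human annotators.
a = ["cat", "cat", "dog", "dog", "cat", "dog", "cat", "dog"]
b = ["cat", "dog", "dog", "dog", "cat", "dog", "cat", "cat"]
print(cohens_kappa(a, b))  # 0.5: raters agree on 75%, chance is 50%
```

If the raters themselves only reach kappa 0.5, a model matching one rater's labels closely is probably near the practical ceiling, and further tuning is mostly fitting label noise.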


(Kouassi Konan Jean-Claude) #18

I think one more valuable step would be to teach people how to transform their notebook implementations into desktop or mobile apps.
I am planning to work on this, and I think it would be worthwhile if you organized some sessions on it.


(Even Oldridge) #20

Is it possible to create a fast.ai slack channel where we can all communicate in realtime when we’re available?

I chatted with @xinxin.li.seattle going through a paper and it was a really powerful way to not only get motivated but to share knowledge.


(Brendan Fortuner) #21

I like this idea. I started a channel with @Matthew and @sravya8 for the same reason, but it’d be nice to have something official. Could we revive our old Data Institute channel?