Let us know your requests or ideas for Part II of the deep learning certificate here! It’ll be starting in late Feb 2017. More details shortly.
A capstone project would be great. For inspiration: https://www.youtube.com/watch?v=dAl2gimGIpU or https://vimeo.com/146492001
Probably overkill, and it would also require TensorFlow for deployment on mobile devices?
Trying deep reinforcement learning (e.g., training AlphaGo, teaching a computer to play a video game, or getting a racing simulator to post the fastest lap times) and biotech applications like drug discovery would be cool.
Here are some things I’d like to learn more about:
- Generative Adversarial Networks
- Image Captioning
- Anomaly detection with deep learning / anything time series with deep learning
I’m especially interested in anything that improves human performance, resilience and longevity.
I am interested in learning about the following:
- Advanced NLP-related problems (e.g., sentiment analysis)
- Time series prediction and forecasting
- Early prediction/detection of childhood diseases or their symptoms, improving on the state of the art with CNNs (example: predicting symptoms of a disease from images of children)
Customer-service chatbots that resolve customer cases quickly based on customer inputs.
Any of the below:
a) RNNs (e.g. LSTMs) for text analysis tasks beyond simple text classification: e.g. relation or entity extraction, target-specific sentiment analysis
c) More about neural item embeddings for collaborative filtering (CF) systems
d) Definitely interested in chatbots, but I know DL for chatbots is incredibly hyped right now, and having worked on NLI interfaces, I know how hard chatbots are :)). But I'm definitely interested in QA systems in general.
@jeremy About chatbots, I’d say anything that looks like a chatbot would be pretty cool. Not sure whether it’s a practical or pragmatic thing to discuss. In general I’d love to learn in detail about systems that use deep learning and look, superficially, more like ‘artificial intelligence’ than straight classification or prediction problems. I say superficially because I know under the hood it’s likely to always be some kind of prediction or classification problem, but maybe general cases where prediction/classification problems have been morphed into applications that look like AI to an outside observer.
Apologies if this is a very vague comment!
- How to read research papers and incorporate the latest techniques into our models
- Designing Deep Learning experiments in the absence of kaggle-like structure
- Time series (e.g. biometrics, financial)
- Unsupervised learning
- 3D medical imaging
- Deep learning with very small datasets
- How to collect and preprocess messy real world data
- How to start a company that applies Deep Learning
What are some of the topics you’re considering? Curious to hear your ideas
I’m thinking of the following topics as being examples:
- Inception architectures
- Captioning (Combined CNN/RNN encoder/decoder models)
- Localization (e.g. U-net architecture)
- Clustering (e.g. siamese and triplet architectures)
- Attentional models (e.g. for handling large images)
- Memory networks, and applications to Q&A (a la ‘chatbots’)
- More unsupervised and semi-supervised learning (including GANs)
- 3D (including video)
- Time series analysis and anomaly detection, if we can find a good method
- Active learning
…plus of course all the stuff that is going to be invented between now and then!
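To make the clustering item concrete: siamese/triplet architectures boil down to a loss that pulls matching pairs together and pushes mismatched ones apart. Here’s a minimal numpy sketch of a triplet loss (the Euclidean distance and the 0.2 margin are just illustrative assumptions, not a fixed recipe):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: anchor should be closer to positive than to negative,
    by at least `margin` (squared Euclidean distance)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

In a real siamese/triplet network the three inputs would be embeddings produced by a shared CNN, and the loss would be minimized with SGD.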
I would hope we’d look at some medical imaging applications as examples of handling large and 3D images.
Things like cleaning messy data and creating a company seem out of scope of this course. Perhaps we can think about some other venue for covering these important topics?
I really like @brendan’s suggestion of teaching how to read research papers and incorporate the latest techniques into our models. Anything else taught is going to be outdated soon anyway.
@brendan, can you say more about your suggestion of deep learning in the absence of kaggle-like structure? Is that just dealing with the data collection and pre-processing, or do you see another component to it?
One thing following from @brendan’s point about reading papers: I often get thrown off by the math. At the same time I suspect it looks more complicated than it is most of the time, or the authors have done a bad job. Somewhere on the wiki there was a mathematical notation reference; I found that insanely useful.
Yeah, I find that reading the little pseudocode section they generally have is much more useful than the math. If you google any of the techniques we’ve learnt so far and read the original paper that introduced them, you’ll find they all look hideously complex. But we know now that they’re really simple!
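Dropout is a good example: the original paper is dense with notation, but in code it’s just a random mask and a rescale. A minimal numpy sketch of inverted dropout (the function name and the inverted-scaling choice here are just for illustration):

```python
import numpy as np

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training,
    and rescale the survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1 - p)
    return x * mask
```

At test time (`training=False`) it’s a no-op, which is the whole trick.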
@jeremy the topics you listed sound great. You have my vote.
@rachel by absence of Kaggle-like structure I was referring to a situation where we don’t have clean datasets (or any data at all yet), predefined test and training sets, or leaderboards. I would love to learn tips for how to approach a real-life problem where datasets and features haven’t been clearly defined for us. But I agree this might be out of scope for Part II, particularly since we have so much more to learn technique-wise. I’m excited by the topics Jeremy outlined above. And I hope there is a Part III someday.
Yeah… I can’t think of anything deep learning specific about that. It’s a really important topic however. Have you (or anyone else) come across good resources on this?
@rachel your comment about research papers is exactly what I had in mind. How can we stay competitive when fine-tuning VGG becomes commoditized, Google/Microsoft release more and more predictive APIs, and DeepMind/Watson get into healthcare?
@jeremy I think this is the sort of tribal knowledge insiders have at the moment, but the first step is mastering the tools, and learning the topics you mentioned above is definitely my priority.
Well, if no-one can find a good resource for it, then we should probably add it to our intro workshops - cover not just python/numpy basics, but data science process basics too. But I’m hopeful there’s a good book or similar out there somewhere…