But then, at least in US suburbs, there are not that many stores left - it’s pretty much all supermarkets (if we’re talking about food/wine).
Yup, very true - especially for small independent wine/liquor stores. They only survive in big cities/downtowns.
However, the problems of theft prevention, monitoring employees (for non-owner-operated stores), inventory control etc. do apply to all kinds of stores.
That’s a small fraction of the world. For any product, in any population, at any density, I believe this kind of data could be useful. It would make both consumers and businesses happier by saving time and money.
I don’t disagree, but Sanjeev’s business is in that particular small fraction of the world, so I was thinking in that context…
I am voilin from Taiwan. I have a bachelor’s degree in Physics, and I currently work in the advertising industry. My work is applying Deep Learning to applications like recommendation systems. I have been teaching myself ML and DL since last year, and I am planning to work abroad next year. In this course, I hope I can learn more about PyTorch and build something incredible, especially in music generation, which is a personal interest of mine.
I accidentally taught myself Ruby on Rails, left my fairly successful corporate career behind and have been working on a relatively complex rails app for nearly two years now.
I think it was in 2013 that I took the Coursera Machine Learning course by Andrew Ng and fell in love with neural networks. Well, it was a weird phase in my life and I completed a lot of online MOOCs on a lot of subjects, including some heavy-duty machine learning courses (the ones from Geoffrey Hinton and Abu-Mostafa come to mind). Not knowing where to channel my appreciation for neural networks, I did some weird things (a prime example being this repository), tried learning a lot of math, and failed miserably, probably something like two times thus far.
I even wrote a rather embarrassing post about my failures but, well… I am too embarrassed to post it (and it would probably be boring anyhow), but maybe I will at some point!
Fast forward to part 2 v1 of this course, where I met @jeremy and @rachel. They do seem to share a lot of useful information on deep learning (lol @ the understatement, because what they teach and how they do it is L - E - G - E - N - D - A - R - Y), and they also share a lot of meta on learning. Well, I have been particularly screwed in life by being a very obedient kid who learned well, and it sort of carried into my adult life. Unfortunately, I was the sort of person @jeremy described in his lecture yesterday who would find all the theory books they could and would only take action once they literally understood each and every definition… ah, this did lead to some outstanding failures at learning math in my adult life, but I digress.
I got frustrated with Deep Learning, getting nowhere in the last two or so weeks of the fisheries competition, and left for 5 months to deepen my understanding of web dev, which is my daily bread and butter and which I have grown quite weary of by now.
The interesting thing is that the ideas on learning shared by @jeremy and @rachel, and some that I got from this book, seem to have incubated in my mind over that period of inactivity, and I am back, doing things differently now. I specifically make it a point to follow @jeremy’s and @rachel’s advice, hoping it will get me somewhere I have not been before, though starting to blog and use Twitter required quite an epic leap of faith.
What am I hoping to get out of this course? That is a tough question. I am willing to give deep learning another go and to do it well this time. That is really it. I don’t have any domain knowledge and am not sure how I would build it. I am also very mindful of the market forces: despite so many people being very excited about getting a job in deep learning / machine learning, I think my chances of ever getting there are relatively low. I am very familiar with the corporate interview process, having sat on both sides of the table, and if companies are looking for PhD-type people, then regardless of whether that makes sense for them or not, that is what they do. I also specifically make it a point not to keep my hopes up, so apologies if I sound overly pessimistic, more so than the situation really warrants! All this leads me to think that at some point I might set out to do something on my own, but that is probably also very much down the road, for various reasons, if it happens at all.
So I am just hoping to enjoy the ride with all of you while it lasts. I put this post together on what I am planning to focus on for the next 7 weeks; I will revisit it when the course ends and will set a new course for the interim period before the 2nd part. Who knows, maybe I will be fortunate enough to make it into that session as well, and if I am accepted, I will once again set the course for its duration. My plans for now do not reach any further than that.
Let’s enjoy the ride, learn as much as we can and see you around on the forums!
Hi all! My name is Dara. I’m a computational immunologist by training. I actually started my PhD as a wet lab biologist. While working on predicting HIV acquisition in Kenyan women based on a blood test, I quickly got the statistics/ML/programming bug. The majority of my graduate work ended up being computational, so when I graduated I decided to give data science a try. So, for the past year and a half I’ve been working as a data scientist at Collective Health, a super cool health insurance startup in SF.
I’ve done a good amount of data and ML work but this is my first venture into deep learning. Very excited! I’m working on formulating a biology-driven project in which to apply my learnings. Look forward to working with you all!
Let me start by thanking @jeremy and @rachel for allowing us to join them on their mission to make neural nets uncool again. So happy to be able to join this course and all you motivated individuals. The energy in these forums is just infectious, reminds me of my school days. So, here it goes, maybe a bit of a lengthy intro, which I blame on all of you
I’ve been working as a software/data developer for some years now, and have been involved in building internet/web-based systems. I am particularly interested in topics such as programming languages, functional programming, distributed systems, and statistical & machine learning.
The machine learning MOOC launched by Andrew Ng in 2011 (before it grew to become Coursera) was my first introduction to ‘modern’ machine/statistical learning.
Fast forward to the present. I’d heard about this course sometime around February, and had been planning to go through it ever since. Luckily, I heard about this year’s run and the fellowship program, applied for it, and here I am.
My main goal with this course is to develop enough intuition/understanding about Deep Learning through examples that it actually feels un-magical and rather boring. Thanks again, Jeremy, for the AMIs, notebooks, fastai library etc., so we get to spend more of our brain cycles on the ‘Why’ and less on the ‘How’ for now.
Going forward, I’d like to be able to use deep learning in real-world products, especially in underserved industries and for social good, wherever possible. I also strongly believe that, as we go forward, there will be even more ML models governing our lives; as technology builders, it is our responsibility to question/inspect the models and relevant datasets that might carry biases, so that we ‘measure twice & cut once’ and help build a future with products that are empowering, ethical and inclusive.
Looking forward to all the fun and engaging discussions here.
Come to think of it, video analytics is a very hot subject - intelligent video analytics is mentioned twice in the announcement that just came out:
NVIDIA Announces New AI Partners, Courses, Initiatives to Deliver Deep Learning Training Worldwide
I’m a software engineer. I have been programming since I was 6, so almost as long as I can remember. I have worked primarily on web applications and eCommerce, mostly using Microsoft technologies (.NET, C#, SQL Server). In the last few years I have been working a lot on data-related things, chatbots and machine learning applications. Currently, I’m managing a team of machine learning engineers at JustAnswer.
I live in SF Bay Area (Redwood City).
My goal for this course is to learn more about the black magic of deep learning: the tips and tricks of training models for real-world applications.
I haven’t chosen a problem I’ll work on throughout the course, but most likely it will be something related to self-driving cars.
I’m Jacopo, and I’m a Junior Fellow at CERN. In particular, I’m working on INSPIRE, a search engine for the literature in High Energy Physics. My Bachelor’s was in Mathematics, while my Master’s was in Computer Science, with a focus on Information Retrieval.
I’m interested in Deep Learning because I want to build software to extract and curate metadata from the PDFs that we harvest from arXiv and publishers. For example, given a string like
 R.H. Schindler et al., Phys. Rev. D 24, 78 (1981)
I want to reconstruct that this is a reference to a paper published in 1981 in the Phys. Rev. D journal, volume 24, page 78… As far as I know, the RNNs that we are going to learn about later in the course will help me with that.
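To make the task concrete, here is a naive regex baseline for well-formed strings like the one above; any learned model would have to beat something like this on messy input. (The pattern and field names are my own illustration, not INSPIRE’s actual code.)

```python
import re

# Naive baseline: parse references of the shape
# "<authors>, <journal> <volume>, <page> (<year>)"
CITATION_RE = re.compile(
    r"(?P<authors>.+?),\s+"            # "R.H. Schindler et al."
    r"(?P<journal>[A-Z][\w.\s]+?)\s+"  # "Phys. Rev. D"
    r"(?P<volume>\d+),\s+"             # "24"
    r"(?P<page>\d+)\s+"                # "78"
    r"\((?P<year>\d{4})\)"             # "(1981)"
)

def parse_reference(s):
    """Return a dict of citation fields, or None if the string doesn't match."""
    m = CITATION_RE.search(s)
    return m.groupdict() if m else None

ref = parse_reference("R.H. Schindler et al., Phys. Rev. D 24, 78 (1981)")
# ref == {'authors': 'R.H. Schindler et al.', 'journal': 'Phys. Rev. D',
#         'volume': '24', 'page': '78', 'year': '1981'}
```

Of course this falls apart as soon as the journal name contains digits, the author list contains parentheses, or the fields come in a different order - which is exactly why a sequence-labelling model is attractive here.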
Note however that this is a “nice” subproblem: to get there, we first have to identify the reference section in the PDF itself, segment it into single references, and reliably extract the text without missing characters… In fact, in the worst case this becomes as hard as doing OCR of typewritten documents with handwritten mathematical formulas.
Others are working on variants of this problem, among which arXiv itself (https://blogs.cornell.edu/arxiv/2017/09/27/development-update-reference-extraction-linking/) and eLife Sciences (https://github.com/elifesciences/sciencebeam). My aim is to be able to help them, or at least investigate the problem enough to know what absolutely can’t work : )
I’m Attila, working a full-time job as a telco engineer in Budapest, doing integration, testing, and sometimes BA roles. I started programming 6-7 years ago to automate my job. I was writing simple Perl scripts in the beginning, then I discovered Python, and a couple of years ago AI & ML. I’ve taken a few AI/ML courses on edX, Coursera and Udacity (e.g. the ML nanodegree, Self-Driving Car nanodegree T1, Intro to Deep Learning).
My goal with the course is to apply AI to telecom network element configuration and support, and to try to land a data scientist or deep learning specialist position.
Another goal would be to build a baby-cry-to-human translator – this would give me more time for studying
By training, I’m a computer scientist and bioengineer (medical image analysis).
I’m an instructor at Universidad Central de Venezuela and a software engineer at the Algorithmic Nature Lab.
I’m interested in using state of the art CNNs and GANs to tackle the problem of brain tumor segmentation. I want to participate in the MICCAI BRATS Challenge next year (see www.braintumorsegmentation.org)
Let me know if this project interests you!
Sorry, that one has already been done http://simpsons.wikia.com/wiki/Brother,_Can_You_Spare_Two_Dimes%3F
My background is in Psychology and Neuroscience, but I’ve been working in AI and Robotics since 2010 when I did internships at Anybots and Numenta.
After a couple of years as a software engineer at Numenta I briefly ran a robotics company of my own and then joined up as a robotics engineer at Fetch Robotics.
I’ve enjoyed working on algorithm design, backend platform code (python) and front-end UIs (react). In the ML world I’m really interested in agent based continual learning. In particular whether agents with simulated musculature and proprioception can be useful platforms for deep reinforcement learning.
Most recently I’ve been doing Andrew Ng’s DL specialization course series, watching updated cs231n videos, and trying to keep up with arxiv papers.
Late to the party, however, here is my answer.
I am at that point in my life where I think I have to make decisions and take on more responsibility. I was introduced to Machine Learning way back in 2011-2012 by Andrew Ng, when I was in college. I always wanted to do Kaggle competitions, but believed that I did not have the tools. I think Part 1 of the course made me more confident about the empirical side of Deep Learning. I am interested in joining grad school next year. Hopefully the course material will give me a better understanding.
It seems that I am the youngest and least experienced undergraduate student with access to this wonderful forum…
Thanks a lot for really making neural nets uncool…
Trying to Learn and Improve Everyday…
My background seems to be somewhat different from most of you - and not in a way that would make it easier for me to grasp all the content
Although I have studied computer science, I have never been a developer in my professional life. Instead, I have worked with risk management of banks’ capital markets business during the past 13 years. We have a lot of structured data waiting to be utilised (I have many ideas…) - and I am in particular interested in learning techniques that would help me understand how to apply deep learning to it - like the Rossmann Kaggle competition example in part 2.
I have been following the fast.ai website - and especially @rachel’s excellent blog - for the past year, and when I saw the opportunity to apply for the international fellowship, I thought that this could be my opportunity to dive into this and learn something extremely exciting.
I am very grateful for the opportunity to participate in the course! I do realise that I will have to put in many more hours than most of you, but I am committed to this. Let’s see whether it will be enough
Cool! How do you feel like your web development background complements/aids what you’ve been doing with deep learning more recently?
Hi everyone! I work at Visteon in Pune, India. My job involves working on the Board Support Package (BSP), including the Linux kernel and device drivers.
I got interested in the A.I. field when I started watching the TV series “Person of Interest”. That interest turned into active engagement when AlphaGo won against Lee Sedol in 2016. Since then I have been teaching myself. I have completed Andrew Ng’s Machine Learning course, and I am currently working through the Deep Learning Specialization.
I am really excited to be a part of this course. Thanks @jeremy for this International Fellowship program.