- Get acquainted with PyTorch and the fastai library.
- Revisit both the DL and ML videos multiple times.
- Take part in Kaggle competitions.
- Keep coding (learn more about the C++ STL and Python).
- And try to work through the Deep Learning Book…
Peace
Interested in working with you on Tamil MNIST.
Go back over the lectures, run through the notebooks while tweaking parameters and trying different models, create one from scratch, hopefully get into a natural rhythm of fine-tuning, try using my own dataset, review my linear algebra, maybe make a documentation contribution to the fastai library (if I can figure things out well enough), and study for coding interviews.
Cool. I’ll let you know once I get going with it. I have been searching for and collecting datasets.
I’d be interested in joining a long-term study group.
I’ll create one and keep you posted.
Here’s my plan:
Also, I am in Mumbai for a week starting next week. Would love to catch up with fellow students; please ping me.
In my day job, I work on chatbots/conversational AI. I am also very interested in price prediction and recommender systems projects. So here is my plan:
I buried myself in projects:
Working on medical images has been slower than expected due to privacy concerns. Looking at kidney glomeruli slides.
The Mercari Kaggle competition, but I need to get fastai into the Kernel (it looks like others are working on it).
I also got stuck into the https://halite.io/ competition. I can’t get machine learning to work on it yet (currently using Python), but I feel that I can after a bit more time. Plus I like watching the games, even if my ships are very dumb.
I just received $500 to play with for the TensorFlow voice recordings. I plan on trying lots of data augmentation in my solution.
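Not the poster’s actual pipeline, but for anyone curious what waveform data augmentation can look like, here is a minimal plain-Python sketch of two common tricks: random time shifting and additive background noise. The helper names (`time_shift`, `add_noise`) are made up for illustration; a real setup would work on numpy arrays of sampled audio instead of Python lists.

```python
import random

def time_shift(samples, max_shift):
    """Randomly shift a waveform left or right, zero-padding the gap.

    Keeps the clip the same length, so all augmented examples
    still fit the same model input shape.
    """
    shift = random.randint(-max_shift, max_shift)
    if shift > 0:
        return [0.0] * shift + samples[:-shift]
    if shift < 0:
        return samples[-shift:] + [0.0] * (-shift)
    return samples

def add_noise(samples, noise_level):
    """Mix uniform background noise into the waveform."""
    return [s + random.uniform(-noise_level, noise_level) for s in samples]

# Example: augment a (fake) 16-sample clip.
clip = [0.0] * 16
augmented = add_noise(time_shift(clip, max_shift=4), noise_level=0.05)
```

The idea is to generate many slightly different versions of each training clip, which tends to matter a lot on small audio datasets.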
My company moonshot: working on setting up a small experiment looking into fraud at work. There’s maybe a 10% chance there is something there worth moving to production, but I’ll be excited if it works.
Hey man! Let’s meet up.
For me, I feel like nothing really changes. There is still so much material from part 1 I need to work through. Maybe I will take my foot off the gas pedal a bit and start going to sleep earlier; that would be nice. But that time when my family goes to sleep can be so productive… ahhh!
Main objective for the weeks to come: monitor this forum and the fast.ai website, and write a (hopefully good) email once applications for part 2 open.
Speaking of writing, I feel like another Medium post is a bit overdue, so I should probably start working on that as well.
I might have also been bitten by the competition bug a little bit; maybe I will try to enter a competition here and there, especially as there are so many still running that are very much aligned with what we learned (Icebergs, Favorita).
To everyone who has not watched part 1 v1: I have not finished lecture 6 nor started lecture 7, so maybe this was covered there in greater depth, but there is no one-to-one overlap between this part 1 and v1, and there is one aspect I think could be very useful for anyone willing to go deeper: the bias vs. variance trade-off and the overall strategy for attacking a DL problem, which is covered very nicely in part 1 v1 (being able to see this reasoning applied to more problems really helps, I think). In general, the material from Jeremy is so amazing that I recommend getting your hands on as much of it as you can. I will certainly be coming back to those lectures.
Having said that, after watching the part 1 v1 videos I thought nothing much better could happen, and I think with v2 Jeremy absolutely outdid himself! This continues to be a spectacular experience, and the ML videos were a big, very tasty (in fact cake-sized) cherry on top. I am beyond words to express how grateful I am to be able to take part in this journey.
Other than that, I am hoping to frequent this forum and to see you all here. As we are still in the last week of the course, and with the 19th of March fast approaching, there is plenty of work to be done.
Maybe you already know OpenAI Gym (https://gym.openai.com/docs/). It offers code with which you can get started on reinforcement learning very easily.
Eagerly waiting for part 2, and I want to be well prepared for it by then.
I do. I actually have a library that sits on top of Gym and TensorFlow, but there are still a lot of algorithms to implement. My idea now is to try to implement all these algorithms in PyTorch.
I posted this in a previous thread, but for those interested in Kaggle competitions: Coursera launched a course on “How to win Kaggle competitions”, taught by several Grandmasters, including the current rank #2, KazAnova from H2O.ai.
You can audit the entire course for free, but you will need to subscribe ($49 a month) to run the different assignments/projects.
Note that it covers all kinds of competitions and libraries and does not use PyTorch, so a look at Jeremy’s ML1 course might help.
Write an online article.
Write a journal article (applied, not theory; not yet anyway, maybe after Part 2 v2).
I’m trying to apply what I’ve learned / am learning in my research. We’ll see, but I hope (personal goal / New Year’s resolution) to submit, citing fast.ai in the references, before requests for Part 2 v2 go out.
Re-watch everything a 4th time (I’m definitely on the way for the later classes). I record myself coding along with the videos and then try to figure out where I messed up on Tuesday mornings (very meta). I’m going for that memorized transcript that @jeremy mentioned.
sleep
Hi ange, I would love to know what ML/NLP product you are building. I am working on NLP/chatbots.
Hi @nahidalam, I was thinking of NLP products that ingest lots of financial news sources and use them to derive market sentiment (buy/sell/hold). An NLP Moody’s, perhaps? Will your chatbot be focused on a particular industry?
Nobody wants to go hunting; interesting.