Study plan after this course?

  • Get acquainted with PyTorch and the fastai library.
  • Revisit both the DL and ML videos multiple times.
  • Take part in Kaggle competitions.
  • Keep coding (learn more about the C++ STL and Python).
  • And try to work through the Deep Learning book…

Peace

1 Like

Interested in working with you on Tamil MNIST.

1 Like

Go back over the lectures, run through the notebooks while tweaking parameters and trying different models, create one from scratch, hopefully get into a natural swing of fine-tuning, try using my own dataset, go over my linear algebra, maybe make a documentation contribution to the fastai library (if I can figure things out well enough), and study for coding interviews.

Cool. I’ll let you know once I get going with it. I have been searching for and collecting datasets.

I’d be interested in joining a long-term study group.

I’ll create one and will keep you posted.

2 Likes

Here’s my plan:

  1. Thank Jeremy, Rachel, Yannet, and the forum members :slight_smile: Thank you, people!
  2. Go through all the notebooks and lecture videos.
  3. More Kaggle competitions.
  4. Try and build a product.
  5. Forum :slight_smile:
  6. Contribute to Fast.ai library.
  7. Blog more often.

Also, I am in Mumbai for a week starting next week. Would love to catch up with fellows :slight_smile: Ping me please.

In my day job, I work on chatbots/conversational AI. I am also very interested in price prediction and recommender systems projects. So here is my plan:

  1. Work on a side project related to chatbots
  2. Write a blog post on chatbots/NLP
  3. Contribute to an open source conversational AI framework
  4. Work on a Kaggle competition related to price prediction
  5. Review Part 1 lectures, especially the ones related to NLP, structured data, and recommender systems
2 Likes

I buried myself in projects:

  1. Working on medical images has been slower than expected due to privacy concerns. Looking at kidney glomeruli slides.

  2. Kaggle competition for Mercari, but I need to get fastai into the Kernel (it looks like others are working on it).

  3. I also got stuck in the https://halite.io/ competition. I can’t get machine learning to work on it right now (currently using Python), but I feel that I can after a bit of time. Plus I like looking at the games, even if my ships are very dumb.

  4. I just received $500 to play with TensorFlow voice recordings. I plan on playing with lots of data augmentation for my solution.

  5. My company moonshot: setting up a small experiment looking into fraud at work. There’s a 10% chance there is something worth moving to production, but I’ll be excited if it works.

12 Likes

Hey man! Let’s meet up.

1 Like

For me I feel like nothing really changes :slight_smile: There is still so much material from part 1 I need to work through :slight_smile: Maybe I will take my foot off the gas pedal a bit and start going to sleep earlier; that would be nice. But that time when my family goes to sleep can be so productive… ahhh!

Main objective for the weeks to come: monitor this forum and the fast.ai website and write a hopefully good email once applications for Part 2 open :slight_smile:

Speaking of writing, I feel like another Medium post is a bit overdue, so I should probably start working on that as well :slight_smile:

I might also have been bitten by the competition bug a little bit. Maybe I will try to enter a competition here and there, especially as there are so many still running that are very much aligned with what we learned (Icebergs, Favorita).

To everyone who has not watched part 1 v1: I have not finished lecture 6 nor started lecture 7, so maybe this was covered in greater depth there, but there is no one-to-one overlap between this part 1 and v1, and there is one aspect that I think could be very useful for anyone willing to go deeper: the bias vs. variance trade-off and the overall strategy for attacking a DL problem, which is covered very nicely in part 1 v1 (being able to see this reasoning applied to more problems really helps, I think). In general, the material from Jeremy is so amazing that I recommend getting your hands on as much of it as you can :slight_smile: I will certainly be coming back to those lectures.

Having said that, after watching the part 1 v1 videos I thought nothing much better could happen, and I think with v2 Jeremy absolutely outdid himself! This continues to be a spectacular experience, and the ML videos were a big, very tasty - in fact cake-sized - cherry on top :slight_smile: Words cannot express how grateful I am for being able to take part in this journey.

Other than that, I am hoping to keep frequenting this forum and to see you all here :slight_smile: As we are still in the last week of the course and with the 19th of March fast approaching, there is plenty of work to be done :slight_smile:

6 Likes

Maybe you already know OpenAI Gym (https://gym.openai.com/docs/). It offers code with which you can very easily start doing reinforcement learning.
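
For anyone who wants to see how small the starting point can be, here is a minimal sketch of a random-agent loop, assuming the classic gym API and the CartPole-v0 environment (any environment from the docs works the same way):

    import gym

    # run a few episodes of CartPole with random actions, just to see the Gym loop
    env = gym.make('CartPole-v0')
    for episode in range(5):
        observation = env.reset()      # initial observation for the episode
        done = False
        total_reward = 0.0
        while not done:
            action = env.action_space.sample()                  # pick a random action
            observation, reward, done, info = env.step(action)  # advance one timestep
            total_reward += reward
        print('episode %d finished with total reward %.0f' % (episode, total_reward))
    env.close()

Swapping the env.action_space.sample() call for the output of a policy network is essentially where the learning begins.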

7 Likes
  1. My main plan/goal is to just “write more code” using the DL techniques learned and fast.ai in the coming months.
  2. I want to revisit the lessons on specific topics as I encounter them during #1.
  3. I am currently participating in the Kaggle Icebergs challenge. The immediate step for me is to continue with it and achieve a decent submission.
  4. Contribute to the overall wiki/book for this course.
  5. Take these techniques onto mobile. I would love to see these on devices using Core ML.

Eagerly waiting for Part 2, and I want to be well prepared for it by then.

3 Likes

Study plan:

  • Review the DL and ML videos
  • Tackle Kaggle challenges and document them
  • Complete the GDS machine learning project and document it
  • Apply DL techniques to the fast.ai project that I applied with
  • Build an ML/NLP product with a GUI
  • Join a study group on the Deep Learning book
  • Keep on improving my Python here and there
1 Like

I do. I actually have a library that sits on top of Gym and TensorFlow, but there are still a lot of algorithms to implement. My idea now is to try to implement all these algorithms in PyTorch.

1 Like

I posted this in a previous thread, but for those interested in Kaggle competitions, Coursera launched a course on “How to win Kaggle competitions” taught by several Grandmasters, including the current rank #2, KazAnova from H2O.ai.

You can audit the entire course for free but will need to subscribe ($49 a month) to run the different assignments/projects.
Note that it covers all kinds of competitions and libraries and does not use PyTorch, so a look at Jeremy’s ML1 course might help.

3 Likes
  1. Write an online article.

  2. Write a journal article (applied, not theory; not yet anyway, maybe after Part 2 v2).

  3. I’m trying to apply what I’ve learned / am learning in my research. We’ll see, but I hope (personal goal / New Year’s resolution) to submit and to cite fast.ai in the references before requests for Part 2 v2 go out.

  4. Re-watch everything a 4th time (I’m definitely on the way for the later classes). I record myself coding along with the videos and then try to figure out where I messed up on Tuesday mornings (very meta). I’m going for that memorized transcript that @jeremy mentioned :slight_smile: .

  5. sleep

1 Like

Hi ange, I would love to know what ML/NLP product you are building. I am working on NLP/chatbots.

1 Like

Hi @nahidalam :slight_smile: I was thinking of an NLP product that ingests lots of financial news sources and uses them to derive market sentiment (buy/sell/hold). An NLP Moody’s? Will your chatbot be focused on a particular industry?

1 Like

Nobody wants to go hunting, interesting.