Making the most out of Part 2 v2

Hello everybody!

It’s a really great time to be in this field. As part of the fast.ai community, I am as thrilled and excited as you all are to get started and learn as much as possible over the upcoming 7 weeks.

Ever since I worked through Part 1 v2 (the PyTorch version), I haven’t been able to keep myself away from learning more and more about PyTorch.

In preparation for Part 2 starting next week, I have been doing the following:

  1. Working through Part 1 v2 of the course again as a whole, to get a better grasp of the finer details I missed in the first pass
  2. Focusing on understanding and implementing everything from simple models (linear and logistic regression) to complex ones (LSTMs, GANs) in plain PyTorch. This should build my confidence with PyTorch and make Part 2 easier to absorb
  3. Participating in as many Kaggle competitions as possible, to get used to the end-to-end workflow of submitting a solution
  4. Disconnecting myself, for now, from anything written in TensorFlow or Keras, even though I have worked with both for quite a while, so as to maintain a pure focus on PyTorch
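To make point 2 concrete, here is a minimal sketch (assuming torch is installed; the toy data and hyperparameters are my own choices, not from the course) of logistic regression trained in plain PyTorch:

```python
import torch

torch.manual_seed(0)

# Toy data: 200 points in 2-D, labelled 1 when x1 < x2 (linearly separable).
X = torch.randn(200, 2)
y = (X[:, 0] < X[:, 1]).float().unsqueeze(1)

model = torch.nn.Linear(2, 1)            # logits = X @ w + b
loss_fn = torch.nn.BCEWithLogitsLoss()   # sigmoid + binary cross-entropy in one op
opt = torch.optim.SGD(model.parameters(), lr=0.5)

for _ in range(200):                     # full-batch gradient descent
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

preds = (torch.sigmoid(model(X)) > 0.5).float()
accuracy = (preds == y).float().mean().item()
```

The nice part is that the zero_grad/backward/step loop stays exactly the same when you swap the model and loss for an LSTM or a GAN pair, which is what makes these warm-up exercises transfer to the bigger models.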

Let’s hope I complete most of the targets I have set for myself before the course starts.

During the course (19th March onward), I plan to follow this outline to maintain my focus and extend my learning beyond the 7-week course timeline:

  1. Be an active student during the classes, taking rigorous notes on the concepts that are important to grasp in the moment
  2. Write blog posts explaining the concepts taught in the course, adding my own learnings
  3. Read 3-4 papers weekly and summarize them on my ML blog for future reference
  4. Implement the ideas covered in class within the same week, to solidify my understanding
  5. Collaborate with other fast.ai students to discuss and learn from their learning experiences
  6. Learn to set aside my apprehensions and ego, and ask even the simplest questions I have during the course

For most of us, me included, this course isn’t just a technical learning experience but an opportunity to discuss and debate ideas with people who share the same passion for machine learning and deep learning.
And I hope to make the most out of this!
Wish you all the best for the course.

Will keep this thread updated!

Any comments/suggestions are welcome.

What do you think @jeremy @anandsaha @radek?


Hi, @mandroid6 .
Thank you for making this post.
I now have something to revisit whenever I feel lost.

For now, I am implementing Inception and ResNet in PyTorch.

Also, I think you should not disconnect yourself from TensorFlow or Keras. Focus on PyTorch, but keep the workflow of both frameworks in mind. In the long run, it will be helpful if you ever want to build a fastai wrapper for TensorFlow.

Happy studying!

Hi @SHAR1,

Yes, I definitely won’t forget the core ideas behind TensorFlow/Keras, as they will be helpful later someday.

Just for the duration of this course, I’m trying to minimize my interactions with them.

Thanks for the insight, will surely keep this in mind :smiley:


Really great material for anyone to learn and practice working with PyTorch:



I’d request anyone who finds good ML/DL resources to post them in this thread!

Good day! :slight_smile:


I will keep my main focus on Keras+TF, given that it’s the tool I use daily at my job, but for the rest of the day PyTorch will be the chosen one!

Great to network with people working with these tools on a daily basis :slight_smile:
My job mostly involves working with Spark, H2O.ai and Java.

There is definitely a lot to learn from you @DavideBoschetto, let me know whenever you are available!

I like your plan. I’ll try some of it with you, especially reading papers, implementing concepts and blogging. Like Rachel said, blogging really helps solidify understanding.

One more resource to add that really helped me when starting out:

Videos, slides and code were very insightful.

Looking forward to discussing and working with you! :slight_smile:

And yes, these presentations are also great for getting an understanding of deep learning. Thanks for posting!

I am also planning to create a list of the research papers covered in each class, on the same day, while the ideas/concepts are fresh.
This would help us all quickly read through them during the week, without spending much effort finding the material.

What do you think?

My “most out of part1_v2” was understanding that “everything is possible in the DS world”. Just look at Jeremy and see what a human can do. Let’s make it concrete:

  • it is possible to read the “source code”: that was the most valuable part of part1_v2 for me. I had not read library source code before. Reading source code can significantly boost your coding skills (if you are not a seasoned programmer already).
  • it is possible to read papers with modern state of the art approaches
  • it is possible to implement those approaches on your own or with some help

After looking at the kind of work and creativity shown by the fast.ai students, I have certainly started believing this.
@jeremy is currently an exception who makes us believe in our capacity to do even more. :smile:

I am going to start doing this today evening, after reading your posts and @anandsaha 's work on the forum. Thanks to both of you!

Yes, I want to build this confidence myself, but I haven’t had that feeling yet. I know part 2 will definitely push me in that direction. Any pointers on starting some paper reading before the course?

@sermakarevich thanks for the link. Going to understand and work through it today evening. :slight_smile:


Part 1 (v2) was my first introduction to fast.ai and I really enjoyed it a lot. This time around, I’m hoping I can make more time to get involved in the forum discussions. I could barely catch up with the conversations last time.

One thing I certainly wish is to get to know and talk to more people here, so we can continue casual discussions, knowledge sharing etc. on other platforms as well (e.g. Twitter) after the course is over.

Looking forward to getting deeply overwhelmed :joy:


Hello everyone,

So, as planned, I have started publishing my blog posts on Medium :smile:. It’s really a great feeling to have your own content read by people around the world.

This is the first of the many upcoming blog posts I will be sharing:
https://towardsdatascience.com/cnn-part-i-9ec412a14cb1

Let me know what you think. :slight_smile:

I have also been writing posts about the basics of machine learning and neural networks on my GitHub page,
mandroid6.github.io.
But I guess Medium makes it much easier to publish and spread the content.

What do you think?

Thinking the same! :smile:
I have always felt that online courses like those on Coursera and Udemy lack the human-interaction component, but the teaching style of @jeremy & @rachel and the availability of a platform like this forum are a gift for us students.

Learning to improve our communication and networking skills in various ways is another key takeaway for all of us. :slight_smile:


I had a second go through the pascal notebook using Google Colaboratory, thanks to @sourabhd 's post.

Every single step in the notebook is important from a software engineering perspective, in addition to deep learning:

  1. List comprehensions
  2. Using dictionaries is really important throughout; I also got to understand the defaultdict use case
  3. Will start using pathlib across all projects
  4. Constants instead of strings, since we get tab-completion and can’t mistype them; this is pretty basic, but I never paid much attention to it before
  5. Efficiently using Visual Studio Code to read and understand library source code whenever any doubt arises; the most useful tool for working with any project repo
  6. The Python debugger, which existed but went unnoticed earlier; pdb.set_trace() is really great for stepping through a NN method call flow
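A small stdlib-only illustration of points 2-4 above (the file names, labels and paths here are made up for the example, not from the notebook):

```python
from collections import defaultdict
from pathlib import Path

# 4. Constants instead of raw strings: a typo becomes a NameError at once,
#    and editors can tab-complete the names.
CAT = 'cat'
DOG = 'dog'

# 2. defaultdict: append to a key without checking whether it exists first.
files_by_label = defaultdict(list)
for name, label in [('1.jpg', CAT), ('2.jpg', DOG), ('3.jpg', CAT)]:
    files_by_label[label].append(name)

# 3. pathlib: the / operator joins path parts, no manual separators needed.
data_path = Path('data') / 'pets'
cat_paths = [data_path / f for f in files_by_label[CAT]]
```

With plain strings, a misspelled label would silently create a new key; with constants, the same typo fails loudly, which is exactly the point of item 4.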

A lot more to learn and understand before next week’s lesson. Excited to discover more content and projects related to deep learning! :sunny: :smile:


This has been the TL;DR of lecture 1 for me!
Simple things, but things that I was not using (pathlib, constants, and especially defaultdict).


Well, apologies for a slightly long post. Now that I have put up a disclaimer, …

I have been an avid follower of fast.ai since part 1. However, I started focusing on the lectures and learning the intricacies with the part 2 v1 course. After being an international fellow for that course, I realised I could put in 10 hours in some weeks, but not every week. It was difficult to make that much time, and I have immense respect for those who manage it while doing a full-time job.

Exactly a month ago, on the 22nd of February, it struck me: why shouldn’t I take this course in person? I immediately applied to the programme and contacted the Data Institute about my candidature, since I had ~20 days to receive admission, apply for a US visa, figure out my office commitments, etc. In fact, a week later I gave up on my in-person plans. I’ll take this moment to specially mention @Moody, who was instrumental in inspiring me to put in the effort on the visa and office commitments regardless of the admission result. She said, “Don’t hope for a miracle. Give your best efforts and activate the miracle.” A couple of days later, @jeremy posted about the study hours at the Data Institute, and I figured this was my best chance to take a leap of faith against the visa timelines, so that I could fly down, focus only on deep learning for the next 45 days and collaborate with all the amazing peers here.

I spoke to the managers at my workplace and they granted me a sabbatical. I applied for the visa and was immediately granted one. I booked my flight tickets before even completing my visa interview and, guess what, I received my passport a day before my flight. I pinged a couple of friends in SF and they are ready to host me at their place. Wow. It felt as if the jigsaw puzzle was coming together one block after another, and everything just fell into place. Today is the 22nd of March, exactly a month since I started this endeavour, and everything is set!

Right now, I’m writing this from a small airport in India, waiting for the flight that will carry me over the oceans, 13,000 km all the way to San Francisco! Just for this course and learning environment. :slight_smile: I’m so happy I’m able to make it to the campus. I wouldn’t have done any of this had it not been for Jeremy, @rachel, their team at USF and all the students’ efforts on the forums. You people truly make this a wonderful place to learn.

Now that the logistics have been figured out, I’ve made a few plans to make the most of this course.

  • I plan to regularly attend the study hours, learn with the peers and work on Jeremy’s coding exercises. :wink:
  • I plan to write about my deep learning work on my blog. I’ve only written about my previous ML experiences and hackathons but it’s time I write more on the DL aspects.
  • I plan to become much more active and involved in the forums and help as many students as I can to the best of my ability. I’m sure I’ll make mistakes but hey, “learning is free” in this process!
  • Work on a couple of high-impact problems. I’d like to speak to as many peers as I can and take up a long-term project, like a capstone. I personally think this is something that’s missing in fast.ai. A lot of us had questions like “what next after finishing the class exercises?” Enforcing a capstone project wouldn’t be a bad idea IMHO. @All, any thoughts on this?
  • Most importantly, have fun along the way.

I look forward to meeting all of you soon.

Phani.


Hats off to you :star_struck: @binga!
I mean, WOW. I feel so excited and happy just knowing that someone has done this.

That’s a really big leap of faith you took, and it’s definitely going to pay off. :sunglasses:

Now I wish I had the guts :thinking: and initiative to do something like this :crazy_face:


Now, that’s a bold and fun move. :clap: :cake: :champagne:

Congrats on getting all those logistics sorted! Looking forward to seeing you here at USF :slight_smile:


After today’s lecture (lesson 9), it’s clear what pace is going to be maintained throughout this course. :crazy_face:
Hence it is important to start working on the exercises and readings from day one, so we can keep up.

Below is a rough list of all possible readings and resources for this week:

Research Papers:

  1. YOLO - https://pjreddie.com/media/files/papers/YOLOv3.pdf
  2. SSD - https://arxiv.org/pdf/1512.02325.pdf
  3. RetinaNet - https://arxiv.org/abs/1708.02002
  4. MSC-MultiBox - https://arxiv.org/abs/1412.1441

Related Articles and Videos:

  1. Understanding SSD for real time object detection -
    https://towardsdatascience.com/understanding-ssd-multibox-real-time-object-detection-in-deep-learning-495ef744fab
  2. Understanding Anchors through Excel -
    https://docs.google.com/spreadsheets/d/1ci7KMggF-_4kv8zRTE0B_u7z-mbrKEzgvqXXKy4-KYQ/edit?usp=sharing
  3. Spatial Transforms -
    http://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
  4. RCNN CS231n -
    https://youtu.be/nDPWywWRIRo?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
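As a companion to the anchors spreadsheet above, here is a hedged sketch (plain Python; the function name and the 4x4 grid size are my own choices for illustration) of generating the grid of anchor-box centres in [0, 1] coordinates that SSD-style detectors tile over the image:

```python
def make_anchor_grid(k):
    """Return (cx, cy, w, h) tuples for a k x k grid of square anchors
    in relative [0, 1] image coordinates."""
    size = 1.0 / k                                   # each cell spans 1/k of the image
    offsets = [(i + 0.5) * size for i in range(k)]   # centres of the k cells
    return [(cx, cy, size, size) for cy in offsets for cx in offsets]

anchors = make_anchor_grid(4)   # 16 anchors, one per cell of a 4x4 grid
```

Each cell of the convolutional grid owns a single anchor here; real detectors additionally replicate each cell's anchor at several aspect ratios and scales.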

Important Additional Readings:

  1. Understanding cyclic learning rate -
    https://arxiv.org/abs/1506.01186, http://forums.fast.ai/t/understanding-use-clr/13969
  2. Utilizing the efficiency of pandas as suggested by @binga in his notebook -
    https://gist.github.com/binga/336258dd5965e77df6b8744b87154164, https://tomaugspurger.github.io/modern-1-intro.html
  3. Pathlib understanding -
    http://pbpython.com/pathlib-intro.html
  4. Great resource to understand VAEs -
    https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
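For reading 1, the triangular schedule from the cyclic learning rate paper linked above can be sketched in a few lines (the function and parameter names are my own, not from the paper):

```python
def triangular_clr(iteration, step_size, base_lr, max_lr):
    """Linearly ramp the learning rate from base_lr up to max_lr and back,
    repeating every 2 * step_size iterations."""
    cycle_pos = iteration % (2 * step_size)
    # Scaled distance from the nearest cycle boundary: 0 at the bottom
    # of the triangle, 1 at the peak.
    x = 1.0 - abs(cycle_pos / step_size - 1.0)
    return base_lr + (max_lr - base_lr) * x
```

With step_size=100, the LR climbs from base_lr to max_lr over iterations 0-100, falls back to base_lr by iteration 200, and then the cycle repeats.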

This list is in no manner exhaustive, so please add any additional readings/resources you find useful. :slight_smile:

Super charged after today’s lesson :star_struck:

What do you suggest our approach should be with respect to other video resources like the CS231n lecture above? They are great, but they require a time investment that could also be spent implementing the models taught in today’s lesson. @jeremy
