Making the most out of Part 2 v2

My “most out of part1_v2” was understanding that “everything is possible in the DS world”. Just look at Jeremy and see what one human can do. Let’s make it concrete:

  • it is possible to read the “source code” - that was the most valuable part of part1_v2 for me. I did not read library code before. Reading source code can significantly boost your coding skills (if you are not a programmer already).
  • it is possible to read papers with modern state of the art approaches
  • it is possible to implement those approaches on your own or with some help
7 Likes

After looking at the kind of work and creativity shown by the fast.ai students, I have certainly started believing this.
@jeremy is currently an exception who makes us believe in our capacity to do even more. :smile:

I am going to start doing this today evening, after reading your posts and @anandsaha 's work on the forum. Thanks to both of you!

Yes, I want to build this confidence myself, but I haven’t had that feeling yet. I know that part 2 will definitely push me in that direction. Any pointers on starting some paper reading before the course?

@sermakarevich thanks for the link. Going to understand and work through it today evening. :slight_smile:

2 Likes

Part 1 (v2) was my first introduction to fast.ai and I really enjoyed it a lot. This time around, I’m hoping I can make more time to get involved in the forum discussions. I could barely catch up with the conversations last time.

One thing I certainly wish is to get to know and talk to more people here, so we can continue casual discussions, knowledge sharing etc. on other platforms as well (e.g. Twitter) after the course is over.

Looking forward to getting deeply overwhelmed :joy:

3 Likes

Hello everyone,

So, as planned, I have started publishing my blog posts on Medium :smile:. It’s really a great feeling to have your own content read by people around the world.

This is the first of many upcoming blog posts I will be sharing:
https://towardsdatascience.com/cnn-part-i-9ec412a14cb1

Let me know what you think. :slight_smile:

Also, I have been writing posts about the basics of machine learning and neural networks on my GitHub page,
mandroid6.github.io.
But I guess Medium makes it easier to publish and spread the content.

What do you think?

Thinking the same! :smile:
I have always felt that online courses like those on Coursera and Udemy lack the human interaction component, but the style of teaching by @jeremy & @rachel and the availability of a platform like this forum is a gift for us students.

Trying to enhance the communication and networking ability in various ways is another key learning for all of us. :slight_smile:

2 Likes

I had a 2nd go through the pascal notebook using the Google Colaboratory, thanks to @sourabhd 's post.

Every single step written in the notebook is very important from a software engineering perspective, in addition to deep learning:

  1. List comprehensions
  2. Dictionaries are used heavily throughout; I also finally understood the defaultdict use case
  3. Will start using pathlib across all projects
  4. Constants instead of raw strings, since we get tab-completion and can’t mistype; this is pretty basic, but I never paid much attention to it before
  5. Efficiently using Visual Studio Code to read and understand library source code whenever a doubt arises; the most useful tool for working with any project repo
  6. The Python debugger, which existed but went unnoticed earlier; pdb.set_trace() is really great for stepping through an NN method call flow
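To make a few of these concrete, here is a minimal sketch (not from the course notebooks; the paths and toy data are hypothetical) showing constants, pathlib, defaultdict, a list comprehension, and where the debugger hook goes:

```python
from collections import defaultdict
from pathlib import Path

# 4. Constants instead of raw strings: tab-completion catches typos early.
IMAGES = 'images'
ANNOTATIONS = 'annotations'

# 3. pathlib composes paths with '/' instead of string concatenation.
PATH = Path('data/pascal')            # hypothetical dataset layout
img_path = PATH / IMAGES

# 2. defaultdict removes "check if key exists" boilerplate when grouping,
#    e.g. collecting all annotations per image id.
annos = defaultdict(list)
for img_id, label in [(1, 'cat'), (1, 'dog'), (2, 'bird')]:
    annos[img_id].append(label)

# 1. List comprehension: build a list in one readable expression,
#    here the ids of images with more than one annotation.
multi_object_ids = [i for i in annos if len(annos[i]) > 1]

# 6. Drop into the debugger at any point with:
#    import pdb; pdb.set_trace()
```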

A lot more to learn and understand before next week’s lesson. Excited to discover more content and projects related to deep learning! :sunny: :smile:

3 Likes

This has been for me the TL;DR for lecture 1!
Simple things, but things that I was not using (pathlib, constants, defaultdict especially).

1 Like

Well, apologies for a slightly longish post. Now that I have put a disclaimer, …

I have been an avid follower of fast.ai since part 1. However, I started focusing on the lectures and learning the intricacies with the part2 v1 course. After being an international fellow for it, I realised I was able to spend 10 hours in only a few weeks, not every week. It was difficult to make that much time, and I have immense respect for those who manage it while working a full-time job.

Exactly a month ago, on the 22nd of February, it struck me - why shouldn’t I take up this course in person? I immediately applied to the programme and contacted the Data Institute about my candidature, since I had ~20 days to receive admission, apply for a US visa, figure out my office commitments etc. In fact, a week later I gave up on my in-person plans. I’ll take this moment to specially mention @Moody, who was instrumental in inspiring me to put my effort into the visa and office commitments regardless of the admission result. She said “don’t hope for a miracle. Give your best efforts and activate the miracle.” A couple of days later, @jeremy posted about the study hours at the Data Institute and I figured this was my best chance to take the leap of faith against the visa timelines etc., so that I could fly down, focus only on deep learning for the next 45 days and collaborate with all the amazing peers here.

I spoke to the managers at my workplace and they granted me a sabbatical. I applied for the visa and was immediately granted one. I booked my flight tickets before even completing my visa interview and, guess what, I received my passport a day before my flight. I pinged a couple of my friends in SF and they are ready to host me at their place. Wow. It felt as if the jigsaw puzzle was coming to life one piece after another, and everything just fell into place. Today is the 22nd of March, exactly a month since I started this endeavour, and everything seems set!

Right now, I’m writing this sitting in a small airport in India, waiting for my flight to travel over the oceans - 13000km all the way to San Francisco! Just for this course and learning environment. :slight_smile: I’m so happy I’m able to make it to the campus. I wouldn’t have done any of this had it not been for Jeremy, @rachel, their team at USF and all the students’ efforts on the forums. You people truly make it a wonderful place to learn.

Now that the logistics have been figured out, I’ve made a few plans to make the most of this course.

  • I plan to regularly attend the study hours, learn with the peers and work on Jeremy’s coding exercises. :wink:
  • I plan to write about my deep learning work on my blog. I’ve only written about my previous ML experiences and hackathons but it’s time I write more on the DL aspects.
  • I plan to become much more active and involved in the forums and help as many students as I can to the best of my ability. I’m sure I’ll make mistakes but hey, “learning is free” in this process!
  • Work on a couple of high impact problems. I’d like to speak to as many peers as I can and take up a long-term project. Like a capstone. I personally think this is something that’s missing in fast.ai. A lot of us had questions like “what next after finishing class exercises?” Enforcing a capstone project wouldn’t be a bad idea IMHO. @All, any thoughts on this?
  • Most importantly, have fun along the way.

I look forward to meeting all of you soon.

Phani.

28 Likes

Hats off to you :star_struck: @binga!
I mean WOW, I feel so excited and happy just knowing that someone has done this!

Really a big leap of faith you took here and definitely it’s going to pay off. :sunglasses:

Now even I wish that I had the guts :thinking: and initiative to do something like this :crazy_face:

3 Likes

Now, that’s a bold and fun move. :clap: :cake: :champagne:

Congrats on getting all those logistics sorted! Looking forward to seeing you here at USF :slight_smile:

7 Likes

After today’s lecture (lesson 9) it’s clear what pace is going to be maintained throughout this course. :crazy_face:
Hence it is important to start working on the exercises and readings from day one so we can keep up.

Below is a rough list of all possible readings and resources for this week:

Research Papers:

  1. YOLO - https://pjreddie.com/media/files/papers/YOLOv3.pdf
  2. SSD - https://arxiv.org/pdf/1512.02325.pdf
  3. RetinaNet - https://arxiv.org/abs/1708.02002
  4. MSC-MultiBox - https://arxiv.org/abs/1412.1441

Related Articles and Videos:

  1. Understanding SSD for real time object detection -
    https://towardsdatascience.com/understanding-ssd-multibox-real-time-object-detection-in-deep-learning-495ef744fab
  2. Understanding Anchors through Excel -
    https://docs.google.com/spreadsheets/d/1ci7KMggF-_4kv8zRTE0B_u7z-mbrKEzgvqXXKy4-KYQ/edit?usp=sharing
  3. Spatial Transforms -
    http://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
  4. RCNN CS231n -
    https://youtu.be/nDPWywWRIRo?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
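The anchor-box idea from the spreadsheet in item 2 can also be sketched in a few lines of numpy. This is a toy illustration (not the course’s actual code): a 4x4 grid of square anchors over a unit image, one per cell, in (x, y, w, h) form:

```python
import numpy as np

n = 4  # grid size: n*n anchor boxes

# Cell centres at (i + 0.5)/n along each axis, e.g. 0.125, 0.375, ...
offsets = (np.arange(n) + 0.5) / n
centres = np.array([(x, y) for x in offsets for y in offsets])

# One square anchor per cell, side length 1/n of the image.
sizes = np.full((n * n, 2), 1 / n)

# Each row is (centre_x, centre_y, width, height).
anchors = np.concatenate([centres, sizes], axis=1)
print(anchors.shape)  # (16, 4)
```

Real detectors like SSD and RetinaNet extend this with multiple aspect ratios and scales per cell, and grids at several resolutions.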

Important Additional Readings:

  1. Understanding cyclic learning rate -
    https://arxiv.org/abs/1506.01186, http://forums.fast.ai/t/understanding-use-clr/13969
  2. Utilizing the efficiency of pandas as suggested by @binga in his notebook -
    https://gist.github.com/binga/336258dd5965e77df6b8744b87154164, https://tomaugspurger.github.io/modern-1-intro.html
  3. Pathlib understanding -
    http://pbpython.com/pathlib-intro.html
  4. Great resource to understand VAEs -
    https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
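For item 1 above, the triangular schedule from Smith’s cyclical learning rate paper is simple enough to sketch directly (a minimal stand-alone version; the function name and hyperparameters here are just for illustration):

```python
import math

def triangular_lr(iteration, step_size, base_lr, max_lr):
    """Triangular CLR from Smith (2015): the learning rate ramps linearly
    from base_lr up to max_lr and back down, once every 2*step_size iterations."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)  # 1 at cycle edges, 0 at peak
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# One full cycle with step_size=100: lr rises for 100 steps, falls for 100.
print(triangular_lr(0, 100, 0.001, 0.01))    # 0.001 at the start
print(triangular_lr(100, 100, 0.001, 0.01))  # 0.01 at the peak
```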

This list is in no manner exhaustive, so please add any additional readings/resources you find useful. :slight_smile:

Super charged after today’s lesson :star_struck:

What do you suggest our approach should be with respect to other video resources like the CS231n lecture above? They are great, but they require a time investment that could also be spent implementing the models taught in today’s lesson. @jeremy

12 Likes

If you have some extra time, these 3 papers would be a great starter kit for the object detection discussed in today’s lecture:

1 Like

So here I am, continuing my plan to post blogs every week on topics covered in or related to the lessons. Though this one is unrelated to today’s or last week’s lesson, let me know your thoughts :slight_smile:

Convolutional Neural Network - II

1 Like

Would be really interesting to see professional grade equivalents of Jeremy’s notebooks in tf+keras.

1 Like

@snagpaul Yeah, I agree, but at this pace it’s going to be tough! Today’s lecture was quite packed by itself! I’m very intrigued by feature pyramids, though… Really looking forward to lectures 13-14; as usual, those will be the best ones!
From my point of view (working in computer vision), today’s lecture has been the best of the PyTorch timeline so far! SSD + YOLO easily explained, debugged and implemented in 2.5 hours… Amazing.

Really envious of all of the people that are following in person! And, BTW, congratulations @binga for your incredible efforts!

I have been working through yesterday’s lesson and writing my own notes alongside the notebook. I’m getting some sense of satisfaction at the conceptual level following this strategy :smile:

I have already worked through the notebook at my own pace and am able to absorb what is being done and how, but I’m unable to reproduce it independently yet.

I have read @radek’s and other people’s posts about how much effort and time it takes to reach that level of confidence, and I still find myself thinking about how to get there a bit faster. :thinking:

Any pointers in addition to practice that you can suggest? @radek

“Every day I sit on a sofa toward my dream”

2 Likes

is all there is

3 Likes

Found an interesting collection of datasets for deep learning:

2 Likes