Making the most out of Part 2 v2

I had a second pass through the pascal notebook on Google Colaboratory, thanks to @sourabhd's post.

Every single step in the notebook is important from a software engineering perspective, not just a deep learning one:

  1. List comprehensions
  2. Dictionaries are used heavily throughout, and I finally understood the `defaultdict` use case
  3. I will start using pathlib across all my projects
  4. Constants instead of raw strings, since we get tab-completion and avoid typos; this is pretty basic, but I'd never paid much attention to it before
  5. Using Visual Studio Code efficiently to read and understand library source code whenever a doubt arises; the most useful tool for working with any project repo
  6. The Python debugger, which existed but went unnoticed earlier; `pdb.set_trace()` is great for stepping through an NN method's call flow
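
A minimal, self-contained sketch of the habits above (my own illustration, not code from the notebook); `data/pascal`, the filenames, and the boxes are made-up placeholders:

```python
from collections import defaultdict
from pathlib import Path

# 4. Constants instead of raw strings: tab-completion catches typos early.
IMAGES = 'images'

# 3. pathlib: the / operator composes paths cleanly across platforms.
PATH = Path('data/pascal')
img_path = PATH / IMAGES          # -> data/pascal/images (on POSIX)

# 1./2. List comprehensions and defaultdict: group bounding boxes per image
# without any key-existence checks.
annotations = [('img1.jpg', [10, 20, 30, 40]),
               ('img2.jpg', [5, 5, 50, 50]),
               ('img1.jpg', [15, 25, 35, 45])]
boxes = defaultdict(list)
for fname, bbox in annotations:
    boxes[fname].append(bbox)     # no need to initialise boxes[fname] first
fnames = [fname for fname, _ in annotations]

# 6. The debugger: uncomment this anywhere to step through a call flow.
# import pdb; pdb.set_trace()
```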

A lot more to learn and understand before next week’s lesson. Excited to discover more content and projects related to deep learning! :sunny: :smile:

3 Likes

This has been the TL;DR of lecture 1 for me!
Simple things, but things I wasn’t using (pathlib, constants, and especially defaultdict).

1 Like

Well, apologies for a slightly longish post. Now that I have put a disclaimer, …

I have been an avid follower of fast.ai since part 1. However, I started focusing on the lectures and learning the intricacies with the part 2 v1 course. As an international fellow for that course, I realised I could spend ten hours a week only on some weeks, not every week. It was difficult to carve out that much time, and I have immense respect for those who manage it while working a full-time job.

Exactly a month ago, on the 22nd of February, it struck me: why shouldn’t I take this course in person? I immediately applied to the programme and contacted the Data Institute about my candidature, since I had ~20 days to receive admission, apply for a US visa, figure out my office commitments, etc. In fact, a week later I gave up on my in-person plans. I’ll take this moment to specially mention @Moody, who was instrumental in inspiring me to keep working on the visa and office commitments regardless of the admission result. She said, “Don’t hope for a miracle. Give your best efforts and activate the miracle.” A couple of days later, @jeremy posted about the study hours at the Data Institute, and I figured this was my best chance to take a leap of faith against the visa timelines so that I could fly down, focus only on deep learning for the next 45 days, and collaborate with all the amazing peers here.

I spoke to the managers at my workplace and they granted me a sabbatical. I applied for the visa and was immediately granted one. I had booked my flight tickets before even completing my visa interview and, guess what, I received my passport a day before my flight. I pinged a couple of friends in SF, and they are ready to host me at their place. Wow. It felt as if the jigsaw puzzle was coming together one piece after another, and everything just fell into place. Today is the 22nd of March, exactly a month since I started this endeavour, and everything seems set!

Right now, I’m writing this sitting in a small airport in India, waiting for my flight over the oceans: 13,000 km all the way to San Francisco! Just for this course and learning environment. :slight_smile: I’m so happy I’m able to make it to the campus. I wouldn’t have done any of this had it not been for Jeremy, @rachel, their team at USF, and all the students’ efforts on the forums. You people truly make this a wonderful place to learn.

Now that the logistics have been figured out, I’ve made a few plans to make the most of this course.

  • I plan to regularly attend the study hours, learn with my peers, and work on Jeremy’s coding exercises. :wink:
  • I plan to write about my deep learning work on my blog. So far I’ve only written about my previous ML experiences and hackathons, but it’s time I wrote more about the DL side of things.
  • I plan to become much more active and involved in the forums and help as many students as I can to the best of my ability. I’m sure I’ll make mistakes but hey, “learning is free” in this process!
  • Work on a couple of high-impact problems. I’d like to speak to as many peers as I can and take up a long-term project, like a capstone. I personally think this is something that’s missing in fast.ai; a lot of us had questions like “what next after finishing the class exercises?” Enforcing a capstone project wouldn’t be a bad idea, IMHO. @All, any thoughts on this?
  • Most importantly, have fun along the way.

I look forward to meeting all of you soon.

Phani.

28 Likes

Hats off to you :star_struck: @binga!
I mean, WOW, I feel so excited and happy just knowing that someone has done this!

That’s a really big leap of faith you took here, and it’s definitely going to pay off. :sunglasses:

Now I wish that I had the guts :thinking: and initiative to do something like this :crazy_face:

3 Likes

Now, that’s a bold and fun move. :clap: :cake: :champagne:

Congrats on getting all those logistics sorted! Looking forward to seeing you here at USF :slight_smile:

7 Likes

After today’s lecture (lesson 9), it’s clear what pace is going to be maintained throughout this course. :crazy_face:
Hence it’s important to start working on the exercises and readings from day one to keep up.

Below is a rough list of all possible readings and resources for this week:

Research Papers:

  1. YOLO - https://pjreddie.com/media/files/papers/YOLOv3.pdf
  2. SSD - https://arxiv.org/pdf/1512.02325.pdf
  3. RetinaNet - https://arxiv.org/abs/1708.02002
  4. MSC-MultiBox - https://arxiv.org/abs/1412.1441

Related Articles and Videos:

  1. Understanding SSD for real time object detection -
    https://towardsdatascience.com/understanding-ssd-multibox-real-time-object-detection-in-deep-learning-495ef744fab
  2. Understanding Anchors through Excel -
    https://docs.google.com/spreadsheets/d/1ci7KMggF-_4kv8zRTE0B_u7z-mbrKEzgvqXXKy4-KYQ/edit?usp=sharing
  3. Spatial Transforms -
    http://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
  4. RCNN CS231n -
    https://youtu.be/nDPWywWRIRo?list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
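
As a companion to the anchors spreadsheet above, here is a tiny sketch (my own illustration, not code from the lesson) of generating a k × k grid of square anchor-box centres over a unit image, roughly the construction the spreadsheet walks through:

```python
# Centres and side length of a k x k grid of square anchor boxes
# laid over a unit image (all coordinates in [0, 1]).
def anchor_grid(k):
    step = 1.0 / k
    centres = [((i + 0.5) * step, (j + 0.5) * step)
               for i in range(k) for j in range(k)]
    return centres, step  # each box is `step` wide and `step` tall

centres, size = anchor_grid(4)   # 16 anchors of side 0.25
print(centres[0])                # (0.125, 0.125)
```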

Important Additional Readings:

  1. Understanding cyclic learning rate -
    https://arxiv.org/abs/1506.01186, http://forums.fast.ai/t/understanding-use-clr/13969
  2. Utilizing the efficiency of pandas as suggested by @binga in his notebook -
    https://gist.github.com/binga/336258dd5965e77df6b8744b87154164, https://tomaugspurger.github.io/modern-1-intro.html
  3. Pathlib understanding -
    http://pbpython.com/pathlib-intro.html
  4. Great resource to understand VAEs -
    https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf
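
For the cyclical learning rate reading above, the triangular schedule from the Smith paper (arXiv:1506.01186) fits in a few lines; this is my own sketch of the paper's formula, with made-up example bounds:

```python
import math

def triangular_clr(iteration, base_lr=1e-3, max_lr=6e-3, step_size=2000):
    """Learning rate bounces linearly between base_lr and max_lr,
    completing one full cycle every 2 * step_size iterations."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

print(triangular_clr(0))      # base_lr at the start of a cycle
print(triangular_clr(2000))   # max_lr at the peak
print(triangular_clr(4000))   # back down to base_lr
```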

This list is in no manner exhaustive, so please add any additional readings/resources you find useful. :slight_smile:

Super charged after today’s lesson :star_struck:

What do you suggest our approach should be with respect to other video resources like the CS231n lecture above? Though they are great, they require a time investment that could also be spent implementing the models taught in today’s lesson. @jeremy

12 Likes

If you have some extra time, these three papers would be a great starter kit for the object detection discussed in today’s lecture:

1 Like

So here I am, continuing my plan to post blogs every week on topics covered in, or related to, the lessons. Though this one is unrelated to today’s or last week’s lesson, let me know your thoughts :slight_smile:

Convolutional Neural Network - II

1 Like

It would be really interesting to see professional-grade equivalents of Jeremy’s notebooks in TF + Keras.

1 Like

@snagpaul Yeah, I agree, but at this pace it’s going to be tough! Today’s lecture was quite packed by itself! I’m very intrigued by feature pyramids, though… Really looking forward to lectures 13-14; as usual, those will be the best ones!
From my point of view (I work in computer vision), today’s lecture has been the best of the PyTorch era so far: SSD + YOLO explained, debugged, and implemented in 2.5 hours… Amazing.

Really envious of all the people who are following in person! And, BTW, congratulations @binga on your incredible efforts!

I have been working through yesterday’s lesson and writing my own notes alongside the notebook. I’m getting a sense of satisfaction at the conceptual level following this strategy :smile:

I have already worked through the notebook at my own pace and am able to absorb what is being done and how, but I’m unable to reproduce the same independently.

I have read posts by @radek and others about how much effort and time it takes to reach that level of confidence, and I still find myself thinking about how to get there a bit faster. :thinking:

Any pointers in addition to practice that you can suggest? @radek

“Every day I sit on a sofa toward my dream”

2 Likes

is all there is

3 Likes

Found an interesting collection of datasets for deep learning:

2 Likes

Don’t use @ mentions (people don’t like them).
Just pose a query and someone will surely respond.

Will keep that in mind. Thanks :slight_smile:

There are good and bad uses of @. In this case he was saying he liked another student’s writing, so it seems nice to at-mention that student so they know about the praise! :slight_smile:

3 Likes

So, up until today’s lecture, I was able to read the YOLO and SSD papers. I’m currently preparing a summary of them and will post it soon on Medium. :smile:

When reading papers, it’s very useful to set a goal for the reading before starting out. The goal can be to understand the model architecture or to clarify a novel concept introduced in the paper. I found this three-step strategy very effective:

  1. Read the Abstract and Introduction carefully. Just skim over all other section headings and bold highlights. Don’t even look at the equations on the first pass.
  2. Compare your initial impression of the paper against related blog posts and explanations online.
  3. Now give the paper a thorough read, working through the equations and highlighting any major points that help you achieve your goal.

I found this video by Siraj really useful.

A lot more to cover after today’s lesson. It’s really tough to learn all the neat tricks and new ideas explored today without a good grip on NLP practice.

I’m planning to dedicate one whole day to practising NLP from the basics using these resources:

Another go at the IMDB lesson (lesson 4 from Part 1) will surely accelerate learning for this week. :slight_smile:

4 Likes

I just tried something that helped me understand Part 2 v2 better: I used data augmentation!

I started watching the Part 1 v2 videos as well, and having similar things explained in two different ways helped me understand the concepts and generalise better. We can see how things have changed in the past year, why we’re doing what we’re doing now, and how we got here.

5 Likes