Part 1, online study group

People are mostly on Lectures 1, 2, 3, and 4. During the meetup, everyone gives a short summary of the progress they have made during the week (lecture, personal project, or group project) and we discuss interesting points, so different lectures get covered. Which lecture are you doing now?

I would like to join, just finished watching lecture 1 and part of lecture 2 on YouTube. My time in front of a computer is limited outside of work, so it would be nice to be in a group so I can push myself.

I just got done building a PC with a GPU (GTX 1060) so I can run some DL. Currently it has Windows 10 Pro, and I was going to install PyCharm, Anaconda, and PyTorch. I’ve got both a 512 GB and a 1 TB NVMe drive. I’d like to save money and use that system for this course if at all possible. It would be great if someone could message me a good set of instructions for that. Otherwise I’m going to assume I need to use one of the server services mentioned in the course.
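For reference, a hedged sketch of the usual conda-based setup for this course circa late 2019 (package and CUDA versions here are assumptions; check the official PyTorch and fastai install pages for your system):

```
conda create -n fastai python=3.7     # fresh environment
conda activate fastai
conda install -c pytorch pytorch torchvision cudatoolkit=10.1   # CUDA build for the GTX 1060
conda install -c fastai fastai        # fastai v1, as used in the course
```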

My education is in physics, but my day job is as a control systems engineer. I’ve got a few specific domains I’m interested in applying DL to.


Sounds great! I use Google Colab, because it is free and its GPU is enough for practising the first lessons :slightly_smiling_face: What domains are you interested in?

There is a meetup today at 3PM GMT.

Join Zoom Meeting:
https://zoom.us/j/226775879

The meetup is on =)

Meeting Minutes (14/12/2019)

  • Participants shared & discussed their Kaggle kernel submissions for the Kannada MNIST competition, along with Q&A
  • Discussion about the lesson1-pets notebook

Advice

  • Get it working and make a first submission before shooting for higher scores.
  • Optimize for iterating faster so that you can test your ideas quickly.

Common mistakes to avoid for Kannada MNIST (a code sketch pulling these together follows the Evaluation list)

DataBunch creation

  • Not setting the batch size
  • Not normalizing with `imagenet_stats` when using a pretrained model
  • Not setting the random seed

Model

  • If you are using pretrained models, start with the simplest ones, like resnet18 or resnet34, before trying larger models.

Evaluation

  • Run `learn.recorder.plot(suggestion=True)` after `learn.lr_find()` to pick the learning rate.
  • Train a bit longer (increase the epochs) if the training loss is much higher than the validation loss.
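A minimal fastai v1 sketch that avoids the mistakes above (the path and folder layout are assumptions; Kannada MNIST actually ships as CSVs, so adapt the data loading accordingly):

```python
from fastai.vision import *
import numpy as np

np.random.seed(42)  # fix the random seed so the train/valid split is reproducible

path = Path('data/kannada-mnist')  # placeholder path

# build the DataBunch with an explicit batch size and ImageNet normalization,
# since we fine-tune an ImageNet-pretrained model
data = (ImageDataBunch
        .from_folder(path, valid_pct=0.2, size=28, bs=128)
        .normalize(imagenet_stats))

# start with the smallest pretrained model before trying larger ones
learn = cnn_learner(data, models.resnet18, metrics=accuracy)

# find a good learning rate, then train
learn.lr_find()
learn.recorder.plot(suggestion=True)
learn.fit_one_cycle(4, max_lr=1e-2)  # max_lr chosen from the plot's suggestion
```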


Hello,

Great initiative! I would love to join next week. Could you please send me an invite?

Thanks!


@JanM I think you will find the next meeting in this thread as well. It’ll probably be on Saturday the 21st. The meeting might be moved to 2PM GMT, but if you come around a day earlier, all the info here should be up to date.


Hi Jan,

I updated the thread; the next meetup is this coming Saturday at 2PM GMT, and you are very welcome to join! There is also a Slack channel here.


:santa: Ho ho ho there is a meetup today at 2PM GMT! :christmas_tree:

Meeting Minutes 21-12-2019

  • New Participants
  • IMDB Movie Review Walkthrough by @msivanes
  • MoviePoster MultiLabel Classification project by @Jan
  • Tabular Data Walkthrough by @gagan

Notes

  • Use a smaller sample set before diving into the full dataset
  • Make sure you store the vocab and the fine-tuned encoder (`fine_tuned_enc`) when you create your language model (see the sketch below)
  • To view the source code of a function, append `??`, e.g. `learn.recorder.plot_losses??`
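A minimal fastai v1 sketch of that save-and-reuse pattern (the path, CSV, and file names are placeholders):

```python
from fastai.text import *

path = Path('data/reviews')                              # placeholder path
data_lm = TextLMDataBunch.from_csv(path, 'reviews.csv')  # placeholder csv
data_lm.save('data_lm.pkl')                              # saves the vocab along with the data

learn = language_model_learner(data_lm, AWD_LSTM)
learn.fit_one_cycle(1, 1e-2)
learn.save_encoder('fine_tuned_enc')                     # encoder to reuse in the classifier

# the classifier's DataBunch must share the language model's vocab
data_clas = TextClasDataBunch.from_csv(path, 'reviews.csv', vocab=data_lm.vocab)
clas = text_classifier_learner(data_clas, AWD_LSTM)
clas.load_encoder('fine_tuned_enc')
```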

Questions

  • How do you determine the layers when creating a TabularLearner?


I wanna join the meetup


@lesscomfortable I’m a little confused about how fitting works. Suppose I train the model for 3 epochs first and then for 2 more epochs. Is that equal to running 5 epochs in one go?
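For anyone reading later: with fastai’s `fit_one_cycle`, generally no, because the one-cycle learning-rate schedule is built per call from the number of epochs you pass, so it restarts each time. A runnable sketch on a small sample dataset:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)   # small sample dataset for illustration
data = ImageDataBunch.from_folder(path).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet18, metrics=accuracy)

# each call to fit_one_cycle builds its own one-cycle LR schedule across the
# epochs passed to *that* call, so the LR warms up and anneals twice here...
learn.fit_one_cycle(3)
learn.fit_one_cycle(2)

# ...which is not equivalent to a single schedule over 5 epochs:
# learn.fit_one_cycle(5)
```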

The meetup is on now! Everyone is welcome to join! :grinning:

Here is the link for the meeting: https://zoom.us/j/226775879


# Meeting Minutes 04-01-2020 (Thanks @msivanes for the inputs)

## Notes

  • New Participants
  • Walkthrough of a notebook on Yelp reviews by @msivanes, exploring how fine-tuning helps handle out-of-vocabulary (OOV) words in a language model. Words that do not appear in the WikiText corpus and are very specific to our domain are initialized with random weights, and they are learned as part of fine-tuning (see the sketch below). This was based on learning from the notebook [1] created by @pabloc. For more discussion, see [2].
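A rough fastai v1 sketch of the fine-tuning steps under which those new embeddings get learned (the path, CSV name, and hyperparameters are illustrative assumptions):

```python
from fastai.text import *

path = Path('data/yelp')                                      # placeholder path
data_lm = TextLMDataBunch.from_csv(path, 'yelp_sample.csv')   # placeholder csv

# pretrained AWD_LSTM; embeddings for tokens absent from the pretraining vocab
# start from fresh weights and are learned during fine-tuning
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)

learn.fit_one_cycle(1, 1e-2)   # fine-tune the top layer group first
learn.unfreeze()
learn.fit_one_cycle(2, 1e-3)   # then fine-tune the whole model
```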

## Advice

  • Use a smaller sample of the dataset before diving into the full dataset. This allows for faster training and quicker iteration.

## Questions

  • How do you override the 60,000-token limit on the vocab when creating a language model? (See the sketch after this list.)
  • When we freeze a model for fine-tuning, do the layers become untrainable, or the layer groups?
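On the first question: in fastai v1 the cap comes from a default `max_vocab=60000` in the text DataBunch factory methods, and to the best of my understanding it can be overridden when building the data (treat the parameter name as an assumption to verify against the docs):

```python
from fastai.text import *

path = Path('data/reviews')   # placeholder path
# raise the vocab cap above the default 60,000 tokens (assumed parameter name)
data_lm = TextLMDataBunch.from_csv(path, 'reviews.csv', max_vocab=100000, min_freq=2)
```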


We decided to rotate the presentation of the lessons among us. As you know, our meetings are informal, so this is basically like explaining the material to friends and boosting your presentation skills in a supportive environment :slightly_smiling_face: For a learner, it is one of the best ways to actively engage with the material and actually learn it better by explaining. So choose the lesson that you want to understand better yourself. The lesson recap should be short (~15-20 mins), covering the main concepts in simple language. So grab the chance and please write which lesson you would like to present :dancer: Of course, all newcomers are welcome :grinning:


I started the lessons a couple of days ago and just came across this thread! I would love to join the next session and learn from everyone. Thank you for this initiative! :slight_smile:


Just a reminder, the meetup will be held today in ~1h :grinning:


The meetup is on :grinning: