Reading + Peer Review Group

Hi All,

I’m interested in putting together a remote reading group and peer review group. If you’re running experiments or developing project proposals, abstracts, posters, or papers, and would like feedback on your work, please let me know you’re interested! I hope it will be a safe space and a good opportunity to get feedback from peers before putting your work out into the world. If there’s enough interest, I was thinking of a Slack channel or something similar.

20 Likes

It would be great to have such a group in the fast.ai community.

1 Like

@matthewbay Thanks for the initiative!
May I suggest keeping this thread as the discussion and organization point rather than a Slack community? Those communities tend to be brittle in my experience.

I have been working on paper summaries, mostly for the papers that I have enjoyed reading. I’d love to help with the reading group :slight_smile:

4 Likes

Awesome! Yeah, that may make sense. I agree re: Slack communities, in retrospect.

1 Like

Yup, Slack channels tend to kill these communities; using the forum can work great though.

4 Likes

Great idea! Over the last few days I have been implementing (or trying to implement) the Unsupervised Data Augmentation (UDA) and MixMatch papers. I still cannot share the code as I am doing it at work, but I should be able to soon enough and would love to get a few eyes on it.

What I have got so far (keeping in mind that I suspect everything will be much easier with the next release of fast.ai and when part 2 of the course is made public):

  1. Just completed a working version of the basic UDA cycle, which should be usable with any classification task that has some kind of data augmentation (see the first sketch after this list).
  2. Got midway through a working version of Training Signal Annealing (TSA, from the UDA paper); currently missing a modified version of the cross-entropy loss. The main challenge here is finding a way to implement it so that it also applies to the Mixup loss, or a modified version of it (I hope to run some experiments with MixMatch and UDA at the same time). There’s a second sketch after this list.
  3. Ran a few experiments on CIFAR-10 with 250 and 4000 labels. So far it looks like UDA helps make training more stable and squeezes out 1-2 percentage points of improvement in error rate (though I am still miles away from the results in the paper: I am around 40% error rate at the moment with 4000 labels). It also seems to be a good regularisation technique.
  4. Halfway through refactoring the code so that I can reuse all the supervised training code that is not specific to UDA for MixMatch and other experiments.
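
Since I can’t share the actual code yet, here’s a minimal sketch of what the basic UDA step in point 1 looks like in plain PyTorch. This is illustrative only: names like `uda_loss` and the `lambda_u` weight are made up, and my real version goes through the fast.ai training loop.

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_lab, y_lab, x_unlab, x_unlab_aug, lambda_u=1.0):
    # Supervised cross entropy on the small labeled batch.
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    # Predictions on the clean unlabeled batch act as fixed targets,
    # so no gradient flows through them.
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlab), dim=-1)

    # Consistency term: KL divergence between predictions on the
    # clean and the augmented versions of the same unlabeled inputs.
    log_p_aug = F.log_softmax(model(x_unlab_aug), dim=-1)
    unsup_loss = F.kl_div(log_p_aug, p_clean, reduction='batchmean')

    return sup_loss + lambda_u * unsup_loss
```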
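And here’s my reading of the TSA masking from point 2, again just a sketch. The schedule constants are my paraphrase of the paper’s linear/log/exp schedules, and the part my implementation is still missing is making this play nicely with the Mixup loss.

```python
import math
import torch
import torch.nn.functional as F

def tsa_cross_entropy(logits, targets, step, total_steps, n_classes,
                      schedule='linear'):
    # Annealed threshold eta(t) grows from 1/K to 1 over training.
    t = step / total_steps
    if schedule == 'linear':
        alpha = t
    elif schedule == 'log':
        alpha = 1 - math.exp(-t * 5)
    else:  # 'exp'
        alpha = math.exp((t - 1) * 5)
    eta = alpha * (1 - 1 / n_classes) + 1 / n_classes

    # Mask out examples the model already predicts correctly with
    # probability above the threshold, so they stop contributing signal.
    probs = F.softmax(logits, dim=-1)
    p_correct = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    mask = (p_correct < eta).float()

    per_example = F.cross_entropy(logits, targets, reduction='none')
    # Guard against steps where every example gets masked out.
    return (per_example * mask).sum() / mask.sum().clamp(min=1.0)
```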

What next?

  1. Finish implementing TSA.
  2. Try to replicate the ablation studies in the UDA paper (there’s no way in hell I will manage to use AutoAugment, so I will have to try simple cropping and flipping and see if I can get to 16-17% error rate).

Challenges:

  • AutoAugment: from what I understand, it requires far too much computing power for me to run, and UDA seems to rely heavily on its augmentation policies.
  • To make good use of unsupervised training I need large batch sizes for the unlabeled examples, and I quickly run into GPU memory problems with anything larger than a WideResNet 28-2 (I also suspect I have a memory leak somewhere).
  • No mention of hyperparameter tuning in the paper, so some experimentation will be needed.

To address the memory problems, I am also thinking of splitting the unsupervised and supervised training into different cycles, NoGAN-style, keeping the semi-supervised training for the last bit.
Not sure if I’ll have the time though (I am still doing all of this at work, after all).
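
For what it’s worth, the rough shape of what I mean is below. This is an untested sketch, not code from my repo, and the unlabeled loader yielding (clean, augmented) pairs is an assumption about how I’d set it up:

```python
import torch
import torch.nn.functional as F

def train_in_cycles(model, opt, labeled_dl, unlabeled_dl,
                    sup_epochs=5, unsup_epochs=5):
    # Phase 1: plain supervised warm-up on the small labeled set.
    for _ in range(sup_epochs):
        for x, y in labeled_dl:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

    # Phase 2: consistency-only training, so the large unlabeled
    # batches never share GPU memory with the labeled ones.
    for _ in range(unsup_epochs):
        for x, x_aug in unlabeled_dl:  # yields (clean, augmented) pairs
            opt.zero_grad()
            with torch.no_grad():
                p_clean = F.softmax(model(x), dim=-1)
            loss = F.kl_div(F.log_softmax(model(x_aug), dim=-1),
                            p_clean, reduction='batchmean')
            loss.backward()
            opt.step()

    # Phase 3 (not shown): a short semi-supervised phase at the end,
    # combining both losses on smaller batches.
```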

5 Likes

Fantastic idea, @matthewbay. I’d be interested in taking part and contributing to such a group.

Awesome idea! I’m still a newbie in the fast.ai community and I would love to contribute. If anyone wants to team up to implement an algorithm from a specific paper, I’m your man!

1 Like

So it kind of looks like the cat is out of the bag regarding the Slack channel; there are ~50 people over there now. We’ll give it a shot and migrate back here if there isn’t movement. I do agree that this format might be better; my initial thought was that Slack might be better for chatting about articles. Anyway, here’s the link for those who would like to join: https://join.slack.com/t/mlpeerreview/shared_invite/enQtNjYwMjE2NDU4ODA2LTRlNGU1Njc1ZjNkOGJiMDJkZWVkNzk3MDMzNmIzNzUwOTQyNjg3MDhkNGU1MTIxY2Y2MGRiNDk5Yjg5NDlmOTA

1 Like