Good readings 2019

Exploring Randomly Wired Neural Networks for Image Recognition

Paper:

And a good read:

1 Like

Efficient Learning of Data Augmentation Policies
1000x Faster Data Augmentation


8 Likes

Selfie: Self-supervised Pretraining for Image Embedding

2 Likes

This also seems interesting: "Text-based Editing of Talking-head Video"

https://www.ohadf.com/projects/text-based-editing

1 Like

Very cool :slight_smile: Btw, this is the associated paper which generated these mel spectrograms

2 Likes

Nothing to implement here, but some things worth thinking about. I posted about it on the forum already (without much response), but now this paper was picked up in Andrew Ng's weekly newsletter, so maybe more people will think it's worth having a look:

Hi, this is an excellent thread!

I have a ton of interesting papers that I want to go through, but I hesitate to post them so as not to flood the thread.

I was wondering, however, if anyone else is interested in domain adaptation (supervised or unsupervised). I am currently focusing on this area, and for that reason I collected the latest CVPR papers on it that looked promising. I am going over them as we speak, but I would love to work with others on trying to implement some of their ideas.

Let me know if this sounds interesting to any of you and we can maybe do a working group!

P.S. If the category of domain adaptation is interesting for this wiki, let me know and I will post my review so far.

Kind regards,
Theodore.

1 Like

Hello @Gabriel_Syme, thank you for this post. Looking forward to reading your review findings on domain adaptation. Thanks, Hari

Sure!! Domain adaptation could be interesting for someone, so please feel free to post the papers you liked most, possibly the most recent ones, and add your review as well. If they get many "likes" we will put them in the wiki.

1 Like

At the ICML workshop "Climate change: How can AI help?",

Andrew Ng will speak on "Tackling climate change challenges with AI through collaboration", livestreamed at 9:45 Pacific time!

2 Likes

I already posted the related paper in another thread, but I'll repost it here…

1 Like

Scheduled speakers for the workshop:
https://icml.cc/Conferences/2019/Schedule?showEvent=3507

Edit: the link to the recordings that I posted previously no longer works.

Hey, have any of you seen this SciHive Twitter? It's a new, free, open-source service (I'm not connected in any way) that lets you read arXiv papers and highlight passages, comment, ask questions, etc. It looks really cool.

It has some nice features too, like hovering over an acronym to see what it stands for, or hovering over a reference to see the paper and its name. I think it'd be incredible to have the papers we all read as individuals collectively annotated with questions, answers, additional resources, etc. Let me know if there's a similar service you already use for this. Cheers.

3 Likes

Winning solution for some of the FGVC challenges at CVPR 2019, plus SOTA on Stanford Cars.

http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_Destruction_and_Construction_Learning_for_Fine-Grained_Image_Recognition_CVPR_2019_paper.pdf

2 Likes

New SOTA on ImageNet… :open_mouth: 85.4% when pretraining on Instagram and fine-tuning on ImageNet.
Edit: I guess it's from last year, but they just published the models?
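
If these are the Instagram-pretrained ResNeXt WSL models (my assumption, since the post doesn't name them), they can be pulled from torch.hub; the repo and model names below come from Facebook's public release and may change:

```python
import torch

# Assumption: the post refers to Facebook's weakly-supervised (WSL) ResNeXt models
# released via torch.hub; repo/model names below reflect that release.
model = torch.hub.load('facebookresearch/WSL-Images', 'resnext101_32x48d_wsl')
model.eval()

# Dummy forward pass: one 3x224x224 image (ImageNet-normalised in a real pipeline)
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```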

An awesome visual introduction to NumPy: https://jalammar.github.io/visual-numpy
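
For anyone who wants to try the ideas while reading, this is the kind of element-wise arithmetic, broadcasting and aggregation the article visualises (a tiny sketch of my own, not code from the article):

```python
import numpy as np

data = np.array([1, 2, 3])          # a 1-D array
ones = np.ones(3)

print(data + ones)                  # element-wise addition -> [2. 3. 4.]
print(data * 1.6)                   # scalar broadcast      -> [1.6 3.2 4.8]

table = np.array([[1, 2], [3, 4]])  # a 2-D array (matrix)
print(table.max())                  # aggregate over everything -> 4
print(table.max(axis=0))            # aggregate per column      -> [3 4]
```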

5 Likes

This fellow is a master in the art of visual information display. You can “read” and comprehend the article almost without even paying attention to the words. Thanks for posting, @Shubhajit!

1 Like

MMDetection: Open MMLab Detection Toolbox and Benchmark

MMDetection is an object detection toolbox that contains a rich set of object detection and instance segmentation methods, as well as related components and modules. The authors claim that this toolbox is by far the most complete detection toolbox…
Either way, the toolbox and benchmark provide a good starting point for reimplementing existing methods and developing your own detectors.
Worth taking a look :wink:
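
For a feel of how little code single-image inference takes, here is a sketch based on the high-level API described in the MMDetection README at the time; the config and checkpoint paths are placeholders you'd swap for files from the model zoo:

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder paths -- pick any config/checkpoint pair from the MMDetection model zoo
config_file = 'configs/faster_rcnn_r50_fpn_1x.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x.pth'

# Build the model from the config and load the pretrained weights
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run inference on one image; for pure detectors the result is a per-class list
# of [x1, y1, x2, y2, score] arrays
result = inference_detector(model, 'demo.jpg')
print(len(result))  # number of classes the detector knows
```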

3 Likes

The Matrix Calculus You Need for Deep Learning, by Terence Parr and Jeremy Howard (revised, v3)
Abstract: “This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks. We assume no math knowledge beyond what you learned in calculus 1, and provide links to help you refresh the necessary math where needed. Note that you do not need to understand this material before you start learning to train and use deep learning in practice; rather, this material is for those who are already familiar with the basics of neural networks, and wish to deepen their understanding of the underlying math. Don’t worry if you get stuck at some point along the way—just go back and reread the previous section, and try writing down and working through some examples. And if you’re still stuck, we’re happy to answer your questions in the Theory category at this http URL. Note: There is a reference section at the end of the paper summarizing all the key matrix calculus rules and terminology discussed here. See related articles at this http URL
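
As a small taste, here is the single-neuron example the paper builds toward, restated in compressed form (my own summary, so see the paper for the careful derivation). With

$$u = \mathbf{w} \cdot \mathbf{x} + b, \qquad y = \max(0, u),$$

the chain rule gives

$$\frac{\partial y}{\partial \mathbf{w}} = \frac{\partial y}{\partial u}\,\frac{\partial u}{\partial \mathbf{w}}, \qquad \frac{\partial u}{\partial \mathbf{w}} = \mathbf{x}^\top, \qquad \frac{\partial y}{\partial u} = \begin{cases} 0 & u \le 0 \\ 1 & u > 0, \end{cases}$$

so the gradient with respect to the weights is $\mathbf{x}^\top$ when the neuron is active and $\mathbf{0}^\top$ otherwise.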

MMDetection is really something…I usually start with Torchvision to get something basic running, but then switch to either MMDetection or maskrcnn-benchmark when I want to improve results.
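
For anyone curious what "something basic" with Torchvision looks like, a minimal sketch (assuming torchvision 0.3+, which added the detection models):

```python
import torch
import torchvision

# Pretrained Faster R-CNN with a ResNet-50 FPN backbone (COCO classes)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Inference expects a list of CHW float tensors scaled to [0, 1]
image = torch.rand(3, 480, 640)  # placeholder for a real image tensor
with torch.no_grad():
    predictions = model([image])

# Each prediction is a dict with 'boxes', 'labels' and 'scores'
print(predictions[0]['boxes'].shape, predictions[0]['scores'][:5])
```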