ICLR 2017 live stream on FB


I have started to watch the recordings. I can highly recommend the talk about

Understanding deep learning requires rethinking generalization
https://www.facebook.com/iclr.cc/videos/1710657292296663/ (53 min)

The experiments show that well-known architectures (AlexNet, Inception, etc.) can be trained on randomly labeled or even completely random images and still drive the training loss to almost 0. Deep models have enough capacity to memorize any kind of structure, even pure noise. So they can perfectly fit (overfit) the training data, yet when trained on real data they still, for some reason, generalize well to the validation set - which the classical theory of generalization doesn't explain.
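The random-label experiment is easy to reproduce at toy scale. Here is a minimal sketch (my own, not the paper's setup): a tiny two-layer network with far more parameters than samples memorizes completely random labels on random inputs, driving training accuracy toward 1 even though there is no signal to learn.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 32, 20, 64                          # samples, input dim, hidden units
X = rng.standard_normal((n, d))               # random "images"
y = rng.integers(0, 2, size=n).astype(float)  # random labels: nothing to learn

# Overparameterized: d*h + h = 1344 weights for only 32 samples.
W1 = rng.standard_normal((d, h)) * 0.1
W2 = rng.standard_normal(h) * 0.1
lr = 0.5

def forward(X):
    hidden = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid output
    return hidden, p

# Plain gradient descent on binary cross-entropy.
for step in range(5000):
    hidden, p = forward(X)
    grad_logits = (p - y) / n                 # d(BCE)/d(logits)
    grad_hidden = np.outer(grad_logits, W2) * (1.0 - hidden ** 2)
    W2 -= lr * hidden.T @ grad_logits
    W1 -= lr * X.T @ grad_hidden

_, p = forward(X)
train_acc = np.mean((p > 0.5) == y)
print(f"train accuracy on random labels: {train_acc:.2f}")
```

The point of the toy version is the same as in the talk: capacity alone lets the network fit arbitrary labels, so low training loss by itself tells you nothing about generalization.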

I think this is a useful fact to keep in mind when I build and evaluate my models.

The preceding talk is interesting as well - it is about what we can learn from linear models.