NIPS 2017 videos

https://nips.cc/Conferences/2017/Videos (if anybody is interested)

14 Likes

Some sessions are being live streamed here: https://www.facebook.com/nipsfoundation/

A live session just ended on “Deep Learning: Practice and Trends Tutorial” (recording available)


6 Likes

Yesterday a powerful talk was delivered by Ali Rahimi @ NIPS 2017. His 20-minute talk can be seen from the 57th minute onwards in this video:

Some of his statements which resonated with the community (and I am putting them here because I really want you all to watch his speech):

“We’re applying brittle optimization techniques to loss surfaces we don’t understand.”

“We like to say that machine learning is the new electricity. I would like to offer another analogy. Machine learning has become alchemy.”

“I miss the NIPS rigor police and wish they’d come back.”

“There is a self-congratulatory feeling in the air.”

My reflection:

Frankly, I did not understand most of the research he shared during the first half (I have to build myself up to it), but I got the message behind it. The second half was a strong call to action.

I also do not have the depth to understand the gravity of the problem he mentioned (i.e. the lack of rigor in ML). Researchers will probably share their perspectives on it in the coming weeks.

His talk was mostly aimed at researchers. But I believe we practitioners also have a role to play in making ML more rigorous and safe. I am just not sure how.

In software engineering, years of practice have made their way into established design patterns, standard architectures, idioms, and best practices. We are not there yet in AI. We are at a phase where cutting-edge research, right off the press, is making its way into products. And probably only a few people (or none?) on earth understand how they work.

I will happily work on projects which provide entertainment (style transfer?) or utility (search inside images?). But I will probably lose some sleep if I work on tech that affects people’s lives. I feel very uncomfortable with black boxes.

8 Likes

@rachel just told me I have to watch this too! She’s hoping we can write something about it.

I don’t at all agree that we can only make use of models that are amenable to theoretical scrutiny. If we create appropriate validation and test sets, use tools such as the jackknife, where appropriate, to get confidence intervals when we need them, and apply simple approaches like taking the gradients with respect to the inputs to understand the main drivers of our model, then I think we’re heading in the right direction.
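To make those last two ideas concrete, here are a couple of minimal sketches (purely illustrative; the model and data below are made up, not from any real project). First, the “gradients with respect to the inputs” idea in PyTorch:

```python
import torch

def input_gradients(model, x):
    """Gradient of the model's output w.r.t. each input feature."""
    x = x.clone().detach().requires_grad_(True)  # track gradients on the inputs
    model(x).sum().backward()                    # sum over the batch to get a scalar
    return x.grad                                # same shape as x; large magnitude = influential input

# Hypothetical toy network, standing in for a real trained model
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
x = torch.randn(8, 4)                            # 8 examples, 4 features (made up)
print(input_gradients(model, x).abs().mean(0))   # average sensitivity per feature
```

And a bare-bones jackknife confidence interval for a statistic, using a normal approximation for the interval:

```python
import numpy as np

def jackknife_ci(data, stat=np.mean, z=1.96):
    """Leave-one-out standard error plus an approximate 95% CI."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])  # statistic with each point left out
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))   # jackknife standard error
    est = stat(data)
    return est, (est - z * se, est + z * se)

est, (low, high) = jackknife_ci(np.random.randn(100) * 2 + 5)    # made-up data
print(f"mean = {est:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

Neither technique requires the model to be theoretically tractable; both just measure what it actually does.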

IMO, an over-reliance on theory held back machine learning for over a decade. I hope we don’t go back to the bad old days of SVMs.

14 Likes

Thanks @jeremy for your perspective! Looking forward to the writeup :slight_smile:

Hi Anand,
Can you please share the link? Thanks!

I shared the Facebook link above; today someone uploaded it to YouTube:

2 Likes

Thanks :slight_smile:

1 Like

@anandsaha Thanks very much for sharing!

1 Like

@anandsaha ohh my bad, I meant the link to this talk :slight_smile:

Aah, sorry. Visit https://www.facebook.com/nipsfoundation/ and keep scrolling down until you find “Live from NIPS 2017, Deep Learning: Practice and Trends Tutorial” and “Live from NIPS 2017, Deep Learning: Practice and Trends Tutorial. Part II”.

Happy watching!

Thanks a lot :slight_smile:

1 Like

Yann LeCun has written his own comments on this very aspect: https://www.facebook.com/yann.lecun/posts/10154938130592143

Quoting a few lines:

In the history of science and technology, the engineering artifacts have almost always preceded the theoretical understanding: the lens and the telescope preceded optics theory, the steam engine preceded thermodynamics, the airplane preceded flight aerodynamics, radio and data communication preceded information theory, the computer preceded computer science. Why? Because theorists will spontaneously study "simple" phenomena, and will not be enticed to study complex ones until there is a practical importance to it. Criticizing an entire community (and an incredibly successful one at that) for practicing "alchemy", simply because our current theoretical tools haven't caught up with our practice is dangerous. Why dangerous? It's exactly this kind of attitude that led the ML community to abandon neural nets for over 10 years, *despite* ample empirical evidence that they worked very well in many situations.

And Ali Rahimi’s response is so welcoming:

The problem is one of pedagogy. I'm asking for simple experiments and simple theorems so we can all communicate the insights without confusion. You've probably gotten so good at building deep models because you've run more experiments than almost any of us. Imagine the confusion of a newcomer to the field. What we do looks like magic because we don't talk in terms of small building blocks. We talk about entire models working as a whole. It's a mystifying onboarding process.

And all of us here are so glad to see fast.ai addressing this issue :slight_smile: So many people here have benefited from your incredible efforts.

9 Likes

I think we need some good books (or, even better, teachers like @jeremy) which explain the intuition first and the equations second, because once you get the intuition, the mathematical derivation is mechanical. But most scientists talk about their intuition in terms of equations, and worse, even the mathematical notation is not uniform.
So I think what is lacking is a language for talking about intuition, and the language of equations is surely not the best one.

3 Likes

Thanks for sharing this. This is a superb keynote, and quite encouraging for those of us working in the domain of software/data-driven genome analysis. I will have to share this with my colleagues first thing today.

Damn! This keynote really got my mind racing…

1 Like