State of Reinforcement Learning

(Lankinen) #1

Jeremy has taught us a lot of different ways to use deep learning to get world-class results on different kinds of tasks. I assume some of the students have heard about reinforcement learning, and maybe some even know more about it. I know the fastai library doesn’t support any reinforcement learning techniques, but I still think it is good to have a discussion about it to fully understand the state of machine learning. I also think it is something many current students should study after these lessons if they want to go deeper into AI.

  1. Can we even compare supervised learning and reinforcement learning? Or are they like supervised and unsupervised learning, which don’t even solve the same kinds of problems?
  2. Currently, RL is producing state-of-the-art results in many areas. People are getting slightly better results in areas where supervised learning models have dominated for ages. Is this a sign of RL starting to dominate these areas, or is RL just being used to fine-tune hyper-parameters? Also, in lesson 4 Jeremy said that he used a random forest to find the best learning rates, but could we get better results with RL?
  3. RL is great for games. There are also other areas with a well-defined state, action, and reward, and in those areas RL is producing massively better results. Is it plausible that supervised learning could someday be a better approach for these kinds of problems?
  4. Then there are also problems where RL doesn’t do as well as supervised learning, or at best produces the same kind of results. Is RL something we will use the way we now use deep learning almost every time instead of SVMs or other older techniques? In other words, is it plausible that the whole supervised learning area could someday become outdated?
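To make the state/action/reward framing in point 3 concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. Everything below (the environment, the hyper-parameters) is invented for illustration and has nothing to do with the fastai library:

```python
import random

random.seed(0)

# A tiny 1-D corridor: states 0..4, start at state 0, reward only at state 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Deterministic environment dynamics: clamp to the corridor ends."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < eps:
                action = random.choice(ACTIONS)          # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            # Standard Q-learning update toward the bootstrapped target.
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
```

After training, the greedy policy moves right in every state, which is all "solving" this toy problem means; the point is just that the agent learns from reward alone, with no labeled examples.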

I don’t mean that we should stop watching these amazing videos just because there might be something better in the future; I’m rather trying to understand whether RL is something worth learning after this course ends.

2 Likes

(Ilia) #2

I think we can try to adapt the library’s techniques to OpenAI tools and the recently released Facebook Horizon. I believe that supervised learning and reinforcement learning play very well together. Just remember AlphaGo or DQN: both use previous experience and cyclic replay buffers to reduce jitter and improve the quality of the system. There are also algorithms that learn optimal policies from human replays.
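The cyclic buffer mentioned above is usually called an experience-replay buffer in the DQN literature: a fixed-size store of transitions, sampled uniformly to break the correlation between consecutive experiences. A minimal sketch using only the standard library (the capacity and the dummy transitions are arbitrary illustrative values):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size cyclic buffer of (state, action, reward, next_state, done)
    transitions; the oldest transitions drop off automatically."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling decorrelates the minibatch from the current episode.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):  # overfill on purpose to show the cyclic behaviour
    buf.push(t, 0, 0.0, t + 1, False)
print(len(buf))  # → 100 (only the most recent 100 transitions are kept)
batch = buf.sample(32)
```

In a real DQN, `batch` would be fed to the network's training step; here it just demonstrates the sampling interface.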

Probably Jeremy or people from the fastai team and other study groups could help us as well.

Should we create a study group for Reinforcement Learning? Actually, it is one of my favorite things in AI and the main reason why I am studying Deep Learning.

Would be great to find people interested in RL. We could try the OpenAI Gym/Universe environments, VizDoom, or any other RL testbeds.
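Those environments all expose roughly the same loop as the classic OpenAI Gym interface: `reset()` returns an initial observation and `step(action)` returns `(observation, reward, done, info)`. The stand-in environment below is invented purely to show that loop without requiring `gym` to be installed:

```python
import random

class ToyEnv:
    """A stand-in that mimics the classic Gym interface; the dynamics are
    made up for illustration: an episode lasts exactly 10 steps, and
    action 1 earns a reward of 1.0 per step."""

    def reset(self):
        self.t = 0
        return self.t  # the observation is just the timestep here

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= 10
        return self.t, reward, done, {}

env = ToyEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])  # a real agent would choose actions here
    obs, reward, done, info = env.step(action)
    total += reward
```

Any agent written against this loop can, in principle, be pointed at a real Gym, Universe, or VizDoom environment with only the environment construction changing.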

2 Likes

(Danielh Carranza) #3

I am really interested in learning more about DRL, it would be great if we create that study group :smiley:

1 Like

(Lankinen) #4

Great to hear that! I actually just created the topic for us.

0 Likes

#5

@Lankinen @ingbiodanielh @devforfu are you guys still interested in RL Study group? I was going to start reading the second edition of the Sutton/Barto book and was wondering if people could join me in weekly discussions.

2 Likes

(Davide Boschetto) #6

For a first theoretical approach, after fastai, I’d do (and I did!) this course: https://www.youtube.com/playlist?list=PLqYmG7hTraZDNJre23vqCGIVpfZ_K2RZs

DeepMind meets UCL. It’s great quality.

3 Likes

(Lankinen) #7

I would be interested, but I have a lot of things to do. Maybe I can join sometimes if you arrange these. But do you think the book is good material? If it is, we should set a schedule for how much to read each week and then discuss it.

0 Likes

(Lankinen) #8

I created this slack channel for us a long time ago but never shared.
https://join.slack.com/t/rl-studygroup/shared_invite/enQtNTM0MTEzMzE2MTEyLTM0ODg5OTM5NGI3NzgyMWUyODdkNmQ5ODljNTY1NzdkZWM3YzQyZDRjOTI3YzQzNWRiOTI5ODk0MjUyOWQxNWI

This forum is great for longer posts, but we can discuss more random stuff there.

0 Likes

#9

@Lankinen I read the first 9 chapters of the first edition of the book (without doing the exercises) back in 2015. I really liked it. I have read the preface of the second edition, and it seems like they have really revamped and updated the content, so I think it will be a very good read.
I like the idea of Slack too. I joined the group.

The second edition is freely available and it also has a Github repo with implementations of all the algorithms described in the book.

@DavideBoschetto Thank you. This seems to be the latest one, I guess. I was actually planning on following what Karpathy wrote in his blog:

I worked through Richard Sutton’s book, read through David Silver’s course, watched John Schulmann’s lectures, wrote an RL library in Javascript, over the summer interned at DeepMind working in the DeepRL group, and most recently pitched in a little with the design/development of OpenAI Gym, a new RL benchmarking toolkit.

Well, except the last three bits :wink:

2 Likes