Study group Polska

@sayko not to discourage you but have you seen this: https://www.alexirpan.com/2018/02/14/rl-hard.html
I had this urge as well, but after reading this I decided to wait a year or two and watch the play between OpenAI and DeepMind from the bench :).

1 Like

I’ll read it over the weekend, thanks! I also have this on my reading list:

I really liked David Ha’s “paper”:


and the Large-Scale Study of Curiosity-Driven Learning paper:

1 Like

This is an interesting talk from Yann LeCun on the history and future of DL; he also talks about the role of RL in the mix. Spoiler alert.

He compared DL to a cake where the base is self-supervised learning, the cream is supervised learning, and the cherry on top is RL.

For the RL part, fast forward to 31:50.

@sayko @Blanche @Michal_w @tillia @piotr.czapla @radek @Gaurav85 @Emsi @wojtekcz and everyone who wants to join us :slight_smile:

Our hangout video call is today at 20:00. See you all there! Join us using this link

1 Like

Thank you @Blanche @tillia for the nice chat today. :slight_smile: Next week we will review all the great projects you mentioned today :slight_smile:

Too bad I couldn’t join you guys tonight. I was travelling :frowning:

Yeah it was a good meetup. Next week same day same time :slight_smile:

Sorry for not being present; at the moment I’ve dived into fast.text and somehow forgot to join.
I will join you guys next time.
Btw, I’m not sure if you have seen the talk with Leslie Smith and Jeremy. The link posted by Jeremy was broken; here is the proper one: https://www.youtube.com/watch?v=6N-WUQwG1Lw

3 Likes

Good interview. Shame the audio wasn’t better.


Is anyone interested in taking part in this hackathon in Poznań on 14.12? https://www.facebook.com/events/774439852904639/ We could form a team.

1 Like

I’m in Poznan, so count me in.

1 Like

Thanks for the nice chat about DL, @Michal_w !

Did you have a hangouts meetup tonight, guys?

Yeah, we had a nice chat with Michal. Did you join later? We had a shorter-than-usual meetup tonight, sorry.
Next week we will be talking for two hours at least :slight_smile:

1 Like

I forgot :frowning: I was here almost 2 hours late

1 Like

Next week I’ll ping everyone before the meetup :slight_smile:

2 Likes

Hi,
anyone playing with the Human Protein Atlas competition dataset (from Kaggle) on GCP? On a “standard” machine - 8 vCPUs, 52 GB RAM + 1x Tesla P4 - I get a kind of CUDA “zaMałoMemoryException” (“za mało” = “not enough”) :wink: when I try to use larger images to train my model (I’m starting from size 128, then training the same model with larger images). The problem occurs with ResNet50 (I think I know why ;-)).
Do you have any experience with a different configuration? (I’m considering using gs to store the dataset + model, and a machine with 2x Tesla P4 - not sure if it is “the best solution”.)

To be clear - I’m looking for the optimal “cost/efficiency” solution :slight_smile:
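For context, this is roughly what I mean by starting at size 128 and then training the same model on larger images, as a minimal and untested sketch against the fastai v1 API (ImageDataBunch.from_folder / create_cnn). The path, batch sizes and epoch counts are placeholders, and I’m using from_folder just as an example loader (the competition labels actually come from a CSV):

```python
# Rough sketch, not tested: fastai v1-style progressive resizing, starting at
# 128 px and retraining the same learner on larger images with a smaller bs.
# Dataset path, batch sizes and epoch counts are placeholders.
from fastai.vision import *

path = Path('data/protein')   # wherever the Kaggle data lives (placeholder)

def make_data(size, bs):
    """Rebuild the DataBunch for a given image size / batch size."""
    return (ImageDataBunch.from_folder(path, ds_tfms=get_transforms(),
                                       size=size, bs=bs)
            .normalize(imagenet_stats))

learn = create_cnn(make_data(size=128, bs=64), models.resnet50)
learn.fit_one_cycle(5, 1e-3)             # train at 128 px first

for size, bs in [(256, 32), (512, 8)]:   # grow the images, shrink the batch
    learn.data = make_data(size, bs)     # swap the larger-image data in
    learn.fit_one_cycle(3, 1e-4)
```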

The P4 has 8 GB of memory, the K80 12 GB, and the P100 and V100 both 16 GB.

About the “out of memory” effect :slight_smile:

So far the only things that worked for me:
– decrease bs
– decrease the image size
– use a lighter model like ResNet34

Also, sometimes it seems most important to get through the first steps and then increase your GPU demand gradually.

For instance, I couldn’t train a model at bs=32; I was always getting “out of memory”. I started from bs=4 and increased it 8 > 16 > 32 > 64, and eventually I was able to train at bs=64. But this was on fastai v2, so I presume it had to be some bug :frowning:
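Roughly what I mean by increasing gradually, as an untested sketch using the fastai v1 API; the path, image size and bs schedule are placeholders, and a CUDA out-of-memory error surfaces as a RuntimeError in PyTorch:

```python
# Rough sketch of the "start small, grow bs gradually" idea (fastai v1 API).
# Path, image size and bs schedule are placeholders.
import torch
from fastai.vision import *

path = Path('data/protein')

def try_one_epoch(bs, size=128, arch=models.resnet50):
    data = (ImageDataBunch.from_folder(path, ds_tfms=get_transforms(),
                                       size=size, bs=bs)
            .normalize(imagenet_stats))
    learn = create_cnn(data, arch)
    learn.fit_one_cycle(1, 1e-3)

for bs in [4, 8, 16, 32, 64]:        # grow the batch size step by step
    try:
        try_one_epoch(bs)
        print(f'bs={bs} fits on this GPU')
    except RuntimeError:             # CUDA "out of memory" is a RuntimeError
        print(f'bs={bs} ran out of memory')
        torch.cuda.empty_cache()     # release cached blocks before giving up
        break
```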

M

I ended up with bs=4 and out of memory at size 512 (ResNet50) :wink:
Increasing the GPU demand gradually didn’t work. I know that ResNet34 is lighter, but I’m looking for the most efficient (AND affordable) way to use ResNet50 :slight_smile:
At the moment the V100 and P100 are too expensive to become my “just for fun” playground.
I’m considering 2x K80 vs 2x P4. I know the K80 configuration has more memory, but on the other hand (according to the specs I’ve found on the internet) it is less efficient than the P4.
Did you (or anyone) compare them?
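Just to be clear about what two cards would buy me: as far as I understand, the memory does not pool into one big pot. With plain data parallelism each GPU holds its own full copy of ResNet50 plus its share of the batch, so a second card mainly gives a bigger effective batch size, not room for bigger images. Something like this minimal PyTorch sketch, with torchvision’s ResNet50 standing in for whatever model is actually used:

```python
# Minimal sketch: splitting each batch across two GPUs with nn.DataParallel.
# torchvision's ResNet50 is just an example model here.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)
if torch.cuda.device_count() > 1:
    # Replicates the model on every visible GPU, scatters each batch across
    # them, and gathers the outputs back on GPU 0.
    model = nn.DataParallel(model)
model = model.cuda()
```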