@sayko not to discourage you but have you seen this: https://www.alexirpan.com/2018/02/14/rl-hard.html
I had this urge as well, but after reading this I decided to wait a year or two and watch the play between OpenAI and DeepMind from the bench :).
I'll read it this weekend, thanks! I also got these on my reading list:
I really liked David Ha's "paper":
and the Large-Scale Study of Curiosity-Driven Learning paper:
This is an interesting talk from Yann LeCun on the history and future of DL; he also talks about the role of RL in the mix. Spoiler alert.
He compared DL to a cake where the base is self-supervised learning, the cream is supervised learning, and the cherry on top is RL.
For the RL part, fast forward to 31:50.
@sayko @Blanche @Michal_w @tillia @piotr.czapla @radek @Gaurav85 @Emsi @wojtekcz and everyone who wants to join us
our hangout video call is today at 20.00. see you all there! join us using this link
thank you @Blanche @tillia for the nice chat today. Next week we will review all the great projects you mentioned.
Too bad I couldn't join you guys tonight. I was travelling.
Yeah it was a good meetup. Next week same day same time
Sorry for not being present; at the moment I've dived into fast.text and somehow forgot to join… I will join you guys next time.
Btw, I'm not sure if you have seen the talk with Leslie Smith and Jeremy. The link posted by Jeremy was broken; here is the proper one: https://www.youtube.com/watch?v=6N-WUQwG1Lw
Good interview. Shame the audio wasn't better…
Is anyone interested in taking part in this hackathon in Poznań on 14.12? https://www.facebook.com/events/774439852904639/ We could form a team.
I'm in Poznan, so count me in.
thanks for the nice chat about DL, @Michal_w!
Did you have a hangouts meetup tonight, guys?
Yeah, we had a nice chat with Michal. Did you join later? We had a shorter than usual meetup tonight, sorry… Next week we will be talking for two hours at least.
I forgot. I was here almost 2 hours late.
Next week I'll ping everyone before the meetup.
Hi,
anyone playing with the Human Protein Atlas competition dataset (from Kaggle) on GCP? On a "standard" machine (8 CPUs, 52 GB RAM + 1x Tesla P4) I get a kind of CUDA "zaMałoMemoryException" ("za mało" = "too little", i.e. out of memory) when I try to use larger images to train my model (I'm starting from size 128, then training the same model with larger images). The problem occurs with Resnet50 (I think that I know why ;-)).
Do you have any experience with a different configuration? (I'm considering using gs (Google Cloud Storage) to store the dataset + model, and a machine with 2x Tesla P4; not sure if it is "the best solution".)
To be clear, I'm looking for the optimal cost/efficiency solution.
The P4 has 8 GB of memory, the K80 12 GB, and the P100 and V100 both have 16 GB.
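If you want to double-check how much memory a given card has, and how much your tensors are actually holding at any point, a tiny PyTorch snippet like this works (nothing fast.ai-specific; it only assumes torch is installed and a CUDA device is visible):

```python
import torch

# Print total and currently allocated memory for every visible CUDA device.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    total_gb = props.total_memory / 1024**3               # total device memory
    used_gb = torch.cuda.memory_allocated(i) / 1024**3    # memory held by live tensors
    print(f"{props.name}: {total_gb:.1f} GB total, {used_gb:.2f} GB allocated")
```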
About the "out of memory" error:
So far, what has worked for me:
- decrease bs
- decrease the size of images
- use a lighter model like Resnet34
Also, sometimes it looks like it's most important to get through the first steps and increase your GPU demand gradually.
For instance, I couldn't train a model at bs=32; I was always getting "out of memory". I started from bs=4 and increased it 8 > 16 > 32 > 64, and eventually I was able to train at bs=64. But this was on fastai v2, so I presume it had to be some bug.
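To make the "decrease bs / start small" advice concrete, here is a minimal sketch in plain PyTorch (not the fastai API) that halves the batch size whenever a CUDA out-of-memory error shows up. The dataset here is a fake stand-in and the 28-class multi-label setup just mirrors the protein competition; swap in your real Dataset:

```python
import torch
from torch import nn, optim
from torchvision import models
from torch.utils.data import DataLoader, TensorDataset

def epoch_fits(dataset, bs):
    """Try one training epoch at batch size `bs`; return False on CUDA out of memory."""
    model = models.resnet50(num_classes=28).cuda()   # 28 labels in the protein competition
    loss_fn = nn.BCEWithLogitsLoss()                 # multi-label classification
    opt = optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(dataset, batch_size=bs, shuffle=True)
    try:
        for x, y in loader:
            x, y = x.cuda(), y.cuda()
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        return True
    except RuntimeError as e:
        if "out of memory" in str(e):
            torch.cuda.empty_cache()                 # release cached blocks before retrying
            return False
        raise

# Placeholder Dataset: 64 random 512x512 RGB images with random multi-hot labels.
train_ds = TensorDataset(torch.randn(64, 3, 512, 512),
                         torch.randint(0, 2, (64, 28)).float())

bs = 64
while bs >= 1 and not epoch_fits(train_ds, bs):
    bs //= 2                                         # halve the batch size and try again
print(f"largest batch size that fit: {bs}")
```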
I ended up with bs=4 and still out of memory at size 512 (Resnet50).
Increasing GPU demand gradually didn't work. I know that Resnet34 is lighter, but I'm looking for the most efficient (AND affordable) way to use Resnet50.
At the moment the V100 and P100 are too expensive to become my "just for fun" playground.
I'm considering 2x K80 vs 2x P4. I know that the K80 configuration has more memory, but on the other hand (according to the specs I've found on the internet) it is less efficient than the P4.
Did you (or anyone else) compare them?
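I haven't compared them myself, and raw specs don't always translate to training speed, but if you get access to both machines a rough throughput benchmark like this would give you Resnet50 images/sec per card, which is directly comparable (plain PyTorch; batch size and image size are just example values):

```python
import time
import torch
from torchvision import models

def bench_resnet50(bs=16, size=256, iters=20):
    """Rough forward+backward throughput for Resnet50, in images per second."""
    model = models.resnet50(num_classes=28).cuda()
    x = torch.randn(bs, 3, size, size, device="cuda")
    y = torch.randint(0, 2, (bs, 28), device="cuda").float()
    loss_fn = torch.nn.BCEWithLogitsLoss()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    def step():
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    for _ in range(3):              # warm-up iterations (CUDA init, cuDNN autotune)
        step()
    torch.cuda.synchronize()

    start = time.time()
    for _ in range(iters):
        step()
    torch.cuda.synchronize()        # wait for the GPU to finish before stopping the clock
    return bs * iters / (time.time() - start)

print(f"{bench_resnet50():.1f} images/sec")
```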