I’ve already read that blog post (just minutes ago). I’ll try those scripts this evening. Thanks for the help. Wish me luck…
You can sign up for GCP with a regular debit card; no credit card required. I got up to watch the lecture and was half-awake most of the time, eeh.
From what I understand, when I run learn.unfreeze() it unfreezes all the layers, but it doesn’t learn from scratch, it just adjusts the existing weights? Because when I unfreeze, train, save the weights, load them, and then run the sequence again, I get a different error rate.
this is what i understand too. that is, unfreezing doesn’t ‘reset’ the pretrained weights (for that there is the pretrained=False argument).
when you load the weights and run more epochs, does the error decrease? i think it should, unless it overfits?
Yes, it decreases. I think I will experiment more on a dataset that is processed faster (I did that on Quick, Draw!, using Radek’s tutorial; thanks for that, Radek).
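To make the point above concrete, here is a toy sketch in plain Python (not fastai internals, all names are made up): “unfreezing” only marks layers as trainable again; it never resets their values, so training continues from the pretrained weights rather than from scratch.

```python
# Toy illustration: unfreezing flips a flag; it does not touch the weights.
class Layer:
    def __init__(self, weight):
        self.weight = weight      # pretend this is a pretrained weight
        self.trainable = False    # frozen, like a pretrained backbone

    def step(self, grad, lr=0.1):
        if self.trainable:        # frozen layers are skipped by the optimizer
            self.weight -= lr * grad

layers = [Layer(1.0), Layer(2.0)]

for layer in layers:              # "unfreeze": flip the flag, keep the weights
    layer.trainable = True

print([layer.weight for layer in layers])  # still [1.0, 2.0] after unfreezing
layers[0].step(grad=0.5)                   # training now adjusts from 1.0, not from random
```

The different error rates between runs come from the randomness in training (shuffling, augmentation), not from the weights being reset.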
@Blanche @Michal_w @sayko
it was a good meetup! thanks for joining today! however, there was a huge echo on my side and i could barely hear you. i hope next week will be better!
maybe interesting for GCP users: an additional $500 credit: https://forums.fast.ai/t/platform-gcp/27375/140
if someone is training on Colab, there is an easy way to use your own dataset if it is on Google Drive.
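a sketch of what that looks like (`drive.mount` is the standard Colab API; the dataset folder name here is hypothetical, and the try/except guard just lets the snippet run outside Colab):

```python
from pathlib import Path

try:
    # Only available inside Colab; mounting makes your Drive visible
    # under /content/drive.
    from google.colab import drive
    drive.mount('/content/drive')
except ImportError:
    pass  # not running in Colab

# Hypothetical dataset folder on your Drive:
data_path = Path('/content/drive/My Drive/datasets/quickdraw')
print(data_path)
```

after mounting, you point the fastai data loaders at `data_path` like any local folder.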
@sayko @Blanche @Michal_w @tillia @piotr.czapla @radek @Gaurav85 @Emsi @wojtekcz
our hangout video call is today at 20:00. see you all there! join us using this link
Does anyone have a link to the video stream of lesson 3?
Michal
are you watching live?
@miwojc
https://forums.fast.ai/t/new-guide-for-easy-web-app-deployment/29616/2
I saw you asking about the situation where there are not only cats/dogs (for example) but also a not-cats-or-dogs “other” category.
Here is an example:
https://blogs.technet.microsoft.com/machinelearning/2018/05/01/how-to-develop-a-currency-detection-model-using-azure-machine-learning/
They added an additional category, ‘background’, and made it 5 times bigger than the other categories.
“The backgrounds should be as diverse a set as possible.”
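a rough sketch of that recipe in plain Python (the class names and counts are made up; only the 5x ratio comes from the article):

```python
# Per-class image counts for the "real" classes (made-up numbers).
class_counts = {'usd': 100, 'eur': 100, 'gbp': 100, 'jpy': 100}

# The article's trick: add a catch-all 'background' class roughly 5x the
# size of each other category, drawn from as diverse a set as possible.
class_counts['background'] = 5 * max(class_counts.values())

print(class_counts['background'])  # 500 background images for 100 per real class
```

the model then has an explicit class to dump everything that is neither cats nor dogs into, instead of being forced to pick one of the real classes.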
If someone wants to get into Reinforcement Learning (I do), OpenAI just launched https://spinningup.openai.com/ . It looks like a great resource.
jeremy just tweeted yesterday that someone beat the fastai team’s DAWNBench CIFAR10 training record by a factor of 2.5x; interesting read:
Thanks for sharing!!! Otherwise I would have missed this, and the article explains a lot!
@sayko not to discourage you but have you seen this: https://www.alexirpan.com/2018/02/14/rl-hard.html
I had this urge as well, but after reading this I decided to wait a year or two and watch the play between OpenAI and DeepMind from the bench :).
I’ll read it this weekend, thanks! I also have this on my reading list:
I really liked David Ha’s “paper”:
and Large-Scale Study of Curiosity-Driven Learning paper:
This is an interesting talk from Yann LeCun on the history and future of DL; he also talks about the role of RL in the mix. Spoiler alert:
He compared DL to a cake, where the base is self-supervised learning, the cream is supervised learning, and the cherry on top is RL.
For the RL part, fast-forward to 31:50.