I have a problem on my Google Cloud instance: I can't do a `git pull`, it says "permission denied", and I'm not sure why. I'm the owner of the project and it still doesn't work. How do I set up git correctly on a Google Cloud instance so that `git pull` works?
Thank you
@PoonamV Does that give me the freedom to first set the dimensions to 224 x 224, find the corresponding learning rate with `learn.lr_find()`, then unfreeze the layers, change the dimensions back to whatever I want, and train the unfrozen layers? I feel that when the dimensions are reduced, a lot of information goes missing that may be crucial for certain specific classifications…
`create_cnn` is now deprecated.

```python
learn = create_cnn(data, models.resnet34, metrics=error_rate)
```

should now be

```python
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
```
Hi. I’m a bit confused about my resnet34 training speeds. On a SageMaker p2.xlarge instance, running cell 13 of the notebook (`learn.fit_one_cycle(4)`), each epoch takes about 1 minute. But in the lesson 1 video it only takes 29 seconds per epoch. I believe the fastai docs suggest this same p2.xlarge instance type, and Jeremy mentions a cost of ~$1/hour, which is about right. If I upgrade to a p3.2xlarge instance I get around 32 seconds per epoch — still slower than the lesson 1 video, and at a much higher cost of ~$4/hour. Any ideas why I’m seeing this performance gap?
`data` is your `ImageDataBunch` object.
`data.valid_ds` refers to the validation dataset inside your `ImageDataBunch` object.
Similarly, `data.train_ds` refers to the training dataset.
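As a toy illustration (plain Python, not the real fastai class — the real `ImageDataBunch` also wraps DataLoaders, transforms, etc.), you can think of it as a container holding both splits:

```python
# Toy stand-in for fastai's ImageDataBunch: it just exposes the two
# splits the forum posts above refer to.
class FakeDataBunch:
    def __init__(self, train_items, valid_items):
        self.train_ds = train_items   # like data.train_ds
        self.valid_ds = valid_items   # like data.valid_ds

# 100 made-up items split 80/20 between training and validation.
items = ["img%d.jpg" % i for i in range(100)]
data = FakeDataBunch(train_items=items[:80], valid_items=items[80:])

print(len(data.train_ds))  # 80
print(len(data.valid_ds))  # 20
```

So `len(data.valid_ds)` is simply the number of items held out for validation.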
Hi All
Regarding the “Results” section of lesson 1 where we check the results: I was wondering about the check `len(data.valid_ds)==len(losses)==len(idxs)`. If it comes out true, does that mean all of our validation set items are in the top losses? And if so, doesn’t that mean our training wasn’t worthwhile?
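For context, my understanding is that `interp.top_losses()` called with no `k` returns the loss of *every* validation item, sorted in descending order — so the three lengths are equal by construction and say nothing about training quality. A plain-Python sketch of that behaviour (made-up loss values, not real fastai output):

```python
# Sketch of what top_losses() does with no k: sort ALL per-item
# validation losses in descending order and return (losses, idxs).
valid_losses = [0.02, 1.7, 0.4, 0.09, 2.3]   # made-up per-item losses

pairs = sorted(enumerate(valid_losses), key=lambda p: p[1], reverse=True)
idxs = [i for i, _ in pairs]      # original item indices, worst first
losses = [l for _, l in pairs]    # losses, largest first

# Equal lengths hold regardless of how well the model trained:
assert len(valid_losses) == len(losses) == len(idxs)
print(idxs)  # [4, 1, 2, 3, 0]
```

To look at only the worst few, you would pass `k` (e.g. `interp.top_losses(9)`), which truncates the sorted list.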