Thanks very much, I have understood it now!
You need to take the name from the competition URL, https://www.kaggle.com/c/dog-breed-identification: dog-breed-identification in this case.
Thank you very much for your help. Now I have downloaded the dataset and even used config, all because you corrected me.
Sir, will it be the same on AWS, or will it be different?
Could you please share your thoughts on it?
I never used Crestle, and I did not understand why kaggle-cli did not work on Python 2.7, but I think there is no reason why AWS should be different. Try it, and if it doesn't work, ask here on the forum.
How long does TTA on the test set normally take? For me it takes almost 40 minutes, which seems like too much, doesn't it?
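For reference, TTA does one pass over the original test images plus n_aug augmented passes, so the time scales roughly with n_aug + 1. A minimal sketch of the fastai 0.7 call from the course, with default arguments assumed:

log_preds, y = learn.TTA(n_aug=4, is_test=True)   # is_test=True -> predict on the test set; ~(n_aug + 1) full passes
probs = np.mean(np.exp(log_preds), 0)             # average the augmented predictions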
Hi @jeremy, are you going to commit the 'tmp_dog_breed.ipynb' notebook that you showed in class? There are some very helpful notes in there that I hope to revisit in the future.
I'm in 75th place! Woot! My current score is 0.24177.
The progress bar bug seems to hit me on my home machine whenever I run code that uses it. It works the first time, then fails on any following cells. The progress bar prints out a million statements and then the training code just stops running (GPU at 0% usage).
Has anyone else experienced this happening frequently?
Resetting the kernel often isn't much fun :-/
This happens often after I use lr_find, and it always happens after an exception. I don't see the problem outside of those two situations.
I managed to jump from 85% accuracy to 93.5% accuracy just by switching from resnet34 -> resnext101_64!
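For anyone wanting to try the same swap, here is a minimal sketch with the fastai 0.7 course API; PATH, sz and bs are placeholders, and resnext101_64 may need its separately downloaded pretrained weights in the fastai weights folder.

from fastai.transforms import *     # fastai 0.7 course library
from fastai.conv_learner import *
from fastai.dataset import *

PATH = 'data/dogbreed/'             # placeholder path
sz, bs = 224, 64                    # placeholder image size / batch size

arch = resnext101_64                # was: arch = resnet34
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv',
                                    test_name='test', suffix='.jpg', tfms=tfms, bs=bs)

learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 3)                  # train just the new head on precomputed activations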
You can jump to >95% accuracy by using nasnet.
How long did it take to train?
I don't remember exactly, but it was something like ~1-2 hours to precompute activations, and then with this approach another 30-60 minutes for 5-fold cross validation.
Ohh smart… so you precomputed activations for each of the folds in your 5-fold CV?
Good thing for dog breed we aren't fine-tuning… I bet it would take forever to run it with learn.unfreeze()
And you got this result without data aug?
I precomputed them once: for train (train minus 1 image), validation (1 image) and test. And then I just changed the indexes and never precomputed activations again.
In the Dogs vs. Cats competition, Bojan (the winner) said his best model took about a week to train.
UPD: @lgvaz it is a weighted average of resnext101_64, inception_v4 and nasnet. With each model I predicted with 5-fold CV and 5 different seeds (75 models in total).
OK, I think I got it now. And you were able to do this by following the steps you posted in the thread linked below, right?
So basically you precomputed activations on all of the data (except one image) and then just changed the indexes to split up the train/validation sets… right? Although it seems you had to create a custom function to be able to do this in fastai.
Almost exactly right, except for this:
I joined it back to train
data_x = np.vstack([val_act, act])
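Not their exact code, but a minimal sketch of the idea, assuming act / val_act are the precomputed activation arrays, y / val_y the matching labels, and a plain sklearn classifier standing in for the fastai head:

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

data_x = np.vstack([val_act, act])        # join the held-out image back onto train
data_y = np.concatenate([val_y, y])       # val_y / y are assumed label arrays

scores = []
for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(data_x):
    # only the index split changes per fold; activations are never recomputed
    clf = LogisticRegression(max_iter=1000)
    clf.fit(data_x[tr_idx], data_y[tr_idx])
    scores.append(log_loss(data_y[va_idx], clf.predict_proba(data_x[va_idx])))

print(np.mean(scores))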
Oh I missed that haha, amazing, good work!!
That's really a lot of models… haha
I saw you talking in another topic about ensembling methods. I tried to use logistic regression on top of my classifiers but it went really, really badly… Anyway, are you calculating the weights based on the CV loss?
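One common way to pick such weights is to minimize log loss on the out-of-fold predictions. A sketch, assuming preds is a list of per-model (n_samples, n_classes) out-of-fold probability arrays and y_true the true labels:

import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import log_loss

def blended_loss(w, preds, y_true):
    w = np.abs(w) / np.abs(w).sum()                 # keep weights positive and summing to 1
    blend = sum(wi * p for wi, p in zip(w, preds))  # weighted average of the models
    return log_loss(y_true, blend)

w0 = np.ones(len(preds)) / len(preds)               # start from a plain average
res = minimize(blended_loss, w0, args=(preds, y_true), method='Nelder-Mead')
weights = np.abs(res.x) / np.abs(res.x).sum()
print(weights, blended_loss(res.x, preds, y_true))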