Thanks very much, I have understood it now!
You need to take the name from the competition URL (https://www.kaggle.com/c/dog-breed-identification); in this case it's dog-breed-identification.
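As a minimal sketch of the step above (the helper name is mine, not from the thread), you can pull the competition name out of the URL programmatically: it's just the last path segment.

```python
from urllib.parse import urlparse

def competition_slug(url: str) -> str:
    """Return the competition name (slug) from a Kaggle competition URL."""
    # The path looks like /c/dog-breed-identification; the slug is the last segment.
    return urlparse(url).path.rstrip("/").split("/")[-1]

slug = competition_slug("https://www.kaggle.com/c/dog-breed-identification")
print(slug)  # dog-breed-identification
```

That slug is what the download tool expects as the competition name.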
Thank you very much for your help! Now I have downloaded the dataset and even used the config, all because you corrected me.
Sir, will it be the same on AWS, or will it be different?
Could you please share your thoughts on it?
I've never used Crestle, and I didn't understand why kaggle-cli didn't work on Python 2.7, but I think there's no reason why AWS should be different. Try it, and if it doesn't work, ask here on the forum.
How long does TTA on the test set normally take? For me it takes almost 40 minutes, which seems like too much, doesn't it?
Hi @jeremy, are you going to commit the “tmp_dog_breed.ipynb” notebook that you showed in the class? There are some very helpful notes in there that I hope to revisit in the future.
I’m in 75th place! Woot! My current score is 0.24177.
The progress bar bug seems to hit me on my home machine whenever I run code that uses it. It works the first time, then fails on any following cells. The progress prints out a million statements and then the training code just stops running (GPU at 0% usage).
Has anyone else experienced this happening frequently?
Resetting the kernel often isn’t much fun :-/
This often happens after I use lr_find, and it always happens after an exception. I don't see the problem outside of those two situations.
I managed to jump from 85% accuracy to 93.5% accuracy just by switching from resnet34 to resnext101_64!
You can jump to >95% accuracy by using nasnet.
How long did it take to train?
I don't remember exactly, but something like ~1-2 hours to precompute the activations, and then with this approach another 30-60 minutes for 5-fold cross-validation.
Ohh, smart… so you precomputed activations for each of the folds in your 5-fold CV?
Good thing for dog breed we aren't fine-tuning… I bet it would take forever to run it with that.
And you got this result without data aug?
I precomputed them once: for train (all of train minus 1 image), validation (1 image), and test. Then I just changed the indexes and never precomputed activations again.
In the Dogs vs. Cats competition, Bojan (the winner) said his best model took ~1 week to train.
UPDATE: @lgvaz it's a weighted average of resnext101_64, inception_v4, and nasnet. With each model I predicted with 5-fold CV using 5 different seeds (75 models in total).
OK, I think I get it now. And you were able to do this by following the steps you posted in the thread linked below, right?
So basically you precomputed activations on all of the data (except one image) and then just changed the indexes to split up the train/validation sets… right? Although it seems you had to create a custom function to be able to do this in fastai.
Almost exactly right, except for this:
I joined it back to train
data_x = np.vstack([val_act, act])
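The index trick above can be sketched as follows (array names, sizes, and the fold split are my assumptions, not the thread author's code): run the expensive CNN forward pass once, stack everything into one activation matrix, and then each CV fold is just index slicing.

```python
import numpy as np

# Hypothetical shapes: 100 precomputed activation vectors of size 512.
n, k = 100, 5
acts = np.random.default_rng(1).normal(size=(n, 512))
labels = np.random.default_rng(2).integers(0, 120, size=n)

# Shuffle once, then split the indexes into k folds. Changing which fold
# is held out never requires recomputing activations.
idx = np.random.default_rng(3).permutation(n)
folds = np.array_split(idx, k)

for val_idx in folds:
    train_idx = np.setdiff1d(idx, val_idx)
    x_tr, y_tr = acts[train_idx], labels[train_idx]
    x_va, y_va = acts[val_idx], labels[val_idx]
    # ...train a cheap head (e.g. logistic regression) on x_tr here;
    # the expensive CNN forward pass is never repeated.
```

This is why the per-fold cost drops to minutes once the activations exist.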
Oh I missed that haha, amazing, good work!!
That’s really a lot of models… haha
I saw you talking about ensembling methods in another topic. I tried to use logistic regression on top of my classifiers, but it went really, really badly… Anyway, are you calculating the weights based on the CV loss?
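One common way to pick blend weights from CV loss (a sketch of the general technique, not necessarily what was done here; the data below is synthetic) is a coarse grid search over the weight that minimizes log loss on out-of-fold predictions:

```python
import numpy as np

def log_loss(probs, y):
    # Mean negative log-likelihood of the true class, clipped for stability.
    eps = 1e-15
    return -np.mean(np.log(np.clip(probs[np.arange(len(y)), y], eps, 1.0)))

# Hypothetical out-of-fold predictions from two models: 200 samples, 3 classes.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=200)
p1 = rng.dirichlet(np.ones(3), size=200)
p2 = rng.dirichlet(np.ones(3), size=200)

# Score each blend w * p1 + (1 - w) * p2 on the held-out labels.
grid = np.linspace(0.0, 1.0, 101)
losses = [log_loss(w * p1 + (1 - w) * p2, y) for w in grid]
best_w = grid[int(np.argmin(losses))]
```

Because the grid includes w = 0 and w = 1, the chosen blend can never score worse on this data than either model alone.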