Dog Breed Identification challenge


(Chris Palmer) #264

Thanks very much, I understand it now! :joy:


(sergii makarevych) #265

You need to take the name from the competition URL https://www.kaggle.com/c/dog-breed-identification: dog-breed-identification in this case.


(naveen manwani) #266

Thank you very much for your help. I have now downloaded the dataset and even used config, all because you corrected me.
Sir, will it be the same on AWS, or will it be different? Could you please share your thoughts on it?


(sergii makarevych) #267

I have never used Crestle, and I did not understand why kaggle-cli did not work on Python 2.7, but I think there is no reason why AWS should be different. Try it, and if it doesn't work, ask here on the forum.


(Stathis Fotiadis) #268

How long does TTA on the test set normally take? For me it takes almost 40 minutes, which seems too long, doesn't it?
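
For reference, this is roughly the standard way TTA gets run on the test set in fastai 0.7 (just a sketch, assuming an already-trained learn object; the extra time comes from running several augmented forward passes per image):

    import numpy as np

    # test-time augmentation: predicts on the original image plus several
    # augmented copies, so it costs several forward passes over the test set
    log_preds, _ = learn.TTA(is_test=True)

    # average the augmented predictions into one probability per image
    probs = np.mean(np.exp(log_preds), 0)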


#269

Hi @jeremy, are you going to commit the “tmp_dog_breed.ipynb” notebook that you showed in class? There are some very helpful notes in there that I hope to revisit in the future.


(Brian Muhia) #270

I’m in 75th place! Woot! My current score is 0.24177.


(Rob H) #271

The progress bar bug seems to hit me on my home machine whenever I run code that uses it. It works the first time, then fails on any following cells. The progress bar prints out a million statements and then the training code just stops running (GPU at 0% usage).

Has anyone else experienced this happening frequently?

Resetting the kernel often isn’t much fun :-/


(Jeremy Howard) #272

This often happens after I use lr_find, and it always happens after an exception. I don't see the problem outside of those two situations.


(Traun Leyden) #273

I managed to jump from 85% accuracy to 93.5% accuracy just by switching from resnet34 -> resnext101_64!
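
For anyone who wants to try the same swap, a rough sketch with fastai 0.7 (the path, image size, batch size, learning rate and epochs below are placeholders, not my exact settings):

    from fastai.conv_learner import *

    PATH = 'data/dogbreed/'   # placeholder path to the competition data
    sz, bs = 224, 32          # placeholder image size and batch size

    arch = resnext101_64      # previously: arch = resnet34
    tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
    data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv',
                                        bs=bs, tfms=tfms, suffix='.jpg',
                                        test_name='test')
    learn = ConvLearner.pretrained(arch, data, precompute=True)
    learn.fit(1e-2, 3)        # placeholder learning rate and number of epochs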


(sergii makarevych) #274

You can jump to >95% accuracy by using nasnet.


(James Requa) #275

How long did it take to train?


(sergii makarevych) #276

I don't remember exactly, but something like ~1-2 hours to precompute activations, and then with this approach another 30-60 minutes for 5-fold cross-validation.


(James Requa) #277

Ohh smart… so you precomputed activations for each of the folds in your 5-fold CV?

Good thing for dog breed we aren't fine-tuning… I bet it would take forever to run with learn.unfreeze().


(Lucas Goulart Vazquez) #278

And you got this result without data aug?


(sergii makarevych) #279

I precomputed them once: for train (train minus 1 image), validation (1 image) and test. And then I just changed indexes and never precomputed activations again.

In the Dogs vs. Cats competition Bojan (the winner) said his best model was training for ~1 week.

UPD: @lgvaz it is a weighted average of resnext101_64, inception_v4 and nasnet. With each model I predicted with 5-fold CV using 5 different seeds (75 models in total).
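
Roughly, the final blending step looks like this (the arrays and the weights below are placeholders, not my actual values; each array holds the test-set probabilities for one architecture, already averaged over its 5 folds x 5 seeds):

    import numpy as np

    # per-architecture test probabilities, each of shape (n_images, n_classes)
    preds = {
        'resnext101_64': p_resnext,    # placeholder arrays
        'inception_v4':  p_inception,
        'nasnet':        p_nasnet,
    }

    # placeholder weights; they should sum to 1
    weights = {'resnext101_64': 0.3, 'inception_v4': 0.3, 'nasnet': 0.4}

    ensemble = sum(w * preds[name] for name, w in weights.items())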


(James Requa) #280

OK, I think I've got it now. You were able to do this by following the steps you posted in the thread linked below, right?

So basically you precomputed activations on all of the data (except one image) and then just changed the indexes to split up the train/validation sets… right? Although it seems you had to create a custom function to be able to do this in fastai :slight_smile:


(sergii makarevych) #281

Almost exactly right, except for this:

I joined it back to train :yum:

data_x = np.vstack([val_act, act])
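
With everything stacked into one array, each fold is just a choice of indexes into data_x, so the conv net never has to be touched again. A rough sketch of the fold loop that can sit on top of that (the simple logistic-regression head and the placeholder label names are illustrative, not the exact custom function from the other thread):

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss

    data_x = np.vstack([val_act, act])        # all precomputed activations
    data_y = np.concatenate([val_y, trn_y])   # matching labels (placeholder names)

    oof = np.zeros((len(data_x), len(np.unique(data_y))))  # out-of-fold predictions
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for trn_idx, val_idx in skf.split(data_x, data_y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(data_x[trn_idx], data_y[trn_idx])
        oof[val_idx] = clf.predict_proba(data_x[val_idx])

    print('CV log loss:', log_loss(data_y, oof))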


(James Requa) #282

Oh I missed that haha, amazing, good work!!


(Lucas Goulart Vazquez) #283

That’s really a lot of models… haha

I saw you talking in another topic about ensembling methods. I tried to use logistic regression on top of my classifiers but it went really, really badly… Anyway, are you calculating the weights based on the CV loss?
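
Something like this is what I had in mind for picking the weights from the CV loss (just a sketch; oof_preds is a list of each model's out-of-fold probabilities and y_true the matching labels):

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.metrics import log_loss

    # oof_preds: list of arrays, one per model, each (n_images, n_classes)
    # y_true: the true class labels for those images

    def blended_loss(w):
        w = np.clip(w, 0, None)
        w = w / w.sum()                               # non-negative weights summing to 1
        blend = sum(wi * p for wi, p in zip(w, oof_preds))
        return log_loss(y_true, blend)

    w0 = np.ones(len(oof_preds)) / len(oof_preds)     # start from a plain average
    res = minimize(blended_loss, w0, method='Nelder-Mead')
    best_w = np.clip(res.x, 0, None)
    best_w /= best_w.sum()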