Kaggle Iceberg Challenge Starter Kit (LB 0.33 Baseline)


(Apil Tamang) #3

Thanks @timlee
I kinda got left behind in the Dog Breed Challenge. Definitely will get on this.

@jeremy @timlee
Would you mind if I gave it a stab at getting the SENet piece working? Also, what exactly did you mean by getting it (SENet) to work? Did you mean simply integrating it into the fastai library? Having made a modest attempt to do the same for VGG-16, I just might be able to do that. Or did you mean literally training the SENet on ImageNet and publishing the weights, so we have a pretrained model to work with?

Also, my GPU’s been sitting idle for some days and is surely eager to crunch some data (think training on ImageNet :slight_smile: )


(Kevin Bird) #4

No fastai student left behind. If you have any questions, PM me. I don’t have the best result, but I will do what I can to bring you up to speed.


(Apil Tamang) #5

@KevinB
Were you referring to me, by any chance…? Thanks in advance :slight_smile:

What kind of info did you want to share in general? Thanks.


(Kevin Bird) #6

Where are you having issues? This link is a good place to start: https://www.kaggle.com/orangutan/keras-vgg19-starter

Some people have had good luck trying different models. You can find the different models available near the top of the conv_learner.py file in the fastai directory.
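For example, here’s a minimal sketch of swapping architectures with the 0.7-era fastai API — the PATH and sz values are just assumptions for the iceberg data, not anything from the starter kit:

```python
from fastai.conv_learner import *

PATH = 'data/iceberg/'  # hypothetical data directory
sz = 75                 # iceberg patches are 75x75

arch = resnext50        # any model listed near the top of conv_learner.py
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 3)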


(Apil Tamang) #7

@KevinB
Sorry… I didn’t get my first post quite right. I was able to use VGG-16 just fine; I think Jeremy actually pushed in a feature update himself.
But I will let you know what happens with the SENet architecture.


(Jeremy Howard (Admin)) #8

Sounds great! Yup, get it integrated and try to make it perform well on CIFAR-10, then use that pretrained model on icebergs!


(Kerem Turgutlu) #9

Hi, do we have a pretrained network on CIFAR-10 integrated into fastai? Or I could run https://github.com/kuangliu/pytorch-cifar on my server, save the model, and then maybe it could be added.

I’ve been reading the PyTorch forums, and people seem to have issues with saving and loading models. Shouldn’t it be as easy as saving the trained model parameters/weights into a pickle-like file, as mentioned in the PyTorch docs, then loading and using it in any kind of environment, such as fastai?
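For reference, this is the pattern the PyTorch docs recommend — a minimal sketch, where build_model() is a hypothetical constructor for the same architecture:

```python
import torch

# Save only the learned parameters, not the whole pickled model object.
torch.save(model.state_dict(), 'cifar10_model.pth')

# Later, in any environment: rebuild the architecture, then load the weights.
model = build_model()  # hypothetical: must match the saved architecture
model.load_state_dict(torch.load('cifar10_model.pth', map_location='cpu'))
model.eval()           # switch off dropout/batchnorm updates for inference
```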

And why don’t people just share those serialized files for the best models trained on different datasets?

Thanks


(Jeremy Howard (Admin)) #10

We don’t have a pretrained CIFAR-10 model, but we’d love one - or many! So if you do train one or more of those models I’d be happy to host the weights on our web site.

“Why don’t people share the serialized files?” I have no idea - it’s a huge opportunity that no-one is taking advantage of, other than a few ImageNet files. There should be pretrained nets available for satellite, medical (CT, MRI, etc.), microscopic (cell) images, etc., but there aren’t any…

I haven’t heard of problems with saving and loading models on the whole, although I know that if you train on multiple GPUs you can’t load on a single CPU, and vice versa.
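The usual workaround I’ve seen for the multi-GPU case is stripping the module. prefix that nn.DataParallel adds to every key in the saved state dict — a minimal sketch, with a hypothetical checkpoint name:

```python
import torch

# nn.DataParallel saves keys as 'module.conv1.weight' etc.; strip the
# prefix so the state dict matches a plain single-device model.
state = torch.load('multi_gpu_model.pth', map_location='cpu')
state = {(k[len('module.'):] if k.startswith('module.') else k): v
         for k, v in state.items()}
model.load_state_dict(state)
```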


(Jeremy Howard (Admin)) #11

To help this along a bit, I just put CIFAR-10 in a fastai/keras compatible directory structure here: http://files.fast.ai/data/cifar10.tgz


(Gerardo Garcia) #12

@jeremy I modified the dogs & cats notebook to accomplish this, along with the “Starter Kit”. Thanks @timlee
I think that TTA is essential in this challenge.
But I think we need to take into consideration rotation of the images, not just the 90- or 180-degree rotations that TTA does with the standard 4 options.

Do you guys have a method to do that?


(Rikiya Yamashita) #13

This is the very thing I’ve wanted to build for a while.


(Jeremy Howard (Admin)) #14

We do up to 10-degree rotations as well. Although apparently the iceberg dataset doesn’t play nicely with standard augmentations (according to folks on the Kaggle forum).
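In fastai that’s just part of the augmentation list — a minimal sketch with the 0.7-era transforms API (the size and zoom values are placeholders):

```python
from fastai.conv_learner import *

arch = resnet34
sz = 75  # placeholder image size

# RandomRotate(10) samples a rotation uniformly from [-10, +10] degrees,
# so augmentation isn't limited to the fixed 90/180-degree options.
aug_tfms = [RandomRotate(10), RandomFlip()]
tfms = tfms_from_model(arch, sz, aug_tfms=aug_tfms, max_zoom=1.05)
```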


(ecdrid) #15

Got a 0.221 log loss using resnext101_64.

What I find is that the overall loss keeps decreasing (0.5 → 0.28) as I increase the parameters of

learner.fit()

(image: loss graph)


(ecdrid) #17

Is there a fastai equivalent of Keras callbacks, or a way to stop training automatically?


(James Requa) #18

Maybe you should keep training!
Also try using cycle_mult=2 or increasing cycle_len, because right now it looks like you are underfitting a bit.
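Something like this (the learning rate is just a placeholder):

```python
# cycle_len sets epochs per SGDR cycle; cycle_mult=2 doubles each
# successive cycle, so 3 cycles run for 1, 2, and 4 epochs.
learn.fit(1e-2, 3, cycle_len=1, cycle_mult=2)
```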


(Kevin Bird) #19

It actually wasn’t meant to be a link, but I converted it to point to the github file. Hope that helps!


(ecdrid) #20

I tried to increase the 4 to 6 but it didn’t help…
The model is highly unpredictable…
The loss will increase if we do more…
At least in my case…


(Jeremy Howard (Admin)) #21

Yes. See sgdr.py for an example.
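A rough sketch of an early-stopping callback in the style of sgdr.py’s Callback/LossRecorder classes — this assumes the training loop stops when on_epoch_end returns True, and that the first entry in metrics is the validation loss:

```python
from fastai.sgdr import Callback

class EarlyStopping(Callback):
    def __init__(self, patience=3):
        self.patience = patience
    def on_train_begin(self):
        self.best, self.wait = float('inf'), 0
    def on_epoch_end(self, metrics):
        val_loss = metrics[0]  # assumed: validation loss comes first
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0
            return False
        self.wait += 1
        return self.wait >= self.patience  # True asks the loop to stop

# hypothetical usage:
# learn.fit(1e-2, 10, cycle_len=1, callbacks=[EarlyStopping(patience=3)])
```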


(ecdrid) #23

Is it LossRecorder.on_batch_end()?
Novice here…


(ecdrid) #24

learn.TTA(is_test=True)?
How long should this line take on average? (I didn’t use %time.)
In my case, I’m not sure whether it’s running or stuck.
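For reference, this is the pattern I’m timing — TTA runs n_aug + 1 full passes over the test set, so it can legitimately take several times longer than a plain prediction:

```python
import time
import numpy as np

start = time.time()
log_preds, y = learn.TTA(is_test=True)  # default n_aug=4 -> 5 passes
probs = np.mean(np.exp(log_preds), 0)   # average the augmented predictions
print(f'TTA took {time.time() - start:.1f}s')
```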