Kaggle Iceberg Challenge Starter Kit (LB 0.33 Baseline)

@jeremy
I was looking at this code
https://www.kaggle.com/solomonk/statoil-csv-pytorch-senet-ensemble-lb-0-1582
and it looks like the author is creating an “ensemble”.

Is he training multiple models and then combining them based on their results?
It looks like the ensemble runs several models and then selects the best answers.
Could you please elaborate on this methodology?

@gerardo, start out by searching this forum for ‘ensemble’, and you’ll find lots of info and links! Let us know if you have any follow up questions after taking a look.
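Not speaking for that kernel's author, but the most common pattern you'll find when searching is simple prediction averaging: each model writes its own submission and the ensemble averages the predicted probabilities, rather than "selecting the best answers". A minimal sketch, assuming each model has already produced a Kaggle-style CSV with `id` and `is_iceberg` columns (the file names below are made-up placeholders):

```python
# Minimal averaging-ensemble sketch. Each input CSV is assumed to be a normal
# Kaggle submission with "id" and "is_iceberg" columns; the file names are
# placeholders for whatever your individual models produced.
import pandas as pd

submission_files = ['resnet34_preds.csv', 'resnet50_preds.csv', 'vgg16_preds.csv']

# Read each model's predicted probabilities, indexed by image id.
preds = [pd.read_csv(f).set_index('id')['is_iceberg'] for f in submission_files]

# Stack them side by side and take the (equally weighted) mean per image.
ensemble = pd.concat(preds, axis=1).mean(axis=1)

ensemble.rename('is_iceberg').reset_index().to_csv('ensemble_submission.csv', index=False)
```

Weighted averages, rank averaging, and stacking with a meta-model are the usual refinements on top of this, and the forum threads on ensembling go into those in more detail.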

https://www.kaggle.com/timolee/fastai-kaggle-starter-kit/
and
https://www.kaggle.com/grroverpr/cnn-using-fastai-lb-0-209422

The links are not working…

I am just catching up on the iceberg challenge.

If you search for the titles on Kaggle you can find them; the LB scores have changed since then.

Should we resize our images to 224x224 when the original size is only 75x75? I want to try VGG in this competition (see the resizing sketch below).

I have already tried ResNet34, ResNet50, ResNet101, and also ResNeXt, created an average of all of them, and couldn't improve beyond 0.2 log loss…
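For what it's worth, in the old fastai (0.7) API these kernels use, the transform pipeline scales inputs to whatever `sz` you pass, so 75x75 arrays can be upsampled to 224x224 for an ImageNet-pretrained VGG. A minimal sketch with parameter names written from memory; the arrays and path are random placeholders for the real band_1/band_2 data:

```python
# Hedged sketch (fastai 0.7 API, names from memory): upsample the 75x75 iceberg
# images to 224x224 inside the transform pipeline so an ImageNet-pretrained VGG
# can be used. The arrays below are random stand-ins for the real data.
import numpy as np
from fastai.conv_learner import *

X_train = np.random.rand(64, 75, 75, 3).astype(np.float32)   # band_1/band_2 stacks go here
y_train = np.random.randint(0, 2, 64)
X_val = np.random.rand(16, 75, 75, 3).astype(np.float32)
y_val = np.random.randint(0, 2, 16)

sz = 224        # target side length; the tfms scale 75x75 up to this
arch = vgg16    # torchvision VGG16, re-exported by fastai 0.7

tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.05)
data = ImageClassifierData.from_arrays('data/iceberg/', trn=(X_train, y_train),
                                       val=(X_val, y_val), tfms=tfms, bs=16)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 3)
```

Upsampling adds no new information, so the main benefit is compatibility with the ImageNet-pretrained weights, at the cost of extra compute per image.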

We can try… for me it takes way too much time, more than 8 hours to complete when using ResNeXt64_2…

Also, I don't know why inception_4 (~60% accuracy) is performing so badly on this one, whereas my friends have got a log loss of around 0.16…

Is it something related to removing the layers? Can someone explain this anomaly…?

While trying to fit inception_4 on this dataset, after enabling unfreeze and bn_freeze… (tfms, max_zoom, and centre crop were all used), I also added a few FC layers with dropout…

Specifically, I added 4 FC layers:

  • 1024, 256, 64, 16

Some details after doing so:

  • The validation loss seems to be of the order of 1e6 (and increasing).
  • The training loss is around 0.4.
  • Accuracy is around 0.7…

How do I explain this??
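I can't diagnose the exploding validation loss from the description alone, but for comparing notes, here is roughly how the extra FC layers and the freezing flags are wired up in fastai 0.7 (parameter names from memory; the arrays are random placeholders as in the earlier sketch):

```python
# Hedged sketch (fastai 0.7 API, names from memory): pretrained inception_4 with
# extra fully connected layers (1024 -> 256 -> 64 -> 16) plus dropout in the head,
# then unfreezing the backbone while keeping batchnorm statistics frozen.
import numpy as np
from fastai.conv_learner import *

# Random placeholders for the real band_1/band_2 arrays.
X_train = np.random.rand(64, 75, 75, 3).astype(np.float32)
y_train = np.random.randint(0, 2, 64)
X_val = np.random.rand(16, 75, 75, 3).astype(np.float32)
y_val = np.random.randint(0, 2, 16)

arch = inception_4   # fastai 0.7's InceptionV4 wrapper
sz = 299             # InceptionV4's native input size

tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_arrays('data/iceberg/', trn=(X_train, y_train),
                                       val=(X_val, y_val), tfms=tfms, bs=16)

# xtra_fc inserts the hidden FC layers before the output layer; ps sets the
# dropout probability used in the custom head.
learn = ConvLearner.pretrained(arch, data, xtra_fc=[1024, 256, 64, 16], ps=0.5)

learn.unfreeze()        # train the convolutional backbone as well
learn.bn_freeze(True)   # keep the pretrained batchnorm running stats fixed
learn.fit(np.array([1e-4, 1e-3, 1e-2]), 3, cycle_len=1)   # differential learning rates
```

If the validation loss really is of the order of 1e6 while the training loss sits near 0.4, it may be worth double-checking the validation split and how the predicted probabilities feed into the loss before blaming the architecture itself.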

I deleted that .ipynb…
Will try to recreate the anomaly…
@ramesh

Isn't the assumption correct that the bigger the model, the better the results?

Also, can we have multiple tfms applied if we pass the transformations as a list, and will it automatically do random transformations on the images?
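On the tfms question, my understanding of the fastai 0.7 API (names from memory) is that `aug_tfms` takes exactly such a list, and each transform is re-drawn with random parameters every time an image is loaded:

```python
# Hedged sketch (fastai 0.7, names from memory): passing several augmentations as a
# list. Each one is applied with fresh random parameters whenever an image is read,
# so the model sees a slightly different variant on every pass.
from fastai.conv_learner import *

aug_tfms = [RandomRotate(10),             # rotate by up to +/- 10 degrees
            RandomFlip(),                 # random horizontal flip
            RandomLighting(0.05, 0.05)]   # small random brightness/contrast jitter

tfms = tfms_from_model(resnet34, 75, aug_tfms=aug_tfms, max_zoom=1.1)
```

One caveat, if I remember right: with `precompute=True` the cached activations skip these augmentations, so set `learn.precompute = False` before expecting them to matter.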

This is not always true. Smaller networks train faster, so if you run for 10 epochs, a smaller network might get to a lower loss value than a larger network, whereas if you run for 100 epochs, the larger network, which has a lot more parameters, should get to a lower training loss.

It’s hard to tell without looking at the code. If you can put your code in gist.github.com, one of us can take a look and suggest next steps.

Will try to replicate again

I followed your kernel and am getting these errors:

  1. Unable to plot after doing a learn.fit.
  2. During the preds step I get this error. I checked ice_preds as well but couldn't work out what went wrong.

Could you please help?

Actually it's very old now…
I haven't updated it to the latest fast.ai.
Regarding the plot, you can clip and replot (the answer is in the forum somewhere, I don't remember where).
Sorry…
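For later readers: "clip and replot" presumably means clipping extreme values so a single spike doesn't flatten the rest of the curve, and the same trick applies to keeping predicted probabilities away from 0 and 1 before a log-loss calculation. A generic sketch with made-up numbers:

```python
# Generic "clip and replot" sketch with hypothetical values.
import numpy as np
import matplotlib.pyplot as plt

# Recorded losses with one huge spike that would otherwise dwarf everything else.
losses = np.array([0.9, 0.7, 0.6, 55.0, 0.5, 0.45, 0.4])
plt.plot(np.clip(losses, 0, 2))   # cap the outlier so the curve stays readable
plt.xlabel('iteration')
plt.ylabel('loss (clipped at 2)')
plt.show()

# Same idea for predictions: keep probabilities away from exact 0/1 so the
# log-loss metric never hits log(0).
preds = np.array([0.0, 0.37, 0.92, 1.0])   # hypothetical model outputs
preds = np.clip(preds, 1e-6, 1 - 1e-6)
```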