Planet Classification Challenge

That’s right, but in this case our images (satellite) are very different from what ImageNet is trained on (standard photos), so I don’t expect to need to keep all the details of the pretrained weights.

Yes, it depends on what it was originally trained on. We don’t have to use the same size it was trained on, but sometimes you get better results if you do.

Yes, changing size is designed to avoid overfitting.

set_data doesn’t change the model at all. It just gives it new data to train with.

Once the input size is reasonably big, preprocessing is no longer the bottleneck, so resizing to 128x128 or larger doesn’t really help.
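
For reference, here is a rough sketch of the resize-then-retrain pattern being discussed, assuming the fastai 0.7-style API from the course notebooks; get_data(), PATH, label_csv, val_idxs and the f2 metric are placeholders for whatever your own notebook defines, not the exact lesson code.

from fastai.conv_learner import *

# get_data(), PATH, label_csv, val_idxs and f2 are placeholders for your own notebook's definitions
def get_data(sz):
    tfms = tfms_from_model(resnet34, sz, aug_tfms=transforms_top_down, max_zoom=1.05)
    return ImageClassifierData.from_csv(PATH, 'train-jpg', label_csv, tfms=tfms,
                                        suffix='.jpg', val_idxs=val_idxs, test_name='test-jpg')

learn = ConvLearner.pretrained(resnet34, get_data(64), metrics=[f2])
learn.fit(0.2, 3)                  # train the new head on small images first

learn.set_data(get_data(128))      # same model, new (larger) data
learn.freeze()                     # retune the fully connected layers first...
learn.fit(0.2, 3)
learn.unfreeze()                   # ...then fine-tune the convolutional layers
learn.fit(np.array([0.2/9, 0.2/3, 0.2]), 3, cycle_len=1, cycle_mult=2)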

Thanks @jeremy. Taking your responses together with the fact that we freeze the convolutional layers when we change the image sizes with set_data, train the fully connected layers, and then unfreeze and train again, suggests that we are still worried about the impact of different image sizes on the weights in the convolutional layers, even though these images are significantly different from ImageNet.

Is the thinking behind the freezing and unfreezing when we change sizes that the weights in the fully connected layers, although they aren’t random anymore, really should be tuned to the new image sizes before we unfreeze and train the convolutional layers on them? Is this something you learned from trial and error, or is there a theory you can articulate behind it?

I get that you shouldn’t unfreeze the convolutional layers when the fully-connected layers are initially random, but I guess I’m having trouble getting comfortable extending that insight to when we’ve already trained the fully connected layers, albeit on differently sized images.

Thanks for this. I am trying to make my first submission for this competition but am a bit confused about learn.TTA(). Its docstring indicates that the outputs are log_preds, but you are treating them as probs, since later you compare them with a threshold. And yet it seems you are getting great results. Why is that?

@layla.tadjpour
There were a few changes made to learn.TTA() by Jeremy a few days back.
Have a look at the source; a search on the forum might also help.

This link might help…

Thanks. I looked at the link you provided but it does not seem to be related to what I was asking! At any rate, do you remember if the output of learn.TTA() was log_preds or probs when you posted the above link 16 days ago?

TTA now returns the predictions for each augmentation separately (one set per n_aug), so you need to average them yourself:

log_preds, y = learn.TTA()
preds = np.mean(np.exp(log_preds), 0)   # average the per-augmentation predictions

This should work…
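
If you want to sanity-check the averaged predictions, here is a minimal sketch of scoring them on the validation set, assuming preds and y come from the snippet above and using sklearn’s fbeta_score rather than the notebook’s own f2 helper; the 0.2 threshold is the one used later in this thread.

from sklearn.metrics import fbeta_score

# preds: averaged TTA probabilities; y: multi-hot validation labels
# Threshold the probabilities and compute the F2 score across samples
print(fbeta_score(y, preds > 0.2, beta=2, average='samples'))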

I’m getting predictions with learn.TTA(), but I’m getting strange results. I take the mean just like in lesson 2, but I get a very bad f2 metric on the validation set unless I use a threshold of 1.21. Does this mean that instead of the logs of the predictions, TTA returns log(p) + 1?

Since we can’t see the code where you use TTA, it’s hard to know what’s happening here…

Right, sorry. This is how I get predictions:

Looks fine - something else is going on in your model… I don’t think it’s specific to TTA.

I see. Thank you so much. I was wondering why I was getting a three-dimensional output, but did not realize the code had been changed.

I just asked the same question here: bn_unfreeze(True) @ecdrid - Did you figure out how they work differently?

Have you resolved this issue? I was getting similar results (0.48 accuracy). It seems that the newer version of fastai’s learn.TTA() does output probabilities, not log_probs. So instead of taking the exp and then the mean, I just took the mean (raw_preds = learn.TTA(); preds = np.mean(raw_preds, 0)) and then the accuracy improved to 0.92.
@jeremy can you comment on this?

Also, I have 16 GB of RAM, but whenever I try to run learn.TTA(is_test=True), my kernel dies. I have tried adding up to 48 GB of swap on my NVMe SSD drive, but it doesn’t help. Any suggestions?

In this particular competition I had reduced my image size to 300x300 and then it didn’t run out of memory…

I followed Jeremy’s notebook, so the image size was 256. Isn’t it the case that 300x300 takes up more memory?

I don’t believe I’ve done anything like that, but check the source to be sure.

The original images are 256x256, so there’s no point using anything bigger.
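
If memory is the bottleneck, one thing to try, purely a sketch under the same fastai 0.7 assumptions as earlier in this thread (get_data() is again a placeholder, and this uses set_data rather than resizing files on disk), is to point the learner at smaller images before running TTA on the test set:

import numpy as np

learn.set_data(get_data(128))                    # smaller images mean smaller batches and activations in memory
log_preds_test, _ = learn.TTA(is_test=True)      # per-augmentation predictions for the test set
test_preds = np.mean(np.exp(log_preds_test), 0)  # average over the augmentations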

I’m getting this error when creating the Kaggle submission file:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-48-0d8cb23506ec> in <module>()
      3 
      4 for i in range(len(test_fnames)):
----> 5     test_fnames[i] = test_fnames[i].split("/")[1].split(".")[0]
      6 
      7 classes = np.array(data.classes, dtype=str)

IndexError: list index out of range

Here is the list of classes:


Out[53]:
['agriculture',
 'artisinal_mine',
 'bare_ground',
 'blooming',
 'blow_down',
 'clear',
 'cloudy',
 'conventional_mine',
 'cultivation',
 'habitation',
 'haze',
 'partly_cloudy',
 'primary',
 'road',
 'selective_logging',
 'slash_burn',
 'water']

And here is the size of the test set:

size = len(data.test_ds.fnames)
size
1000

This is the code I am using, the same as listed earlier in this thread:

#create list for Kaggle submission
test_fnames = data.test_ds.fnames

for i in range(len(test_fnames)):
    test_fnames[i] = test_fnames[i].split("/")[1].split(".")[0]

classes = np.array(data.classes, dtype=str)
res = [" ".join(classes[np.where(pp > 0.2)]) for pp in tta[0]] 

submission = pd.DataFrame(data=res)

submission.columns = ["tags"]

submission.insert(0, 'image_name', test_fnames)

submission.to_csv(PATH+"Planet_submission_2017_12_18_01.csv", index=False)

What am I doing wrong?