Planet Classification Challenge

(Nafiz Hamid) #62

Do you have the latest code from the fastai repo? I would suggest doing a git pull and seeing if the error goes away. Jeremy seemed to mention that he got rid of the OpenCV library, which appears in your error.

(Vikrant Behal) #63

I’m on the latest code. BTW, Jeremy brought back opencv :slight_smile:

I’ll try to restart AWS and see if that helps. I had previously tried restarting kernel.

(Vikrant Behal) #64

Restarting AWS did help.

(Nafiz Hamid) #65

That is great. Glad it worked.

(Vikrant Behal) #66

We resized images in the lesson 2 notebook while the size was 64:
data = data.resize(int(sz*1.3), 'tmp')

But for sizes 128 and 256 we provide a new dataset and don't resize. Any insight on this?

(WG) #67

I was just about to ask the same question.

We do …

img_sz = 64
data = get_data(arch, img_sz, val_idxs)
data = data.resize(int(img_sz * 1.3), 'tmp') # this creates /tmp/83

… and then resize to 128 and then to 256.

BUT, if you look in the file system, you’ll only see a tmp/83 folder with your resized images from the above line of code. It seems that when we resize to 128 we are resizing the previously downsized images we saved as 64x64 images … and also when we resize to 256, that we are again resizing from the 64x64 images.

Is that right?

If it is, for some reason, it feels wrong to be building bigger images from previously downsized images instead of using the original sizes to do the 128 and 256 sized images.

(Jeremy Howard (Admin)) #68

Actually looking again, that’s not what we’re doing - we’re creating the dataset again from scratch, not using the resized images. So I think it’s fine.

(WG) #69

Ok … that makes sense looking at the code again.

I take it then that the call to resize to 128 and 256 acts against the original sized images in this case.

If, on the other hand, we didn’t make another call to get_data(), we would have upscaled the 64x64 images to 128 and 256.
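The difference is easy to see without fastai at all. Here's a minimal numpy sketch (nn_resize is a hypothetical nearest-neighbour helper standing in for the real resizing): once you go through a 64x64 intermediate, the 128x128 result can only contain the smaller image's information.

```python
import numpy as np

def nn_resize(img, size):
    # nearest-neighbour resize of a square 2-D array (toy stand-in for image resizing)
    idx = np.arange(size) * img.shape[0] // size
    return img[np.ix_(idx, idx)]

rng = np.random.default_rng(0)
original = rng.random((256, 256))            # stand-in for a full-resolution image

# Calling get_data again at sz=128 resizes from the originals ...
direct_128 = nn_resize(original, 128)

# ... whereas reusing the 64x64 copies would upscale them instead
via_64 = nn_resize(nn_resize(original, 64), 128)

# direct_128 keeps 128*128 distinct pixel values; via_64 only has 64*64
print(len(np.unique(direct_128)), len(np.unique(via_64)))
```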

(Jeremy Howard (Admin)) #70

Exactly right.

(Vikrant Behal) #71

Should I spin up a p3.xlarge?

The last step of lesson 2 is taking ~2 hours! :frowning:

(ecdrid) #72

Try Spot Instances…

(Vikrant Behal) #73

What is the total number of items in your test folder? I'm seeing a test count mismatch: I have 40669 images, but Kaggle reports a different count when I try to submit.

(James Requa) #74

@vikbehal For this competition there is an additional test set folder, test-jpg-additional.tar.7z. One option is to consolidate the images from both test set folders into one folder. You can refer to the data page at the Kaggle competition website for details.
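Consolidating the two folders is just a file copy; a minimal sketch (the folder names match the competition archives, but the temp-directory setup below is only a self-contained demo):

```python
import shutil
import tempfile
from pathlib import Path

# Demo setup: in practice these two folders come from extracting
# test-jpg.tar.7z and test-jpg-additional.tar.7z
root = Path(tempfile.mkdtemp())
for name, count in [('test-jpg', 3), ('test-jpg-additional', 2)]:
    folder = root / name
    folder.mkdir()
    for i in range(count):
        (folder / f'{name}_{i}.jpg').touch()

# Consolidate both test folders into a single one
merged = root / 'test-all'
merged.mkdir()
for src in ('test-jpg', 'test-jpg-additional'):
    for f in (root / src).glob('*.jpg'):
        shutil.copy(f, merged / f.name)

print(len(list(merged.glob('*.jpg'))))  # 5 in this demo
```

With the real data the final count should match the number Kaggle expects for a submission.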

(Miguel Perez Michaus) #75

@binga, in your code you have data augmentation + precompute=True… so tfms is ignored, isn’t it? (I don’t know if that’s what you intended.)

I have been able to “reproduce your reproducibility” :wink:, but only with precompute=True. With precompute=False I’m not getting the same results, even if I paste all three lines of seed code before each block of code.

Have you managed to achieve reproducibility with precompute=False?
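For what it's worth, seeding only makes the RNGs you actually seed deterministic; with precompute=False the GPU convolution algorithms can themselves be nondeterministic, so identical seeds may still give different results. The basic seeding mechanics look like this (numpy only here; a real fastai run would also need the torch and Python random seeds set):

```python
import numpy as np

def run_once(seed):
    # re-seed immediately before the stochastic step, as the pasted seed code does
    np.random.seed(seed)
    return np.random.rand(5)

a = run_once(42)
b = run_once(42)
print(np.array_equal(a, b))  # True: same seed, same draws
```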

(urmil) #76

The P3 instance did not work for me. I think it is a CUDA version issue. Let me know if you get it to work.

(Vikrant Behal) #77

That’s weird. My knowledge is limited, but shouldn’t both p2 and p3 run without any issue if using the fastai AMI?

(ecdrid) #78

Jeremy said that the AMI supports p2 instances only… (if I remember correctly)

(Phani Srikanth) #79

Yes, with data augmentation + precompute=True, tfms is ignored. And I don’t think I intended to do that. However, on second thought, maybe I wanted to start with a couple of epochs training only the final layer, then turn off precompute, start augmentations, and tweak the initial layers. Damn! I missed this point while I built my network. These models always teach us something more and we keep trying :smiley:

However, let me try again with precompute=False and get back to you.

@uvs The ami wouldn’t work with P3 since P3 instances with Volta GPUs need CUDA 9 and the AMI that @jeremy built for us contains CUDA 8 IIRC.

Edit: Striking off incorrect details.

(Jeremy Howard (Admin)) #80

Our AMI does use CUDA 9, but I believe p3 requires a separate AMI. However, you can easily create your own, by using the Amazon deep learning AMI, installing anaconda, cloning fastai repo, and doing conda env update.

(Phani Srikanth) #81

Oops, didn’t realize that! Apologies.