Lesson 1 discussion

Unable to load the weights.
Getting the following error.

IOError                                   Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 vgg = Vgg16()

C:\Users\Sai Kiran\courses\deeplearning1\nbs\vgg16.pyc in __init__(self)
     31     def __init__(self):
     32         self.FILE_PATH = 'http://www.platform.ai/models/'
---> 33         self.create()
     34         self.get_classes()
     35

C:\Users\Sai Kiran\courses\deeplearning1\nbs\vgg16.pyc in create(self)
     81
     82         fname = 'vgg16.h5'
---> 83         model.load_weights(get_file(fname, self.FILE_PATH+fname, cache_subdir='models'))
     84
     85

C:\Users\Sai Kiran\Anaconda2\lib\site-packages\keras\engine\topology.pyc in load_weights(self, filepath, by_name)
   2693         '''
   2694         import h5py
-> 2695         f = h5py.File(filepath, mode='r')
   2696         if 'layer_names' not in f.attrs and 'model_weights' in f:
   2697             f = f['model_weights']

C:\Users\Sai Kiran\Anaconda2\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
    270
    271         fapl = make_fapl(driver, libver, **kwds)
--> 272         fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
    273
    274         if swmr_support:

C:\Users\Sai Kiran\Anaconda2\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
     90         if swmr and swmr_support:
     91             flags |= h5f.ACC_SWMR_READ
---> 92         fid = h5f.open(name, flags, fapl=fapl)
     93     elif mode == 'r+':
     94         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py\_objects.pyx in h5py._objects.with_phil.wrapper (C:\Minonda\conda-bld\h5py_1474482483473\work\h5py\_objects.c:2705)()

h5py\_objects.pyx in h5py._objects.with_phil.wrapper (C:\Minonda\conda-bld\h5py_1474482483473\work\h5py\_objects.c:2663)()

h5py\h5f.pyx in h5py.h5f.open (C:\Minonda\conda-bld\h5py_1474482483473\work\h5py\h5f.c:1951)()

IOError: Unable to open file (Truncated file: eof = 41140224, sblock->base_addr = 0, stored_eoa = 553482496)

It looks like the vgg16.r5 file did not download properly

Go to your home folder. If you have run a Keras program at least once, there will be a hidden .keras folder containing a models folder; delete the file in that models folder. Then either run the program again, or download the file manually and put it in the models folder.
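If you're on Linux or a Mac, the cached file can be removed from the shell (a sketch: ~/.keras/models is Keras's default download cache, so adjust the path for Windows):

```shell
# Remove the cached, truncated weights so Keras re-downloads them on the
# next run. ~/.keras/models is the default Keras download cache location.
rm -f "$HOME/.keras/models/vgg16.h5"
```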



When running the notebook file on the downloaded dataset, I’m getting an accuracy of only around 50%, not 98-99%. The only change I made, to avoid memory complications, was reducing the batch size to 1. (It still runs in half an hour on my computer, and I’m fine with that speed.)

Why is the accuracy so poor?

I am trying to submit my CSV to Kaggle, but I’m not sure what to send.


In the image we can see that the next command returns imgs and labels. The lecture says that imgs is a set of indices, but I could not understand what kind of index. For instance, the first number, 175: I could not find it as part of the file names in the cats or dogs directory. If it is part of a counter, then the sample directory does not have 175 image files. In imgs I have many ids but only 4 labels. How do I associate the two? So, what does this index (id) mean?
Regards.


In Kaggle competitions you can go to the data page and download an example of a submission to understand what Kaggle wants. The data page also describes the format of the submission.

In the dogs and cats competition, you’ll see that Kaggle wants two columns, one for the id and one for the label (0 or 1).

In the dogs and cats redux competition, you’ll see that Kaggle wants probabilities that the image is a dog.

The numbers you’ve displayed are pixel values, not IDs. I’m guessing that your batch size is 4, and so the imgs variable contains a list of four images (i.e. four 3D arrays of pixel values).

If you want the ID of each image, you’ll need to look in batches.filenames.

Here’s a mini-guide on making a CSV for a Kaggle competition.
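In short, a minimal sketch of building such a file (the filenames, probabilities, and clipping bounds here are made-up placeholders; the id is parsed from filenames like unknown/1234.jpg):

```python
import numpy as np

# Hypothetical stand-ins for batches.filenames and the model's
# dog-probabilities; clip away from 0/1 to keep the log loss finite.
filenames = ["unknown/1.jpg", "unknown/10.jpg"]
probs = np.array([0.9999, 0.0001])

# Parse the numeric id out of each filename.
ids = [int(f.split("/")[-1].split(".")[0]) for f in filenames]
clipped = np.clip(probs, 0.05, 0.95)

# Write the two columns Kaggle expects: id and label (probability).
with open("submission.csv", "w") as out:
    out.write("id,label\n")
    for i, p in zip(ids, clipped):
        out.write("%d,%.5f\n" % (i, p))
```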


P.S.

imgs[0] selected the first image, imgs[0][0] selected the first color channel of the first image, and imgs[0][0][0] selected the first row (or maybe column) of the first color channel of the first image. imgs[0][0][0][0] would select the first pixel value of the first row/column of the first color channel of the first image.
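The nesting above can be checked with a dummy batch (shapes assumed from the lesson’s 3×224×224 image layout):

```python
import numpy as np

# A pretend batch of 4 images, each with 3 channels of 224x224 pixels.
imgs = np.zeros((4, 3, 224, 224))

print(imgs[0].shape)        # (3, 224, 224): the first image
print(imgs[0][0].shape)     # (224, 224): its first color channel
print(imgs[0][0][0].shape)  # (224,): the first row of that channel
print(imgs[0][0][0][0])     # 0.0: a single pixel value
```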


@Mathew, thanks for the tips. Now I understand the scenario.

Is it vgg16.r5 or vgg16.h5? I have vgg16.h5 in the directory, and I also only see vgg16.h5 at www.platform.ai/models.
Kindly confirm.

Yeah, it’s vgg16.h5; I misspelled it.


Hi,

My notebook kernel always dies at the following cell:

vgg = Vgg16()

Grab a few images at a time for training and validation.

NB: They must be in subdirectories named based on their category

batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)

How can I prevent this?

—EDIT—
More specifically, it crashes every time it loads the Vgg16 class, specifically when it adds a new layer to the model:

model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))

—EDIT2—
So this seems to happen only with Theano. When switching Keras to TF, it doesn’t crash but I do get the following error after tweaking the code for compatibility:

ValueError: Dimension 1 in both shapes must be equal, but are 63 and 64 for 'Assign_130' (op: 'Assign') with input shapes: [128,63,3,3], [128,64,3,3].

Any ideas?

The (Sequential) model by default infers the category of each image from the subdirectory structure and learns from that. If you don’t have subdirectories, you can specify the classes yourself.

I had the same issue on a t2.large machine and solved it as follows:
pip install --upgrade keras

This updated Keras from 1.1 to 1.2.
Then I restarted the notebook kernel (in the notebook menu bar: Kernel → Restart) and everything worked fine.


I’ve set up the p2.xlarge instance, and it’s working like a charm.

However, I’d like to set up a t2.micro server to do testing on, as Jeremy mentioned.

Can I run the setup_t2.sh script, replacing t2.xlarge with t2.micro, or will I have to make a new instance myself and manually install anaconda/theano/keras/jupyter notebooks/git? Thanks!

Hello,

I had the same problem today, and it seems that on my system (I’m running on my own MacBook Pro, mid-2016) it was because the version of Theano was too old. I updated it to v0.9 and it worked.

When you switched to TensorFlow, did you make the switch in the keras.json configuration file?
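For reference, a typical Keras 1.x ~/.keras/keras.json looks roughly like this (the "backend" and "image_dim_ordering" values are the ones to swap when changing backends; treat the exact fields as version-dependent):

```json
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
```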

Hope it can help!

Hi @jeremy,
I am new here and have been working on lesson 1. Please forgive me if my questions are too silly, but I couldn’t find answers to them. I’ve got a few:
Q1. When you split the data into dogs and cats, first you move 2,000 random files and then split them. What I did was move 1,000 cats and 1,000 dogs and then put them into folders. The question is: is it good practice to have an equal number of files per class for training?

Q2. In lesson 2 at about 27:50 timing you talk about first five probabilities being 0 and 1, which is not the case for me. Would this have to do anything with the way files are sorted?

Q3. I get the point of rounding down the edge predictions for Kaggle, however, this does not actually make our model better by itself. Am I missing something?

Q4. You use isCat in the Excel example, but Kaggle asks you to predict whether the image is a dog. Is your Excel example the opposite of what Kaggle is asking for, or am I completely missing something?

Thank you!


I am getting an error below when running log_loss

ValueError                                Traceback (most recent call last)
<ipython-input-36-9477dcf19a16> in <module>()
      8 
      9 x = [i*.0001 for i in range(1,10000)]
---> 10 y = [log_loss([1],[[i*.0001,1-(i*.0001)]],eps=1e-15) for i in range(1,10000,1)]
     11 
     12 plt.plot(x, y)

/home/ubuntu/anaconda2/lib/python2.7/site-packages/sklearn/metrics/classification.pyc in log_loss(y_true, y_pred, eps, normalize, sample_weight, labels)
   1620             raise ValueError('y_true contains only one label ({0}). Please '
   1621                              'provide the true labels explicitly through the '
-> 1622                              'labels argument.'.format(lb.classes_[0]))
   1623         else:
   1624             raise ValueError('The labels array needs to contain at least two '

ValueError: y_true contains only one label (1). Please provide the true labels explicitly through the labels argument.

This way, my submission.csv file is not formatted properly. How can I fix log_loss(), or is there another way to fix this issue? It looks like this format is affecting my score on Kaggle:

id,label
1,0.95000
10,0.05000
100,0.05000
1000,0.95000
10000,0.95000
10001,0.05000
10002,0.05000
10003,0.95000
10004,0.95000
10005,0.05000
10006,0.05000
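The ValueError itself can be avoided by doing what the message suggests: pass the full label set explicitly, since y_true here contains only the class 1 (a sketch; note that newer scikit-learn versions may not accept the eps argument):

```python
import math
from sklearn.metrics import log_loss

# labels=[0, 1] declares both classes even though y_true has only one,
# so log_loss no longer raises. The loss is -ln(p) of the true class.
loss = log_loss([1], [[0.2, 0.8]], labels=[0, 1])
print(abs(loss - (-math.log(0.8))) < 1e-6)  # True
```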

Question:

Kaggle-CLI …

Where can you find the competition name? I at first thought I created my own name, but then the ‘download’ command doesn’t make much sense…

$ kg download -u `username` -p `password` -c `competition`

@Josh.Zastrow

The competition name is the last segment of the URL of the competition page.

For example:
Competition URL: https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition
Competition name: dogs-vs-cats-redux-kernels-edition

Note:
You must accept a competition’s rules before you can use kaggle-cli with that competition. To do this, log in to kaggle.com, go to the competition’s data page, and click one of the download links; you’ll then be prompted to accept the rules.

If you don’t accept the terms, kaggle-cli will download the HTML files of the competition’s rules page instead of the data.


Instead of using ZeroPadding2D we can set the border_mode argument in Convolution2D to “same”.

For example, instead of:

ZeroPadding2D((1, 1)),
Convolution2D(64, 3, 3, activation="relu"),
ZeroPadding2D((1, 1)),
Convolution2D(64, 3, 3, activation="relu"),
MaxPooling2D()

we can write:

Convolution2D(64, 3, 3, border_mode="same", activation="relu"),
Convolution2D(64, 3, 3, border_mode="same", activation="relu"),
MaxPooling2D()

“same” refers to the fact that the output images will have the same shape as the input images. The default value for the border_mode argument is “valid”.
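The shape claim can be checked with the standard convolution size formula, out = (n + 2p − k)//s + 1 (a quick sketch, independent of Keras):

```python
def conv_out_size(n, k, pad, stride=1):
    """Spatial output size of a convolution layer."""
    return (n + 2 * pad - k) // stride + 1

# "valid" (no padding): a 3x3 kernel shrinks 224 -> 222.
print(conv_out_size(224, 3, pad=0))  # 222
# "same" behavior: ZeroPadding2D((1, 1)) supplies pad=1, keeping 224.
print(conv_out_size(224, 3, pad=1))  # 224
```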