Lesson 1 discussion


(Dejan Mircevski) #408

@geniusgeek, please comment on the linked pull request, as that’s the appropriate place to discuss whether that code change should happen or not. (Also make sure to read the carefully written justification there.)


(Liberty) #409

Could someone please further explain what ‘preds’ does in this line of code?

test_batches, preds = vgg.test(path + 'test', batch_size=batch_size*2)

(Himanshu) #410

Hi Everyone,
@jeremy thanks for this wonderful MOOC. I learned more in just 2 weeks with lessons 1 and 2 than I did from the internet in the last 1.5 months. Really appreciate what you are doing.

Here is my first question.
I have completed Lesson 1 and successfully submitted Cats vs Dogs Redux on Kaggle.
Now I am working on Leaf Classification competition (https://www.kaggle.com/c/leaf-classification).
There are 2 datasets. One is a .csv of pre-extracted features and the other is the actual images (only 990 images for 99 classes). So far I made an MLP (3 hidden layers) with the pre-extracted features (.csv) and achieved 96%+ accuracy on validation. Then I made a simple CNN for the image dataset which gives very poor results (~2% accuracy).
Now I am trying to combine the MLP and CNN so that I can use both the .csv data and the image data in a single model. I used Keras Merge (https://keras.io/getting-started/sequential-model-guide/#the-merge-layer) to combine the 2 models, but when I run .fit the combined model gives around 1% accuracy on validation. The combined model should at least match the MLP's accuracy.
I don’t know where I am going wrong.
Help is much appreciated.

image_model = Sequential()

image_model.add(ZeroPadding2D(padding=(1,1), input_shape=(n_color_channel, image_height, image_width)))
image_model.add(Convolution2D(8,5,5, activation='relu'))
image_model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

image_model.add(ZeroPadding2D(padding=(1,1)))
image_model.add(Convolution2D(32,5,5, activation='relu'))
image_model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

image_model.add(Flatten())

numeric_model = Sequential()
numeric_model.add(Dense(1024, activation='relu', input_shape=(192,)))
numeric_model.add(Dropout(0.25))
numeric_model.add(Dense(512, activation='relu'))
numeric_model.add(Dropout(0.25))
numeric_model.add(Dense(512, activation='relu'))
numeric_model.add(Dropout(0.25))

concat = Merge([image_model, numeric_model], mode='concat')

final = Sequential()
final.add(concat)
final.add(Dense(512, activation='relu'))
final.add(Dropout(0.5))
final.add(Dense(n_classes, activation='softmax'))

final.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
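For reference, Merge with mode='concat' simply stacks the two feature vectors side by side before the final dense layers see them. A minimal NumPy sketch of that step (the shapes here are hypothetical, not taken from the model above):

```python
import numpy as np

# Hypothetical feature shapes: a batch of 4 samples, with 128 flattened CNN
# features and 512 MLP features per sample.
image_features = np.ones((4, 128))
numeric_features = np.zeros((4, 512))

# mode='concat' joins the features along the last axis, so the final dense
# layers see one 640-wide vector per sample.
merged = np.concatenate([image_features, numeric_features], axis=1)
print(merged.shape)  # (4, 640)
```

One thing worth double-checking with a merged model: .fit must receive both inputs (e.g. a list of the image array and the numeric array) in the same sample order, or the two branches will see mismatched rows.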

(Christopher) #411

If you look at vgg16.py you will find the definition for vgg.test().

def test(self, path, batch_size=8):
    test_batches = self.get_batches(path, shuffle=False, batch_size=batch_size, class_mode=None)
    return test_batches, self.model.predict_generator(test_batches, test_batches.nb_sample)

preds is an array of predictions produced by Keras's "predict_generator", which you can read about here (near the bottom):
https://keras.io/models/sequential/

Since we are working with batches (the dataset cannot be processed in one go), Keras uses a "generator" to process each batch and populates the preds array with the results.
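Conceptually (a simplified sketch on my part, not Keras's actual implementation, which also handles queuing and threading), predict_generator does something like:

```python
def predict_generator_sketch(predict_batch, batches, nb_sample):
    """Draw batches from a generator and collect predictions until
    nb_sample predictions have been gathered."""
    preds = []
    for batch in batches:
        preds.extend(predict_batch(batch))
        if len(preds) >= nb_sample:
            break
    return preds

# Toy usage: an "identity" model over two batches of fake inputs.
batches = iter([[1, 2, 3], [4, 5]])
print(predict_generator_sketch(lambda b: b, batches, nb_sample=5))  # [1, 2, 3, 4, 5]
```

That is why test() passes test_batches.nb_sample as the second argument: it tells Keras when to stop drawing from the (otherwise endless) generator.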


(Liberty) #412

I’d like to submit probabilities as answers with two classes. The following code is close, but the output of ‘isdog’ (the label column) is in the wrong format.

import pandas as pd
isdog = preds[:,1]
filenames = test_batches.filenames
id = [filename.split('/')[1].split('.')[0] + ".ppm" for filename in filenames]
pairs = zip(id, isdog)
output = pd.DataFrame(data = pairs, columns=['id', 'label'])
output.to_csv(path + 'dogscatsredux.csv', index=False, quoting=3)

For example, it outputs:
1.172318e-08
1.338704e-03
1.234915e-06
2.200780e-13
1.000000e+00

Can anyone explain what this output is? And how to fix the code so that the number formatting is correct? Thank you!


(Liberty) #413

Thank you, this is very helpful.


(Matthew Kleinsmith) #414

Take a look at these Python examples:

In [4]: format(1.172318e-08, '.20f')
Out[4]: '0.00000001172318000000'

In [5]: format(1.338704e-03, '.20f')
Out[5]: '0.00133870400000000002'

The “N” in “e-N” means you move the decimal over to the left N times (i.e. divide by 10^N).

Kaggle will likely interpret the numbers correctly even if they’re in the e-N form.


Edit:

Python 3.5 writes 1.338704e-03 to files as “0.001338704”

In [15]: with open("test", 'w') as f:
    ...:     print(1.338704e-03, file=f)
    ...:  
[01:21:42] mwk@mwk-ws:~$ cat test
0.001338704
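If you do want fixed-point notation in the file itself, one option (my suggestion, not from the notebook) is to pass float_format to to_csv, e.g. output.to_csv(path + 'dogscatsredux.csv', index=False, float_format='%.6f'). The same formatting in plain Python:

```python
# '%.Nf' forces fixed-point notation with N decimal places instead of
# scientific ("e-N") notation.
print('%.6f' % 1.338704e-03)   # 0.001339
print('%.12f' % 1.172318e-08)  # 0.000000011723
```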

(Raj) #415

awesome… never noticed that little 'upload' button… :slight_smile: thanks!


#416

Hi,
I’d like to know what’s the format of the numpy array ‘preds’ returned by model.predict_generator() in keras.
It returns an array with 2 columns and one line per image. How do I know which column represents ‘cats’ and ‘dogs’? Is it alphabetical order? (In which case column zero is for cats and column one is for dogs)


#417

Well, with predictions you can never be sure which is which :slight_smile: You need to test it with get_batches to see.

IIRC, the generator that get_batches returns has classes and filenames attributes. You can compare the output from both and see which type of animal your notebook considers to be [0 1] and which [1 0].
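My understanding (an assumption worth verifying against class_indices on your own batches) is that Keras assigns class indices alphabetically by subdirectory name, something like:

```python
def class_indices(class_names):
    """Map class names to prediction-column indices the way Keras's
    flow_from_directory does: sorted (alphabetical) order."""
    return {name: i for i, name in enumerate(sorted(class_names))}

print(class_indices(['dogs', 'cats']))  # {'cats': 0, 'dogs': 1}
```

So for dogs-vs-cats, column 0 of preds would be 'cats' and column 1 'dogs'.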


(Armand Botha) #418

Hi there,

I’m not sure if this is the correct forum thread but since the first week deals with the setup I figured I’d ask my question here (feel free to move it if there is a more appropriate thread).

So I just started with the MOOC and not being in the US I didn’t want to go with an Amazon account (the exchange rate makes it quite expensive) so I figured I could do the first testing on my local machine and only when I get to the point where I want to run through the whole data set, I’d create an Amazon account and spin up the server.

However I'm having issues just running through the sample dataset. I created a sample of 100 cats and 100 dogs as suggested in the video. I started out with a batch size of 64 and went down to 32, 16, and finally 8. Here is my code:

# Grab the images in batches for the training and validation process.
training_batches = vgg.get_batches(path + 'train', batch_size=batch_size)
validation_batches = vgg.get_batches(path + 'valid', batch_size=batch_size * 2)
vgg.finetune(training_batches)
vgg.fit(training_batches, validation_batches, nb_epoch=1)
print("Done!")

I’m expecting to see the “Done!” message printed out, but the script never seems to reach this point. I just get the following and then nothing:
Found 160 images belonging to 2 classes.
Found 40 images belonging to 2 classes.
Epoch 1/1

I do get the following warning when I run the code, so I’m suspecting a setup fault on my part:
Anaconda2\lib\site-packages\keras\layers\core.py:622: UserWarning: `output_shape` argument not specified for layer lambda_6 and cannot be automatically inferred with the Theano backend. Defaulting to output shape `(None, 3, 224, 224)` (same as input shape). If the expected output shape is different, specify it via the `output_shape` argument. .format(self.name, input_shape))

Unfortunately googling the warning did not help me much.

I did change Keras to use Theano, but other than that I’m using default configurations on everything. Here is what my keras.json file looks like:
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}

I don’t think the issue is my hardware since my GPU is fairly new; I had to replace it last year and went with a Nvidia 1060.

Any idea what I might be missing?


(Rothrock) #419

@armand – I think that is a warning you get from using a newer version of Keras (or Theano) than what is in the course. It isn't actually an error. It is just saying that you haven't specified an output shape, so it isn't sure whether it should use the same shape as the input. There should be an argument like input_shape=(3, 224, 224) – these are 3-channel images of 224 x 224 pixels. If you add output_shape=(3, 224, 224) the warning will go away.

But since it is just a warning and not an actual error, that isn't the problem. And yes, a 1060 card should be able to do this.

The keras.json file looks right. Have you configured your .theanorc file to use the gpu?
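For reference, a minimal .theanorc that selects the GPU looks something like this (a sketch; the exact flags can differ between Theano versions, so treat it as a starting point):

```
[global]
device = gpu
floatx = float32

[nvcc]
fastmath = True
```

You can check what Theano is actually using with: python -c "import theano; print(theano.config.device)"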


#420

Thanks for your answer radek! It wasn’t exactly that but it helped me a lot finding what I needed.
Instead of returning something of the form [0 1], it only returns one number (0 or 1), which I believe corresponds to the index of the class (so presumably with 3 classes it could return 0, 1, or 2). An even cleaner way to find out which index belongs to which class is to look at class_indices, contained in the generator that get_batches() returns.

Thanks for your help once again!


(Jeremy Howard (Admin)) #421

Lesson 7 shows how to combine models like this.


(Himanshu) #422

Thanks for the pointer.


#423

Hi guys,
I have downloaded the file dogscats.zip on an AWS P2 server, but I cannot unzip it. I can't even install unzip.
What should I do to unzip this file?
I heard about Kaggle CLI; should I install it?


(RENJITH MADHAVAN) #424

what is the error you are getting ?


#425

here is my problem:

ubuntu@ip-10-0-0-5:~/nbs/data$ unzip -q dogscats.zip
The program 'unzip' is currently not installed. You can install it by typing:
sudo apt install unzip
ubuntu@ip-10-0-0-5:~/nbs/data$ sudo apt install unzip
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package unzip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'unzip' has no installation candidate


(Jose Luis Ricon) #426

Try running sudo apt-get update && sudo apt-get upgrade first to see if that works.


#427

OMG that worked!
thank you so much guys