Lesson 2 discussion

This was the solution: I didn’t realize that switching from flow to flow_from_directory needed the input size as an extra parameter. Thank you!!

It seems I had to go back to lesson 2 again…

The code for the third lesson loads the ‘finetune3.h5’ file obtained from the last piece of code in lesson 2. I had to skip that part because the 4-epoch training was taking too long for me to compute. This is the line of code I’m talking about:

fit_model(model, batches, val_batches, 4)

Every time I executed this piece of code before (with 2 and 3 epochs, during lesson 2) the text indicators of the process ended up freezing. For instance, the number of processed batches would grow normally until around 29500, and then everything would freeze until the end of the execution, when all the remaining verbosity would appear at once and the cell would finish correctly. Approximately, the 2-epoch training took around 15 minutes, and the 3-epoch training around 30 minutes.

However, I ran the 4-epoch training mentioned above for almost 2 hours and it didn’t finish: the notebook cell still showed the [*] execution symbol and, of course, the verbosity was frozen, as usual. As I didn’t have time, I interrupted the execution to start lesson 3 the next day. My surprise was to see that the result of this 4-epoch training is what gets loaded at the start of lesson 3, so I decided to go back to lesson 2 and spend a full night running the 4-epoch training, thinking it could be a matter of time/computational load. It has been running for almost 10 hours and it still has not finished, with the verbosity, of course, frozen. I’m not getting any error and I don’t see anything wrong; it simply seems to be stuck.

I suppose I could load the 3-epoch weights at the start of lesson 3 instead, right? However, I’m not sure it would work as expected, and I would like to know what is happening with the 4-epoch training anyway: why does it seem to be never-ending?

Can anybody help me? Thanks in advance, again.

EDIT: I found the solution. After trying everything, I simply tried to do the same in Firefox instead of Chrome. It worked perfectly, with no frozen verbosity, and the computation finished after training the 4th epoch, as expected.


There is a problem sometimes with Jupyter and Keras progress bars where the output freezes. The solution is to set verbose=0; and if you want progress bars, then pip install keras_tqdm.
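For example, a minimal sketch of what that looks like with the fit_generator call used elsewhere in this thread (assuming keras_tqdm is installed and the Keras 1 generator arguments; model, batches and val_batches come from the notebook):

from keras_tqdm import TQDMNotebookCallback

model.fit_generator(batches, samples_per_epoch=batches.n, nb_epoch=4,
                    validation_data=val_batches, nb_val_samples=val_batches.n,
                    verbose=0,                           # silence the built-in progress bar
                    callbacks=[TQDMNotebookCallback()])  # notebook-friendly tqdm progress bar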

I have an issue with the section below (1st), taken from cats_dogs_redux; it was used to generate the files, and it did work the first time around. I wanted to change the number of files in the samples, but all the ‘glob’ sections involving samples have spat the dummy. I deleted the files and folders for valid, sample train and valid, and rebooted, but I get the error below (2nd). Clearly there’s a zero value in there it doesn’t like, but the first time around it all worked. I tried to change my sample sizes for train and validation to make them bigger, and now I have a mess. It would seem easy to fix, erase all the folders and start again, but that does not work!

g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(2000): os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])


IndexError                                Traceback (most recent call last)
<ipython-input-…> in <module>()
      1 g = glob('*.jpg')
      2 shuf = np.random.permutation(g)
----> 3 for i in range(2000): os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])

IndexError: index 0 is out of bounds for axis 0 with size 0
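The IndexError means glob('*.jpg') returned an empty list, so shuf has size 0. A minimal sketch of a guarded version (assuming DATA_HOME_DIR is defined as in the notebook; the explicit directory change and the length check are additions for illustration):

import os
from glob import glob
import numpy as np

os.chdir(DATA_HOME_DIR + '/train')   # glob('*.jpg') only sees the current working directory
g = glob('*.jpg')
print(len(g))                        # if this prints 0, the files were already moved or the cwd is wrong

shuf = np.random.permutation(g)
for i in range(min(2000, len(g))):
    os.rename(shuf[i], DATA_HOME_DIR + '/valid/' + shuf[i])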


Hi, Jeremy,

I’m trying to understand the numpy correlate calculation. What I get is as follows:
The result of np.correlate([1, 2, 3], [0, 4, 5], 'same') is array([14, 23, 12]);
The result of np.correlate([1, 2, 3], [0, 4, 5], 'full') is array([ 5, 14, 23, 12, 0]).

How should I understand the result of numpy correlate? It seems the calculation is different from what you show in the table.
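For what it’s worth, a small worked version of those two results. Unlike np.convolve, np.correlate slides the second array over the first without flipping it:

import numpy as np

a = np.array([1, 2, 3])
v = np.array([0, 4, 5])

print(np.correlate(a, v, 'full'))   # [ 5 14 23 12  0]
print(np.correlate(a, v, 'same'))   # [14 23 12]  (the central len(a) entries of 'full')

# The 'full' entries written out as sliding dot products, with no kernel flip:
# 5  = 1*5
# 14 = 1*4 + 2*5
# 23 = 1*0 + 2*4 + 3*5
# 12 = 2*0 + 3*4
# 0  = 3*0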

Hey Everyone-

I’m reading through the sgd-intro notebook, and I understand everything except for the derivatives of the loss function:

d[(y-(a*x+b))**2,b] = 2 (b + a x - y) = 2 (y_pred - y)

d[(y-(a*x+b))**2,a] = 2 x (b + a x - y) = x * dloss/db

Could someone explain to me how we arrive at these partial derivatives?

Thanks.

z = y-ypred = y - (ax+b)
loss = z**2

dz/da=-x
dz/db=-1
dloss/dz=2*z

chain rule

  • dloss/da = dloss/dz * dz/da = -2zx = x * dloss/db
  • dloss/db = dloss/dz * dz/db = -2z = 2(y_pred - y)
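A quick finite-difference check of those two gradients, with made-up values for a, b, x, y:

a, b, x, y = 3.0, 2.0, 1.5, 7.0
loss = lambda a, b: (y - (a * x + b)) ** 2

eps = 1e-6
num_da = (loss(a + eps, b) - loss(a - eps, b)) / (2 * eps)  # numerical dloss/da
num_db = (loss(a, b + eps) - loss(a, b - eps)) / (2 * eps)  # numerical dloss/db

y_pred = a * x + b
print(num_db, 2 * (y_pred - y))      # dloss/db = 2*(y_pred - y)
print(num_da, x * 2 * (y_pred - y))  # dloss/da = x * dloss/db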

Thanks simoneva! That makes sense.

sys.path.insert(1, os.path.join(sys.path[0], '..'))

I’m having trouble importing utils despite using the above line. I tried variations of the above line as well.

My current sys.path is

['C:\Users\Moondra\ALL NOTEBOOKS\deep_learn_with_keras\lesson_1\…',
'…',
'C:\Users\Moondra\ALL NOTEBOOKS\deep_learn_with_keras\lesson_1\…',
'',
'…',
'…',
'…',
'…',
'c:\python27\lib\site-packages\twilio-6.3.dev0-py2.7.egg',
'c:\python27\lib\site-packages\pytz-2016.6.1-py2.7.egg',
'c:\python27\lib\site-packages\six-1.10.0-py2.7.egg',
'c:\python27\lib\site-packages\httplib2-0.9.2-py2.7.egg',
'C:\Windows\system32\python27.zip',
'c:\python27\DLLs',
'c:\python27\lib',
'c:\python27\lib\plat-win',
'c:\python27\lib\lib-tk',
'c:\python27',
'c:\python27\lib\site-packages',
'c:\python27\lib\site-packages\IPython\extensions',
'C:\Users\Moondra\.ipython',
'C:\Users\Moondra\ALL NOTEBOOKS\deep_learn_with_keras\lesson_1']

Thank you.
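In case it helps, a minimal sketch of the same idea using an absolute path (assuming utils.py sits one directory above the notebook; os.getcwd() stands in for the notebook’s directory here):

import os, sys

parent = os.path.abspath(os.path.join(os.getcwd(), '..'))  # directory assumed to contain utils.py
if parent not in sys.path:
    sys.path.insert(0, parent)

import utils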

Looking at the video from Lesson 2, at 42:06 it says

probs = probs[:, 0]
preds = np.round(1-probs)

so I’m curious, why isn’t this just preds = np.round(probs[:,1]) when the columns sum to 1 anyway?

On my data it seems to be equivalent

np.allclose(np.round(probs[:,0]), np.round(1-probs[:,1]))
> True

One-hot vs to_categorical: it depends; in some cases we want to save memory.

I tried, as an experiment, to replace

lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])

with

lm = Sequential([ Dense(1, activation='sigmoid', input_shape=(1000,)) ])
lm.compile(optimizer=RMSprop(lr=0.1), loss='binary_crossentropy', metrics=['accuracy'])

In theory, if I understood correctly, applying softmax to 2 outputs is the same as applying a sigmoid to a single output. However, my accuracy drops to about 0.87. Does anyone have any idea why this might be?

If you are using a single output, then you might have to vary the threshold used to decide between 1 and 0. This might be the reason for the decreased accuracy.
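A quick numerical check of the equivalence mentioned above, with made-up logits (not the course code): a 2-output softmax and a single sigmoid on the logit difference give the same class probability.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z1, z2 = 0.7, -1.3                     # arbitrary logits for the two classes
print(softmax(np.array([z1, z2]))[1])  # P(class 2) from the 2-output softmax
print(sigmoid(z2 - z1))                # the same value from a single sigmoid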

Sad thing always happens :confounded:. I get an ‘ImportError: cannot import name layer_from_config’ error when I run the ‘utils.py’ script. I googled it, but there is no useful information about it.

I figured it out via https://github.com/fchollet/keras/issues/5870.
My solution: from keras.layers import deserialize as layer_from_config.
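For anyone else hitting this, the change in utils.py is just the import line (assuming the original was the Keras 1 layer_from_config import):

# old Keras 1 import in utils.py:
# from keras.utils.layer_utils import layer_from_config
# Keras 2 replacement, per the linked issue:
from keras.layers import deserialize as layer_from_config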

Did anyone else attempt the CIFAR 10 competition as part of their Lesson 2 HW? I repurposed the vgg16 model from lesson 1 with some success. By tweaking parameters I got results that score in the top 50% of what was originally submitted during the competition.

Hi @rachel, after clipping is performed, do the values still hold the properties of a probability distribution, i.e. does the sum always equal 1?
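A quick check with made-up numbers suggests it depends on the case: with two complementary columns and symmetric bounds the sum stays at 1, but in general clipping alone does not renormalise.

import numpy as np

two_class = np.array([0.99, 0.01])
print(np.clip(two_class, 0.05, 0.95).sum())    # 1.0  (0.95 + 0.05)

three_class = np.array([0.98, 0.01, 0.01])
print(np.clip(three_class, 0.05, 0.95).sum())  # 1.05 -- no longer sums to 1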

Has anyone else looked specifically at the image file for cat.4688.jpg?

During my visual review of what the model was getting wrong, I stumbled upon this and I am wondering if this file is included in the training set in error by Kaggle?

File Oddities:

  1. It appears to be a logo for “Planned Pethood Inc.” (it is a circular logo and not a photo)
  2. The logo has a cartoon drawing with both a cat AND a dog on it. The drawing of the dog is in fact bigger than the cat.
  3. It appears in the training directory for cats.

I found this by accident: it was included as one of the random files in the validation set I created by following along with Jeremy’s lesson on moving a validation set out of the training set, and it showed up when I visualized a few images the model had labelled incorrectly.

Has anyone else seen this image?


I have a strange accuracy evolution during the epoch: accuracy increases to 0.99 at the beginning and then falls to 0.5 at the end.
Here is my code:

from vgg16 import Vgg16
vgg = Vgg16()
model = vgg.model
model.pop()
for layer in model.layers: layer.trainable = False
model.add(Dense(2, activation='softmax'))
model.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=["accuracy"])
val_batches = get_batches(valid_path, batch_size=64, shuffle=False)
batches = get_batches(train_path, batch_size=64, shuffle=False)
model.fit_generator(batches, samples_per_epoch=batches.n, nb_epoch=1,
                    validation_data=val_batches, nb_val_samples=val_batches.n)

What am I doing wrong?

I realized what’s going wrong: shuffle=False ruins it all. The model first trains only on class 1, and then only on class 2.
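For reference, a sketch of the corrected calls (the only change is shuffling the training batches; validation can stay unshuffled):

batches = get_batches(train_path, batch_size=64, shuffle=True)      # shuffle so each mini-batch mixes both classes
val_batches = get_batches(valid_path, batch_size=64, shuffle=False)

model.fit_generator(batches, samples_per_epoch=batches.n, nb_epoch=1,
                    validation_data=val_batches, nb_val_samples=val_batches.n)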
