Lesson 1 discussion

Very helpful! I figured there was some way … the lecture made it seem like it was inferred somehow.

@wgpubs

Here’s how I found out how to change the classes:

  1. I looked inside vgg.predict because that was the method of interest. Its definition is found in vgg16.py.
  2. I saw that the classes variable was based on self.classes.
  3. I noticed that __init__ calls the get_classes method.
  4. The get_classes method downloads a JSON file with the ImageNet class names and uses it to set self.classes (sketched below).

I had to look at an example from vgg.predict to know whether “cat” or “dog” should be first in vgg.classes.
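For anyone else digging around, the chain bottoms out in something like this (paraphrased from vgg16.py from memory, so treat it as a sketch rather than the exact source):

def get_classes(self):
    # download the ImageNet class-index file once (keras's get_file caches it)
    fname = 'imagenet_class_index.json'
    fpath = get_file(fname, self.FILE_PATH + fname, cache_subdir='models')
    with open(fpath) as f:
        class_dict = json.load(f)
    # keys are "0".."999"; each value is [wordnet_id, human_readable_name]
    self.classes = [class_dict[str(i)][1] for i in range(len(class_dict))]

As far as I can tell, the class order for your own data comes from Keras's flow_from_directory, which sorts the class folders alphabetically, which is why “cats” ends up before “dogs”.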


Hi all,

Does anyone know where I can find the homework assignments for each lesson?

I watched to the end of the Lesson 1 video, but I didn’t hear any mention of the homework in the video itself (maybe I wasn’t paying close enough attention?).

Thank you!

@Mr_Gradient_Descent

You can find them on the lesson wiki pages.

For example, http://wiki.fast.ai/index.php/Lesson_1#Overview_of_homework_assignment


@Matthew

Thank you, that wiki seems like a great resource! I’ll take a look at the overview.

Much appreciated.

Is there a video and/or discussion about the last part of the Lesson 1 notebook?

The notebook has a section on building your own model, but there’s nothing about it in the Lesson 1 video, notes, or wiki. I’m digging through this part, trying to get a real handle on how to code a model myself.

Thanks

One more question. I’m working on the Dogs and Cats Redux competition. When I run the notebook on all the data, I get the large error message shown below; if I just run it on the sample data, it works fine. So either I have the directory structure wrong, or something else is going on that I don’t understand. I believe I set up the directories correctly: I followed the layout in the Dogs and Cats Redux notebook and ran the scripts in that notebook to move the data and create the associated folders. Thanks ahead of time.

Found 23000 images belonging to 6 classes.
Found 2000 images belonging to 2 classes.
Epoch 1/1
22976/23000 [============================>.] - ETA: 0s - loss: 0.1479 - acc: 0.9626

Exception                                 Traceback (most recent call last)
<ipython-input> in <module>()
      5 val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
      6 vgg.finetune(batches)
----> 7 vgg.fit(batches, val_batches, nb_epoch=1)

/home/ubuntu/nbs/lesson1/vgg16.pyc in fit(self, batches, val_batches, nb_epoch)
    115     def fit(self, batches, val_batches, nb_epoch=1):
    116         self.model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=nb_epoch,
--> 117                 validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
    118
    119

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/models.pyc in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, **kwargs)
    872                                max_q_size=max_q_size,
    873                                nb_worker=nb_worker,
--> 874                                pickle_safe=pickle_safe)
    875
    876     def evaluate_generator(self, generator, val_samples, max_q_size=10, nb_worker=1, pickle_safe=False, **kwargs):

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe)
   1469                     val_outs = self.evaluate_generator(validation_data,
   1470                                                        nb_val_samples,
-> 1471                                                        max_q_size=max_q_size)
   1472                 else:
   1473                     # no need for try/except because

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in evaluate_generator(self, generator, val_samples, max_q_size, nb_worker, pickle_safe)
   1552                                 'or (x, y). Found: ' + str(generator_output))
   1553             try:
-> 1554                 outs = self.test_on_batch(x, y, sample_weight=sample_weight)
   1555             except:
   1556                 _stop.set()

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in test_on_batch(self, x, y, sample_weight)
   1251         x, y, sample_weights = self._standardize_user_data(x, y,
   1252                                                            sample_weight=sample_weight,
-> 1253                                                            check_batch_dim=True)
   1254         if self.uses_learning_phase and type(K.learning_phase()) is not int:
   1255             ins = x + y + sample_weights + [0.]

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, class_weight, check_batch_dim, batch_size)
    963                                    output_shapes,
    964                                    check_batch_dim=False,
--> 965                                    exception_prefix='model target')
    966         sample_weights = standardize_sample_weights(sample_weight,
    967                                                     self.output_names)

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in standardize_input_data(data, names, shapes, check_batch_dim, exception_prefix)
    106                     ' to have shape ' + str(shapes[i]) +
    107                     ' but got array with shape ' +
--> 108                     str(array.shape))
    109     return arrays
    110

Exception: Error when checking model target: expected dense_13 to have shape (None, 6) but got array with shape (128, 2)

@cmeff1 The Cats and Dogs competition should have 2 classes: cats and dogs.

Found 23000 images belonging to 6 classes.

This looks incorrect. I believe you’ll need to check your training directory structure. It should be more along the lines of your validation set: Found 2000 images belonging to 2 classes.

The number of classes is the number of folders in your train or valid directory.
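A quick way to confirm (the path below is hypothetical; point it at your own data directory):

import os
path = 'data/dogscats/'            # hypothetical; use your own path
print(os.listdir(path + 'train'))  # should list only 'cats' and 'dogs'
print(os.listdir(path + 'valid'))  # any extra folder here becomes an extra class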


Gotcha I’ll check that out. Thanks for the feedback.

I could use a little help in completing Lesson 1 homework.

I created a notebook by following Jeremy’s instructions in Lesson 2, where he explains how he would complete the Lesson 1 homework.
The first problem I ran into was a “no directory found” error when running the first vgg.get_batches call under the Fine Tune section.
I resolved that with a cd, but then I started getting a “Found 0 images” message. I have reviewed each step but was still unable to resolve the 0 images issue.
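For reference, the failing cell is essentially the standard call from the lesson notebook (simplified here; path is whatever directory my cd left me in):

batches = vgg.get_batches(path + 'train', batch_size=batch_size)
# prints a "Found 0 images ..." message instead of the expected counts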

I’m running the notebook on an AWS p2 instance. You can see the annotated notebook at https://github.com/asunar/mydeeplearning/blob/master/nbs/lesson1-homework.ipynb. The repo does not have the data folder, but the output of the numerous ls commands in the notebook should give you a pretty good idea of what my data folder looks like.

Thanks for your help,
Alper

@jeremy I am surprised that you have the dedication and the time for all of this troubleshooting, but it is most certainly appreciated!

I hit an issue running lesson 1 from just watching the video (which only shows downloading vgg.py and utils.py), and figured I would share how I solved it in case anyone else hits it as well. When I ran this step:

import utils; reload(utils)
from utils import plots

I got the following response:


ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 import utils; reload(utils)
      2 from utils import plots

/home/ubuntu/nbs/utils.py in <module>()
     50
     51 from vgg16 import *
---> 52 from vgg16bn import *
     53 np.set_printoptions(precision=4, linewidth=100)
     54

ImportError: No module named vgg16bn

I went back and looked at your github repo and noticed the vgg16bn.py in that folder so I ran this command from the /nbs folder:
wget https://raw.githubusercontent.com/fastai/courses/master/deeplearning1/nbs/vgg16bn.py

and it solved my issue. I am guessing I should probably download the full set of files from that github folder so that I’m ready for future lessons?

Hope that helps someone else.

Did anyone run into this problem when trying to run lesson 1?
This is the last step of the basic setup:


ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 import utils; reload(utils)
      2 from utils import plots

/home/ubuntu/nbs/Lesson 1/utils.py in <module>()
     47 from keras.metrics import categorical_crossentropy, categorical_accuracy
     48 from keras.layers.convolutional import *
---> 49 from keras.callbacks import ReduceLROnPlateau
     50 from keras.preprocessing import image, sequence
     51 from keras.preprocessing.text import Tokenizer

ImportError: cannot import name ReduceLROnPlateau

I would suggest using git clone to grab the whole directory at once :slight_smile:

That’s fixed in the most recent github version. Run git pull to update if you’ve already done a git clone.
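For anyone new to git, that amounts to (repo URL taken from the wget link above):

git clone https://github.com/fastai/courses.git

and then, from inside the cloned folder, whenever you want to pick up fixes:

git pull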

I was wondering, did anyone else have the problem of predictions that are all 1., 1., 1., 1., 1., 1.?

I believe I saw it somewhere, and the cause might have been the model being overconfident? Is there any way to solve this problem? I believe that I am not overfitting my model.

Thanks!!!

@yzhao76 Are you getting any 0s as predictions, in addition to 1s?

If you are only getting 0.0 and 1.0, with no values in between, over-confidence is an issue, and you can use np.clip to set upper and lower bounds.

If you are predicting 1 for everything, then your classifier is not working properly, and you would need to post more details.
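For example (the exact bounds are a judgment call; 0.02 and 0.98 are just one common choice for the redux log-loss metric):

import numpy as np

preds = np.array([1., 0., 1., 0.97])  # raw predicted probabilities
np.clip(preds, 0.02, 0.98)            # array([ 0.98,  0.02,  0.98,  0.97])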


Hi Rachel. Thanks for your prompt response! My probabilities are all 1, and the indexes are 0s and 1s. Thanks!

I think I might know what my problem is: my test files are not ordered. I got it now, thanks!


If you run into a ModuleNotFoundError for cPickle or pickle when running the notebooks on OS X, change utils.py:

# import cPickle as pickle          # Python 2 only
import six.moves.cPickle as pickle  # resolves to cPickle on Python 2, pickle on Python 3

Might impact performance, don’t know, but at least it works.

What’s the difference between ‘vgg16.h5’ and ‘vgg16_bn.h5’? Which of those two files gets downloaded by ‘Vgg16()’? And where is it saved on Windows?