Lesson 1 discussion


(Andrea de Luca) #623

Hi.

I noticed that the second half of the lesson 1 notebook is about building a model from scratch in Keras.
It is quite interesting, but neither the lesson 1 video nor the lesson 1 notes mention it. Why is that?


(Tom Eaton) #624

What sort of Kaggle score is it possible to achieve with this model? I am getting a score of around 11, which seems bad. I am not sure how to optimise further; please advise.


#625

Hi,

I am not sure I understand your question. Maybe you are curious about the inconsistency between the notebook and the video?

The notebook was updated four months ago, while the video was recorded one or two years ago (according to the history on GitHub), so there may be some differences between the two resources. It is fine to keep learning from them.


(Andrea de Luca) #626

Thanks. I imagined the videos were a bit outdated.
It would be great if they decided to release the latest videos, though I have absolutely nothing to complain about, since all the material is released for free.

However, to be more specific, I was referring to the part of lesson1.ipynb that starts with “Create a VGG model from scratch in Keras”.


#627

You can definitely improve from 11. Here’s what I did:
how-did-i-get-into-top-50-of-kaggle-competition-dogs-vs-cats

It might give you some ideas.
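One more idea, in case it helps: a score around 11 on this competition usually means the log-loss metric is punishing over-confident wrong predictions. A common trick is to clip the predicted probabilities away from 0 and 1 before writing the submission. A minimal sketch (the 0.02/0.98 bounds are my assumption; tune them):

```python
import numpy as np

def clip_preds(preds, lo=0.02, hi=0.98):
    """Clip probabilities away from 0 and 1 so a single confident but
    wrong prediction cannot blow up the log-loss score."""
    return np.clip(preds, lo, hi)

# A confidently wrong prediction of 1.0 becomes 0.98, costing
# log(0.02) instead of an unbounded penalty.
```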


(shweta ) #628

I am struggling with a data-download issue.
I am currently using FloydHub.
I have cloned the GitHub repository
and set the path.
Now I am on the data downloading and mounting step.
So my question is:
do I need to set the path as below and then download and unzip the data into that directory, or can I download the data anywhere and use it?
Path for the data: " courses\deeplearning1\nbs\data\ "


(Ardhendu Pathak) #629

Hi,
I am a newbie (and struggling, but making progress slowly).
On typing "git clone https://github.com/fastai/courses.git" (t2 instance) I got the following message: "The program 'git' is currently not installed. You can install it by typing: sudo apt-get install git". On doing this, I got a series of errors such as "404 Not Found [IP: 54.213.249.49 80]".
I solved the problem by cloning the repository on my desktop, and then using the upload button on ipython notebook to get lesson1, vgg, and utils notebooks.


(Navin Kumar) #630

It would be better to do as below.

This first refreshes the package lists from which software packages get installed:

  1. sudo apt-get update

This installs the git package:

  2. sudo apt-get install git

Hope it helps.
Navin


(Navin Kumar) #631

By mistake, the earlier post had unintended bold text… sorry.


(Ardhendu Pathak) #632

Yess! This did it. Thanks Navin.


(Ranjita Naik) #633

I’m trying to run the Lesson 1 notebook on Microsoft Azure. Both Anaconda 3 + Python 3.6 and Anaconda 2 + Python 2.7 throw the following error on running vgg.fit. Any idea what the issue could be?

vgg.fit(batches, val_batches, nb_epoch=1)

Error:
anaconda3_501/lib/python3.6/site-packages/PIL/Image.py", line 2519, in open
    if mode != "r":
OSError: cannot identify image file 'data/dogscats/train/dogs/dog.5149.jpg'

ValueError Traceback (most recent call last)
in ()
----> 1 vgg.fit(batches, val_batches, nb_epoch=1)

~/library/vgg16.py in fit(self, batches, val_batches, nb_epoch)
    211         """
    212         self.model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=nb_epoch,
--> 213                 validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
    214
    215

~/anaconda3_501/lib/python3.6/site-packages/keras/models.py in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch, **kwargs)
    933             nb_worker=nb_worker,
    934             pickle_safe=pickle_safe,
--> 935             initial_epoch=initial_epoch)
    936
    937     def evaluate_generator(self, generator, val_samples,

~/anaconda3_501/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch)
   1530                     '(x, y, sample_weight) '
   1531                     'or (x, y). Found: ' +
-> 1532                     str(generator_output))
   1533                 if len(generator_output) == 2:
   1534                     x, y = generator_output

ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
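The `cannot identify image file` line near the top suggests a corrupt or incomplete download of that particular JPEG, which then makes the generator return None. A sketch for locating bad files before training (the paths are placeholders; adapt to your setup):

```python
import os
from PIL import Image

def find_bad_images(root):
    """Walk a directory tree and return paths that PIL cannot open as images."""
    bad = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with Image.open(path) as img:
                    img.verify()  # cheap integrity check, no full decode
            except Exception:
                bad.append(path)
    return bad

# for p in find_bad_images('data/dogscats/train'):
#     print(p)  # re-download or delete these before calling vgg.fit
```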


(Irshad Muhammad) #634

Hi guys,
I am facing a problem with prediction and preparing a results CSV.
I have been able to fit and fine-tune the dogs-and-cats dataset, but I cannot find a way to assign each prediction to the name of its picture. I am using the following code to make predictions (test2 contains test1, which holds the unlabeled data):
batches, preds = vgg.test('data/dogscats/test2', batch_size=batch_size)

Can anyone please guide me on how, from the code above, to assign each picture’s name to cat or dog?


(Irshad Muhammad) #635

Hello guys,
can anyone answer my question above?
It is the second day and there is no answer yet :cry:
@jeremy


#636

The batches object has a ‘filenames’ attribute; I just parsed the filenames to get the ID, pairing each one with its prediction.
e.g.
for img_path,pred in zip(batches.filenames, preds):
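Building on that loop, here is one possible way to write the submission file. This is a sketch, not the notebook’s own code: the `id`/`label` header matches the Kaggle template, and the `unknown/1234.jpg` filename layout and the `(n, 2)` `[cat, dog]` shape of `preds` are assumptions about what `vgg.test` returns in the lesson setup.

```python
import csv
import os

def write_submission(filenames, preds, out_path='submission.csv'):
    """Pair each test filename with its dog probability and write a CSV.

    `preds` is assumed to be an (n, 2) array of [cat, dog] probabilities.
    """
    with open(out_path, 'w') as f:
        writer = csv.writer(f)
        writer.writerow(['id', 'label'])
        for img_path, pred in zip(filenames, preds):
            # 'unknown/1234.jpg' -> '1234'
            img_id = os.path.splitext(os.path.basename(img_path))[0]
            writer.writerow([img_id, pred[1]])
```

Called as `write_submission(batches.filenames, preds)` after the `vgg.test` call above.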


(Irshad Muhammad) #637

Thanks man :), it did the trick :slight_smile:


#638

Hi, I followed lessons 1 and 2 to train vgg16 on the dogs and cats classes. Then I switched to a different subject with 9 custom classes and got around 80% accuracy on them. I want to save the weights so I can load them later without training again (for example, to deploy the model as an API), but after reloading, it guesses all 9 classes wrong.
Shouldn’t it just be a matter of doing this?

vgg = Vgg16()
vgg = vgg.model.load_weights('custom_weights.h5')
vgg.classes = [ 9 different classes ]

and then just predict?

Regards

EDIT

Found the answer to my own question.
I changed to:

vgg = load_model( h5 model file )
vgg.load_weights( h5 weights file )

Also, vgg.summary() must show the 9 classes on the last layer.
vgg will be a Sequential model, so you can run vgg.predict.
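For anyone hitting the same thing: a likely culprit in the original snippet is that Keras’s `load_weights` mutates the model in place and returns None, so `vgg = vgg.model.load_weights(...)` rebinds `vgg` to nothing. A stand-in sketch of the trap (the `FakeModel` class is purely illustrative, not the real Keras API):

```python
class FakeModel:
    """Illustrative stand-in for a Keras model: load_weights mutates
    the object in place and, like the real API, returns None."""
    def __init__(self):
        self.weights = None

    def load_weights(self, path):
        self.weights = path  # pretend we loaded the file from disk
        # no return value, just like Keras

model = FakeModel()
result = model.load_weights('custom_weights.h5')
# `result` is None: rebinding vgg = vgg.model.load_weights(...) throws the
# model away. Keep the original reference and call predict on that instead.
```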


(Rakesh Kelkar) #640

batches yields an infinite sequence, it seems; going to try the pandas example posted later in the thread :slight_smile:
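Keras directory iterators do loop forever, so you have to bound the loop yourself. A sketch, assuming the `nb_sample`/`batch_size` attributes that the lesson’s `get_batches` objects expose:

```python
import math

def one_epoch(batches):
    """Yield exactly one epoch's worth of batches from an otherwise
    endless Keras-style directory iterator."""
    steps = int(math.ceil(batches.nb_sample / float(batches.batch_size)))
    for _ in range(steps):
        yield next(batches)

# for imgs, labels in one_epoch(batches):
#     ...  # each batch visited once, then the loop ends
```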


(Ankit Akash Jha) #641

Lesson 1 : Memory Error

I am working on a t2 server on AWS and, while going through the steps in video 1, I got stuck running this line:

vgg = Vgg16()


The error it is showing is a MemoryError.

Does it have something to do with me not using a p2 server? Please guide me through it. Also, my p2 server request is stuck in processing, so is there any other way to proceed?


(Aditya) #642

Yes, you don’t have sufficient memory…


(Ajay Singh) #643

I started this course.
Steps done:
Step 1: Installed the latest version of Ubuntu on my desktop
Step 2: Downloaded the files from GitHub
Step 3: Ran the Ubuntu setup shell file
Step 4: In Firefox, opened localhost:8888 to enter the Jupyter notebook
Step 5: Went to Lesson 1, opened the notebook, and then all I did was restart the kernel and run the code
Step 6: The error states that something was removed from Theano. Need help resolving this and figuring out what needs to be done.

/--------------------------------------
'You are tring to use the old GPU back-end. '
117 'It was removed from Theano. Use device=cuda* now. '
118 'See https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray) '
/----------------------------------------
Now, when I run the code, it keeps failing to import the utils module. Here is the error I get:

ValueError Traceback (most recent call last)
in ()
----> 1 import utils; reload(utils)
      2 from utils import plots

/home/ajay/Downloads/courses-master/deeplearning1/nbs/utils.py in ()
     26 from IPython.lib.display import FileLink
     27
---> 28 import theano
     29 from theano import shared, tensor as T
     30 from theano.tensor.nnet import conv2d, nnet

/home/ajay/anaconda2/lib/python2.7/site-packages/theano/__init__.pyc in ()
     86
     87
---> 88 from theano.configdefaults import config
     89 from theano.configparser import change_flags
     90

/home/ajay/anaconda2/lib/python2.7/site-packages/theano/configdefaults.py in ()
    135 "letters, only lower case even if NVIDIA uses capital letters."),
    136 DeviceParam('cpu', allow_override=False),
--> 137 in_c_key=False)
    138
    139 AddConfigVar(

/home/ajay/anaconda2/lib/python2.7/site-packages/theano/configparser.pyc in AddConfigVar(name, doc, configparam, root, in_c_key)
    285 # This allow to filter wrong value from the user.
    286 if not callable(configparam.default):
--> 287 configparam.get(root, type(root), delete_key=True)
    288 else:
    289 # We do not want to evaluate now the default value

/home/ajay/anaconda2/lib/python2.7/site-packages/theano/configparser.pyc in get(self, cls, type_, delete_key)
    333 else:
    334 val_str = self.default
--> 335 self.set(cls, val_str)
    336 # print "RVAL", self.val
    337 return self.val

/home/ajay/anaconda2/lib/python2.7/site-packages/theano/configparser.pyc in set(self, cls, val)
    344 # print "SETTING PARAM", self.fullname,(cls), val
    345 if self.filter:
--> 346 self.val = self.filter(val)
    347 else:
    348 self.val = val

/home/ajay/anaconda2/lib/python2.7/site-packages/theano/configdefaults.py in filter(val)
    114 elif val.startswith('gpu'):
    115 raise ValueError(
--> 116 'You are tring to use the old GPU back-end. '
    117 'It was removed from Theano. Use device=cuda* now. '
    118 'See https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray)

ValueError: You are tring to use the old GPU back-end. It was removed from Theano. Use device=cuda* now. See https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end(gpuarray) for more information.
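The error message itself points at the fix: Theano dropped the old `device=gpu` back-end, so the config has to name the new gpuarray one instead. A sketch of the change (this assumes the pygpu package is installed; `cuda0` picks the first GPU, and on a CPU-only desktop `device = cpu` is the safe choice):

```shell
# One-off, for a single run:
THEANO_FLAGS="device=cuda0,floatX=float32" jupyter notebook

# Or make it permanent by putting the same settings in ~/.theanorc:
# [global]
# device = cuda0
# floatX = float32
```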