Setup problems: Running the Lesson 1 Notebook

Hi, thanks for your input. Could you please add some information about how you found that setting? I am interested in using, say, 5 of my 8 cores; is there anything else I should do? Thanks.

Edited: OK, I see the environment variable in your previous post. So I edited ~/.theanorc and added a [global] section in which I put the environment variable and the suggested True setting. But I don’t see evidence of improvement. I noted that there is no openmp Python module; there is an openmpi module. I’m not sure they are the same.

It seems that to use OpenMP you must first release the GIL and write code to handle the parallelism yourself.
My presumption that this is all handled by Theano is perhaps a hopeful one.
I’ve also noted that perhaps I should add Cython to my environment.
Like most parallel issues this has got difficult quickly, and although I would like to use my cores, I feel this is a step too far here.
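
For reference, a minimal sketch of the combination discussed above (openmp = True is Theano’s own config flag; OMP_NUM_THREADS is the standard OpenMP environment variable that caps the thread count, e.g. at 5 of your 8 cores). Note that OpenMP (compiler-level threading, which Theano can use for some ops) and OpenMPI (an MPI library for multi-process message passing) are different things despite the similar names.

# in ~/.theanorc
[global]
openmp = True

# in the shell, before launching the notebook
export OMP_NUM_THREADS=5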

Unable to instantiate Vgg16 object

Whenever I run

vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)

I get the following error:

IOError                                  Traceback (most recent call last)
<ipython-input-24-2b6861506a11> in <module>()
----> 1 vgg = Vgg16()
      2 # Grab a few images at a time for training and validation.
      3 # NB: They must be in subdirectories named based on their category
      4 batches = vgg.get_batches(path+'train', batch_size=batch_size)
      5 val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)

/Users/indraner/dev/datascience/fastai/courses/deeplearning1/nbs/vgg16.py in __init__(self)
     31     def __init__(self):
     32         self.FILE_PATH = 'http://www.platform.ai/models/'
---> 33         self.create()
     34         self.get_classes()
     35 

/Users/indraner/dev/datascience/fastai/courses/deeplearning1/nbs/vgg16.py in create(self)
     80 
     81         fname = 'vgg16.h5'
---> 82         model.load_weights('/Users/indraner/dev/datascience/fastai/data/dogsVcats/vgg16.h5')
     83 
     84 

/usr/local/lib/python2.7/site-packages/keras/engine/topology.pyc in load_weights(self, filepath, by_name)
   2512         '''
   2513         import h5py
-> 2514         f = h5py.File(filepath, mode='r')
   2515         if 'layer_names' not in f.attrs and 'model_weights' in f:
   2516             f = f['model_weights']

/usr/local/lib/python2.7/site-packages/h5py/_hl/files.pyc in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
    270 
    271                 fapl = make_fapl(driver, libver, **kwds)
--> 272                 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
    273 
    274                 if swmr_support:

/usr/local/lib/python2.7/site-packages/h5py/_hl/files.pyc in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
     90         if swmr and swmr_support:
     91             flags |= h5f.ACC_SWMR_READ
---> 92         fid = h5f.open(name, flags, fapl=fapl)
     93     elif mode == 'r+':
     94         fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py/_objects.pyx in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2687)()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/_objects.c:2645)()

h5py/h5f.pyx in h5py.h5f.open (/Users/travis/build/MacPython/h5py-wheels/h5py/h5py/h5f.c:1933)()

IOError: Unable to open file (Truncated file: eof = 237635312, sblock->base_addr = 0, stored_eoa = 553482496)

I have downloaded the .h5 file and am trying to load it from local disk, since I was getting an error while loading it from the ‘platform.ai’ URL.

Also, the code is the latest from GitHub.
Can someone please help?
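
The numbers inside the IOError point at the cause: stored_eoa = 553482496 says the file should be about 553 MB, while eof = 237635312 says only about 237 MB made it to disk, i.e. the download was cut short. A quick sanity check (a minimal sketch; the path is the one from the traceback above):

import os
import h5py

fname = '/Users/indraner/dev/datascience/fastai/data/dogsVcats/vgg16.h5'
print(os.path.getsize(fname))  # should be ~553482496 bytes for a complete file
try:
    with h5py.File(fname, mode='r') as f:
        print(list(f.attrs))   # opens cleanly only if the file is intact
except IOError:
    print('truncated or corrupt -- delete and re-download')

If the size is short, delete the file and download it again (or let Keras fetch it, as mentioned further down the thread).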

I am in the ~/nbs directory but got a “No module named utils” error. I used the ls command to check the directory and only found the .ipynb file but nothing else. Am I supposed to see a bunch of .py files in this directory?

I am also having the problem where I cannot instantiate the Vgg16() object:

ImportError                               Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 vgg = Vgg16()
      2 # Grab a few images at a time for training and validation.
      3 # NB: They must be in subdirectories named based on their category
      4 batches = vgg.get_batches(path+'train', batch_size=batch_size)
      5 val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)

/home/gpp8p/PycharmProjects/dlcourse/vgg16.pyc in __init__(self)
     31     def __init__(self):
     32         self.FILE_PATH = 'http://www.platform.ai/models/'
---> 33         self.create()
     34         self.get_classes()

I have verified that my Keras is set to use the Theano backend; in fact, the notebook says “Using Theano backend.”

Any help would be greatly appreciated

I’m looking at this error, and I see this:

     31     def __init__(self):
     32         self.FILE_PATH = 'http://www.platform.ai/models/'
---> 33         self.create()
     34         self.get_classes()

What I’m wondering is this: I am running on a stand-alone Linux box
with its own CUDA board. What would be the right value for
FILE_PATH in those circumstances?

-George Pipkin
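
For context, FILE_PATH in vgg16.py is just the URL the weights and class-index files are downloaded from; running on a stand-alone box with a local GPU doesn’t change it. If the download itself is the problem, the weights can be cached locally instead (see the ~/.keras/models/ note further down the thread).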

When I try to run it in straight Python, I end up getting this error:

ImportError: ('The following error happened while compiling the node', DeepCopyOp(convolution2d_1_W), '\n', '/home/gpp8p/.theano/compiledir_Linux-4.8--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.13-64/tmp8Z3X8i/6bce617acb30aa3bbe8d048c82a553bc.so: undefined symbol: _ZdlPvm', '[DeepCopyOp(convolution2d_1_W)]')

After doing a good deal of Googling, I discovered that this has to do with an incompatibility between compilers (the undefined symbol _ZdlPvm demangles to operator delete(void*, unsigned long), which newer g++ versions emit).

Not sure how to get g++ version 5, but that looks like what it wants.

Unfortunately, changing the .theanorc to include:

cxx = /usr/bin/g++-5

does not remedy this issue.
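
For what it’s worth, on Ubuntu 16.04 (and similar Debian-based systems) g++ 5 is usually available straight from the package manager, e.g.:

sudo apt-get install g++-5

after which /usr/bin/g++-5 should exist for the cxx setting above.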

I was able to get through this error. The key is in the .theanorc file. Mine looks like this:

[blas]
ldflags =

[global]
floatX = float32
device = gpu

# By default the compiled files were being written to my local network drive.
# Since I have limited space on this drive (on a school's network),
# we can change the path to compile the files on the local machine.
# You will have to create the directories and modify according to where you
# want to install the files.
# Uncomment if you want to change the default path to your own.
base_compiledir = /local-scratch/jer/theano/

[nvcc]
fastmath = True

[gcc]
cxxflags = -ID:\MinGW\include
cxx = /usr/bin/g++-5

[cuda]
# Set to where the cuda drivers are installed.
# You might have to change this depending on where / what version of the cuda driver is installed.
root = /usr/local/cuda-8.0/

I met the same issue, but simply changing the configuration in .theanorc doesn’t work… any ideas?

Somehow I fixed the issue by downloading vgg16.h5 and then moving it to ~/.keras/models/.
It could be a network issue.
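
That lines up with how Keras caches downloads: files placed under ~/.keras/models/ are picked up instead of being re-fetched. As a sketch, you can also let Keras do the download explicitly (the files.fast.ai URL is the one that appears in a traceback further down; use whatever FILE_PATH your vgg16.py has):

from keras.utils.data_utils import get_file

# cache_subdir='models' stores the file under ~/.keras/models/
path = get_file('vgg16.h5', 'http://files.fast.ai/models/vgg16.h5',
                cache_subdir='models')
print(path)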

Try inserting the directory path into the module search path:

import sys
sys.path.insert(0, 'your local nbs directory path')

I met the same issue.
After arduous attempts, I resolved it by configuring my .theanorc file as follows:

[global]
device = gpu0
floatX = float32

[cuda]
root = /usr/local/cuda-8.0
[gcc]
cxxflags=-march=corei7-avx

It is worth noting that:

  1. “/usr/local/cuda-8.0” should be consistent with where CUDA is installed on your system.
  2. “corei7-avx” should match your CPU model.
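
If you are unsure of the right -march value for your CPU, recent GCC versions can detect it themselves; a minimal alternative, assuming your GCC supports -march=native:

[gcc]
cxxflags = -march=native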

Hi,

I am trying to use my Ubuntu 16.04 machine, which I set up with https://github.com/fastai/courses/blob/master/setup/install-gpu.sh. When I run the Lesson 1 notebook, I get a memory error. I’m not sure what the reason for it is.

---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-8-2b6861506a11> in <module>()
----> 1 vgg = Vgg16()
      2 # Grab a few images at a time for training and validation.
      3 # NB: They must be in subdirectories named based on their category
      4 batches = vgg.get_batches(path+'train', batch_size=batch_size)
      5 val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)

/home/dongguo/Documents/courses/deeplearning1/nbs/vgg16.pyc in __init__(self)
     30     def __init__(self):
     31         self.FILE_PATH = 'http://files.fast.ai/models/'
---> 32         self.create()
     33         self.get_classes()
     34 

/home/dongguo/Documents/courses/deeplearning1/nbs/vgg16.pyc in create(self)
     74 
     75         model.add(Flatten())
---> 76         self.FCBlock()
     77         self.FCBlock()
     78         model.add(Dense(1000, activation='softmax'))

/home/dongguo/Documents/courses/deeplearning1/nbs/vgg16.pyc in FCBlock(self)
     59     def FCBlock(self):
     60         model = self.model
---> 61         model.add(Dense(4096, activation='relu'))
     62         model.add(Dropout(0.5))
     63 

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/models.pyc in add(self, layer)
    330                  output_shapes=[self.outputs[0]._keras_shape])
    331         else:
--> 332             output_tensor = layer(self.outputs[0])
    333             if isinstance(output_tensor, list):
    334                 raise TypeError('All layers in a Sequential model '

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, x, mask)
    544                                      '`layer.build(batch_input_shape)`')
    545             if len(input_shapes) == 1:
--> 546                 self.build(input_shapes[0])
    547             else:
    548                 self.build(input_shapes)

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/layers/core.pyc in build(self, input_shape)
    796                                  name='{}_W'.format(self.name),
    797                                  regularizer=self.W_regularizer,
--> 798                                  constraint=self.W_constraint)
    799         if self.bias:
    800             self.b = self.add_weight((self.output_dim,),

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/engine/topology.pyc in add_weight(self, shape, initializer, name, trainable, regularizer, constraint)
    416         """
    417         initializer = initializations.get(initializer)
--> 418         weight = initializer(shape, name=name)
    419         if regularizer is not None:
    420             self.add_loss(regularizer(weight))

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/initializations.pyc in glorot_uniform(shape, name, dim_ordering)
     64     fan_in, fan_out = get_fans(shape, dim_ordering=dim_ordering)
     65     s = np.sqrt(6. / (fan_in + fan_out))
---> 66     return uniform(shape, s, name=name)
     67 
     68 

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/initializations.pyc in uniform(shape, scale, name, dim_ordering)
     31 
     32 def uniform(shape, scale=0.05, name=None, dim_ordering='th'):
---> 33     return K.random_uniform_variable(shape, -scale, scale, name=name)
     34 
     35 

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/backend/theano_backend.pyc in random_uniform_variable(shape, low, high, dtype, name)
    187 def random_uniform_variable(shape, low, high, dtype=None, name=None):
    188     return variable(np.random.uniform(low=low, high=high, size=shape),
--> 189                     dtype=dtype, name=name)
    190 
    191 

/home/dongguo/anaconda2/lib/python2.7/site-packages/keras/backend/theano_backend.pyc in variable(value, dtype, name)
     85     else:
     86         value = np.asarray(value, dtype=dtype)
---> 87         variable = theano.shared(value=value, name=name, strict=False)
     88     variable._keras_shape = value.shape
     89     variable._uses_learning_phase = False

/home/dongguo/anaconda2/lib/python2.7/site-packages/theano/compile/sharedvalue.pyc in shared(value, name, strict, allow_downcast, **kwargs)
    266             try:
    267                 var = ctor(value, name=name, strict=strict,
--> 268                            allow_downcast=allow_downcast, **kwargs)
    269                 utils.add_tag_trace(var)
    270                 return var

/home/dongguo/anaconda2/lib/python2.7/site-packages/theano/sandbox/cuda/var.pyc in float32_shared_constructor(value, name, strict, allow_downcast, borrow, broadcastable, target)
    186         # type.broadcastable is guaranteed to be a tuple, which this next
    187         # function requires
--> 188         deviceval = type_support_filter(value, type.broadcastable, False, None)
    189 
    190     try:

MemoryError: ('Error allocating 411041792 bytes of device memory (out of memory).', "you might consider using 'theano.shared(..., borrow=True)'")
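
For what it’s worth, the failed allocation size is telling: 411041792 bytes is exactly the float32 weight matrix of the first Dense(4096) layer (VGG16’s flattened conv output is 512 * 7 * 7 = 25088 values, and 25088 * 4096 * 4 bytes = 411041792), so the GPU runs out of memory while the model is still being built; reducing batch_size alone won’t help at this point. Checking the arithmetic:

# the first Dense(4096) layer's float32 weights account for the whole allocation
flattened = 512 * 7 * 7        # VGG16's flattened conv output: 25088 values
print(flattened * 4096 * 4)    # 411041792 bytes -- matches the MemoryError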

Did you get a solution to this? I am facing this problem too.

Hi all,

I cannot open the Jupyter notebook for Lesson 1 even though I am entering the password given.

How can I get around it?

Are you using the password dl_course?

Hello, I am just starting the course. I am using my own computer, running Ubuntu 16.04 with the latest versions of Keras, Theano, and cuDNN. I am currently getting an error when calling vgg.fit() which says “The following error happened while compiling the node”. I can post the full error if needed, but it is quite long.

Edit: I was using CUDA 8; switching to CUDA 7.5 and an older version of cuDNN fixed the issue.

Hi All,

I am trying to set up Lesson 1 on my Mac. I have installed Keras and relevant libraries by running

pip install future
pip install numpy
pip install np_utils

I am getting the following error when I run

import utils; reload(utils)
from utils import plots
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-28-834d59d32016> in <module>()
----> 1 import utils; reload(utils)
      2 from utils import plots

/Users/amathu4/GitHub/fast-ai-course/deeplearning1/nbs/utils.py in <module>()
     31 from theano.tensor.signal import pool
     32 
---> 33 import keras
     34 from keras import backend as K
     35 from keras.utils.data_utils import get_file

/Users/amathu4/anaconda/lib/python2.7/site-packages/keras/__init__.py in <module>()
      1 from __future__ import absolute_import
      2 
----> 3 from . import utils
      4 from . import activations
      5 from . import applications

/Users/amathu4/anaconda/lib/python2.7/site-packages/keras/utils/__init__.py in <module>()
      1 from __future__ import absolute_import
----> 2 from . import np_utils
      3 from . import generic_utils
      4 from . import data_utils
      5 from . import io_utils

ImportError: cannot import name np_utils

Can anyone please help?

Great to know that you were able to fix your issue. I am also getting the same error while running the notebook. However, I have two questions:

  1. Elsewhere it’s mentioned that the Jupyter notebook needs to be launched from the nbs directory; however, the above notebook suggests that it needs to be launched from the lesson1 folder. I am struggling to understand the difference that it makes.
  2. The utils directory can be seen from within Ubuntu but not within the directory structure shown on the notebook’s web page. Could that be creating an issue?

PS: I haven’t added the __init__.py file to the utils directory yet; I will try and see if it makes a difference.

To get this right, I think there are a few approaches:

  1. Put everything in the same directory (all notebooks, the utils.py file, and the other supporting Python files). In this case you wouldn’t have a lesson1 folder. You can see this structure in the following repos.
  2. Create subdirectories for each lesson and for utils. Add __init__.py to your utils directory and import as mentioned in the previous comment; a sketch follows below.

I chose the second option, but the first option is probably the easier path forward. You can start Jupyter at any directory level; just make sure you modify sys.path so that Python can find your included files.
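
A minimal sketch of the second option (the directory and file names are illustrative, not a required layout):

# layout:
#   deeplearning1/
#       utils/
#           __init__.py        (can be empty)
#           utils.py
#       lesson1/
#           lesson1.ipynb
#
# at the top of the notebook:
import sys
sys.path.insert(0, '..')           # make the parent directory importable
from utils.utils import plots      # resolves via utils/__init__.py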
