Hi anshaj, you mentioned vgg16.r5, but I can only find vgg16.h5 at http://files.fast.ai/models/. What is vgg16.r5?
I am having trouble using ImageDataGenerator. I have used vgg.test and vgg.get_batches with class_mode=None, but I still get the message > Found 0 images belonging to 0 classes.
@jsa169 I got the same problem. It's worth a separate thread to get more discussion on this.
Hi, I have the same issue. I am wondering if the Keras code was updated since the course was updated. This link:
https://keras.io/preprocessing/image/
Says: The data will be looped over (in batches) indefinitely.
I got around it by measuring the length of my results array and stopping after I had all the images. I expected I would need to trim extra images from the last batch, but it seemed to stop exactly on time. My score is bad, though, and I don't know whether this is the source.
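In toy form, the stop-and-trim approach looks something like this (just a sketch with a stand-in generator, not the real Keras objects):

```python
import itertools

def take_exactly(gen, total):
    """Drain an endless batch generator, keeping exactly `total` items."""
    out = []
    for batch in gen:
        out.extend(batch)
        if len(out) >= total:
            break
    return out[:total]  # trim any overshoot from the final batch

# Toy stand-in for the endless Keras generator: batches of 4, forever.
endless = itertools.cycle([[1, 2, 3, 4]])
print(take_exactly(endless, 10))  # [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
```

With the real generator you would loop over batches of images the same way, stopping once the count reaches the number of files in the directory.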
I will review. Thanks a lot.
I ran into a lot of problems while working through lesson 1. Help would be appreciated!
I’m using OS X
when I run the following code
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
I got the following error:
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-2b6861506a11> in <module>()
----> 1 vgg = Vgg16()
2 # Grab a few images at a time for training and validation.
3 # NB: They must be in subdirectories named based on their category
4 batches = vgg.get_batches(path+'train', batch_size=batch_size)
5 val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
/Users/zhangdongyu/Desktop/Fast_ai/courses/deeplearning1/nbs/vgg16.pyc in __init__(self)
41 def __init__(self):
42 self.FILE_PATH = 'http://files.fast.ai/models/'
---> 43 self.create()
44 self.get_classes()
45
/Users/zhangdongyu/Desktop/Fast_ai/courses/deeplearning1/nbs/vgg16.pyc in create(self)
127 self.ConvBlock(3, 512)
128
--> 129 model.add(Flatten())
130 self.FCBlock()
131 self.FCBlock()
/Users/zhangdongyu/anaconda/lib/python2.7/site-packages/keras/models.pyc in add(self, layer)
330 output_shapes=[self.outputs[0]._keras_shape])
331 else:
--> 332 output_tensor = layer(self.outputs[0])
333 if isinstance(output_tensor, list):
334 raise TypeError('All layers in a Sequential model '
/Users/zhangdongyu/anaconda/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, x, mask)
570 if inbound_layers:
571 # This will call layer.build() if necessary.
--> 572 self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
573 # Outputs were already computed when calling self.add_inbound_node.
574 outputs = self.inbound_nodes[-1].output_tensors
/Users/zhangdongyu/anaconda/lib/python2.7/site-packages/keras/engine/topology.pyc in add_inbound_node(self, inbound_layers, node_indices, tensor_indices)
633 # creating the node automatically updates self.inbound_nodes
634 # as well as outbound_nodes on inbound layers.
--> 635 Node.create_node(self, inbound_layers, node_indices, tensor_indices)
636
637 def get_output_shape_for(self, input_shape):
/Users/zhangdongyu/anaconda/lib/python2.7/site-packages/keras/engine/topology.pyc in create_node(cls, outbound_layer, inbound_layers, node_indices, tensor_indices)
168 # TODO: try to auto-infer shape
169 # if exception is raised by get_output_shape_for.
--> 170 output_shapes = to_list(outbound_layer.get_output_shape_for(input_shapes[0]))
171 else:
172 output_tensors = to_list(outbound_layer.call(input_tensors, mask=input_masks))
/Users/zhangdongyu/anaconda/lib/python2.7/site-packages/keras/layers/core.pyc in get_output_shape_for(self, input_shape)
474 raise ValueError('The shape of the input to "Flatten" '
475 'is not fully defined '
--> 476 '(got ' + str(input_shape[1:]) + '. '
477 'Make sure to pass a complete "input_shape" '
478 'or "batch_input_shape" argument to the first '
ValueError: The shape of the input to "Flatten" is not fully defined (got (0, 7, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
How can I solve it?
Were you able to solve this? I am running into the same issue.
Hi Vladimir, did you get an answer to your question? I have the same question.
Regards,
Sanjay
Hey guys,
One thing I've noticed is that most of my friends are bothered that almost half of the first lecture is about setup. My suggestion is to split it into two videos and rename the first part, the one about setting up, Lecture 0.5. If the videos were re-edited for the split, that would also be a chance to update the links in them (from platform.ai to files.fast.ai). I'd be more than happy to help with this.
Theano and TensorFlow implement convolution in different ways, so you have to convert the kernels before using weights trained with one backend on the other.
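Roughly, the conversion looks something like this numpy sketch (illustrative only; the exact dim ordering and whether a flip is needed depends on your Keras version and dim-ordering settings):

```python
import numpy as np

def th_to_tf_kernel(kernel):
    # Theano stores conv kernels as (out_ch, in_ch, rows, cols);
    # TensorFlow expects (rows, cols, in_ch, out_ch).
    kernel = np.transpose(kernel, (2, 3, 1, 0))
    # Theano performs true convolution while TensorFlow does
    # cross-correlation, so the spatial axes are flipped.
    return kernel[::-1, ::-1, :, :]
```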
I wrote a post on matrix multiplication. It gives an intuitive, no-math view of the multiplication rules. I would love to hear your thoughts on it: https://kishorepv.github.io/Matrix-Multiplication/
from keras.models import load_model
model = load_model('my_model.h5')
See Keras documentation for details.
Under the hood, the implementation of VGG uses an SVM (https://en.wikipedia.org/wiki/Support_vector_machine) to do the classification and regression analysis and to calculate the loss.
Could you please explain how SVM is related to VGG? I can’t find it in the paper.
I was really confused about what you are doing in finetune with this part:

for c in batches.class_indices:
    classes[batches.class_indices[c]] = c

The same thing can be achieved with just

sorted(batches.class_indices, key=batches.class_indices.get)

You wanted people to get confused, right?
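For anyone comparing the two, here is a toy example with a made-up class_indices dict showing both produce the same list:

```python
# Hypothetical two-class example mirroring batches.class_indices
class_indices = {'cats': 0, 'dogs': 1}

# The finetune loop: put each class name at the position of its index.
classes = [None] * len(class_indices)
for c in class_indices:
    classes[class_indices[c]] = c

# The one-liner: sort the class names by their index.
classes_sorted = sorted(class_indices, key=class_indices.get)

print(classes)         # ['cats', 'dogs']
print(classes_sorted)  # ['cats', 'dogs']
```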
I spent like an hour trying to figure out why it wasn’t working. Thought there was some error in the code. Thanks for this.
Have you had any luck with this error? I assumed it would run right out of the box, but I guess things have changed between the time the course first launched and today; the script doesn't run as expected.
When I run the code under 'Create VGG model from scratch using Keras', I get a "No JSON could be decoded" error.
I actually spent quite some time installing all the necessary libraries for 2.7, as I use 3.5.
Also, do you have access to platform.ai? I don't seem to have access.
Same issue here. conda install bcolz seems to complete the installation fine, but when I run the code it still doesn't recognize the bcolz module.
Hi, I'm having this error when initialising Vgg16():
Dimension 1 in both shapes must be equal, but are 2 and 1000 for ‘Assign_216’ (op: ‘Assign’) with input shapes: [4096,2], [4096,1000]
I’m having some difficulty modifying lesson one to enter the Kaggle competition. I have the writing to a csv working, and I’ve trained the model on the data, but I’m unsure of how to apply that to the test data.
I put 6 test data files in the /test folder to give it a go, but I can’t figure out how to load them via vgg.get_batches.
test_batches = vgg.get_batches(path+'test', batch_size=batch_size)
print(os.listdir(path+'test'))
test_batches.filenames
Found 0 images belonging to 0 classes.
[‘5.jpg’, ‘3.jpg’, ‘6.jpg’, ‘2.jpg’, ‘4.jpg’, ‘7.jpg’]
Out[104]:
[]
I haven't gotten there yet, but I also don't see how I'm going to match each filename ID to its prediction, since .filenames gives everything, not just what is in the current batch. I may have to make an iterator that yields the file name too.
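If it comes to that, something like this might work for pairing filenames with predictions (untested sketch; it assumes the test images sit in a single dummy subdirectory such as test/unknown/ so get_batches can find them, and that the batches use shuffle=False so prediction order matches the order of .filenames):

```python
import os

def pair_predictions(filenames, predictions):
    """Pair each file's basename (the Kaggle image id) with its prediction.

    Relies on predictions being in the same order as filenames, which
    should hold when the generator was created with shuffle=False.
    """
    results = []
    for fname, pred in zip(filenames, predictions):
        # 'unknown/5.jpg' -> image id '5'
        image_id = os.path.splitext(os.path.basename(fname))[0]
        results.append((image_id, pred))
    return results
```

Then pair_predictions(test_batches.filenames, preds) should give (id, prediction) rows ready to write to the CSV.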