My hero
Hey,
I'm taking this independently, but I'm really having trouble figuring this piece out. I've tried tinkering with the number of epochs and the learning rate to no avail. I feel like I might be overlooking something simple. I'm able to get to about the 53rd percentile, but can't get any higher than that. Does anyone have any suggestions?
Try tweaking the learning rate. You're currently in a local minimum. First increase the learning rate to get out of there, and later reduce it again if you're overshooting.
Thanks Manoj! I actually ended up solving it by clipping the predictions to avoid the extreme probabilities of zero and one. With the right bounds, I was able to get into the top 18% changing nothing else!
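For anyone curious why clipping helps with Kaggle's log-loss metric, here is a minimal sketch of the idea with NumPy. The bounds 0.02/0.98 are illustrative, not necessarily the poster's exact values:

```python
import numpy as np

def clip_predictions(preds, lo=0.02, hi=0.98):
    """Clamp probabilities away from 0 and 1 so a single
    confidently-wrong prediction can't blow up the log loss."""
    return np.clip(preds, lo, hi)

# Log loss penalty for a confidently wrong prediction (true label = 1):
raw = 1e-15                                   # model says "almost certainly class 0" -- wrongly
clipped = clip_predictions(np.array([raw]))[0]
print(-np.log(raw))      # huge penalty (~34.5)
print(-np.log(clipped))  # bounded penalty (~3.9)
```

Since log loss goes to infinity as a wrong prediction approaches 0 or 1, capping the probabilities bounds the worst-case penalty per image, which is often worth more than the tiny accuracy it gives up.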
I thought the entire idea of Keras is to be agnostic to the backend. How come the backend matters here?
Submitted to Kaggle and landed around the 45th percentile.
I had a weird bug when I downloaded the Kaggle data: the test set contained two fewer images than it should. I debugged by printing the number of images in the folders. I re-downloaded the test data and verified the test folder had 12,500 images before generating the predictions.
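A quick way to sanity-check a download like this is to count the image files before generating predictions. A rough sketch (the path and expected count below follow the Dogs vs Cats test set; adjust for your layout):

```python
import os

def count_images(folder, exts=('.jpg', '.jpeg', '.png')):
    """Count image files in a directory, ignoring anything else."""
    return sum(1 for f in os.listdir(folder) if f.lower().endswith(exts))

# Example usage (path is illustrative):
# n = count_images('data/test')
# assert n == 12500, 'expected 12500 images, found %d' % n
```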
Time to tinker with similar classification problems.
Hello,
I have just begun the course and was trying to get my hands on the Dogs vs Cats dataset. As mentioned in the to-do list for the first week, we need to arrange the data downloaded from Kaggle the way the professor did in class, i.e. with separate sample and valid folders for testing our model on a small dataset. Can anyone tell me how to segregate the files we downloaded from Kaggle according to the professor's file structure?
If you search this thread you will find answers about how to proceed.
@carlosdeep not able to find any links. I would appreciate some help regarding the segregation of data downloaded from Kaggle.
@sahilk1610
Below you will find some links that might help; I just found the first two from my own research.
In fact, if I am not wrong, at the start of the Lesson 1 notebook there is a code snippet that will create the entire directory structure needed to run the lesson. But it is important to understand how it works.
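For anyone who still wants to do it by hand, here is a rough sketch of the idea: move a random slice of the Kaggle train images into a valid folder, and copy a small slice into a sample folder for quick experiments. The function name, counts, and paths are illustrative, not the notebook's exact code:

```python
import os
import random
import shutil

def make_lesson_dirs(train_dir, valid_dir, sample_dir,
                     n_valid=2000, n_sample=200, seed=0):
    """Move n_valid random files from train_dir into valid_dir,
    and copy n_sample more into sample_dir."""
    random.seed(seed)
    files = sorted(os.listdir(train_dir))
    random.shuffle(files)
    for d in (valid_dir, sample_dir):
        if not os.path.exists(d):
            os.makedirs(d)
    for f in files[:n_valid]:
        shutil.move(os.path.join(train_dir, f), os.path.join(valid_dir, f))
    for f in files[n_valid:n_valid + n_sample]:
        shutil.copy(os.path.join(train_dir, f), os.path.join(sample_dir, f))
```

Note that Keras's flow_from_directory also expects one subdirectory per class (e.g. cats/ and dogs/), so after the split the filenames still need to be sorted into class folders.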
The fix for Keras 2.0 is making these changes to utils.py:
diff --git a/deeplearning1/nbs/utils.py b/deeplearning1/nbs/utils.py
index 3abeed8..1da6bd7 100755
--- a/deeplearning1/nbs/utils.py
+++ b/deeplearning1/nbs/utils.py
@@ -39,10 +39,16 @@ from keras.models import Sequential, Model
from keras.layers import Input, Embedding, Reshape, merge, LSTM, Bidirectional
from keras.layers import TimeDistributed, Activation, SimpleRNN, GRU
from keras.layers.core import Flatten, Dense, Dropout, Lambda
-from keras.regularizers import l2, activity_l2, l1, activity_l1
+try:
+ from keras.regularizers import l2, activity_l2, l1, activity_l1
+except ImportError:
+ from keras.regularizers import l2, l1
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD, RMSprop, Adam
-from keras.utils.layer_utils import layer_from_config
+try:
+ from keras.utils.layer_utils import layer_from_config
+except ImportError:
+ from keras.layers import deserialize as layer_from_config
from keras.metrics import categorical_crossentropy, categorical_accuracy
from keras.layers.convolutional import *
from keras.preprocessing import image, sequence
@@ -260,4 +266,3 @@ class MixIterator(object):
n0 = np.concatenate([n[0] for n in nexts])
n1 = np.concatenate([n[1] for n in nexts])
return (n0, n1)
This is awesome - thanks! I had made my own in a notebook, but yours is much more in-depth.
I'm having some trouble getting the history from fit_generator.
I'm using the fit() method from vgg16.py, and I'd like to be able to record the accuracy as I fit the model using different batch sizes (since this strikes me as the last parameter to be optimized).
I edited vgg16.py to import History from keras.callbacks. I then edited the fit() method to (changes highlighted):
def fit(self, batches, val_batches, nb_epoch=1):
    return self.model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=nb_epoch, validation_data=val_batches, nb_val_samples=val_batches.nb_sample, callbacks=[history])
When I call this in my main notebook (having also imported keras.callbacks.History there), using
history = History()
history = vgg.fit(batches_train, batches_valid)
I get the following error:
AttributeError: 'History' object has no attribute 'History'
I've played around with this to no avail; any help is appreciated!
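For what it's worth, fit_generator already returns a History object on its own, so one possible fix (a sketch against the Keras 1 API the notebook uses, not tested against your exact setup) is to simply return that value instead of wiring a History callback in by hand:

```python
# Inside vgg16.py -- no History import or callbacks argument needed,
# because fit_generator itself returns a History object (Keras 1 API):
def fit(self, batches, val_batches, nb_epoch=1):
    return self.model.fit_generator(batches, samples_per_epoch=batches.nb_sample,
                                    nb_epoch=nb_epoch, validation_data=val_batches,
                                    nb_val_samples=val_batches.nb_sample)

# In the notebook:
# history = vgg.fit(batches_train, batches_valid)
# history.history['acc']      # per-epoch training accuracy
# history.history['val_acc']  # per-epoch validation accuracy
```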
I AM ALSO GETTING THE SAME ERROR… BELOW IS MY KERAS.JSON FILE
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
I am on Python 2.7, Keras 2.0.2, and Theano 0.9.0.
ValueError: The shape of the input to 'Flatten' is not fully defined (got (0, 7, 512)). Make sure to pass a complete 'input_shape' or 'batch_input_shape' argument to the first layer in your model.
HOW Can I Solve this…
@sahilk1610 Keep looking carefully in this thread, Jeremy provides a link to a script written by a class member.
@akshaylamba: welcome to 80% of learning Keras. Meaning, figuring out dimensions.
From the documentation, there's an example:
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
The important part is input_shape=(16,). Keras needs to know the dimensions of the data you are going to feed it. How else could it make an appropriately sized weight matrix for you to fit?
If you get the error "The shape of the input to 'Flatten' is not fully defined (got (0, 7, 512)). Make sure to pass a complete 'input_shape' or 'batch_input_shape' argument to the first layer in your model.", then it's somewhat clear that you haven't defined this properly in one way or another.
Hope this helps.
PS: all caps in posts is usually interpreted as yelling on forums. Your problem is not that bad… so there's no need to shout.
Hello,
I have the identical scenario. Were you able to resolve this? If so, how?
Thanks in advance for your thoughts.
Best regards,
Bob
It appears that you're using Keras 2.0, whereas the notebook assumes Keras 1. You could reinstall the previous version of Keras, or you could make the necessary modifications to the notebook by referring to this post: