Lesson 5 discussion

Has anyone run into this issue? When I tried fitting the model, I got this error.

ImportError: ('The following error happened while compiling the node', Elemwise{Composite{EQ(i0, RoundHalfToEven(i1))}}(dense_2_target, Elemwise{Composite{scalar_sigmoid((i0 + i1))}}[(0, 0)].0), '\n', 'DLL load failed: The specified procedure could not be found.', '[Elemwise{Composite{EQ(i0, RoundHalfToEven(i1))}}(dense_2_target, <TensorType(float32, matrix)>)]')

I never had any issues in the past. Any idea what happened?

Is there a way to assign more weight to some specific word filters, rather than having the filters learned from a random initialization? If so, would that be a smart choice, or will the CNN learn to capture the right 'weight' of words on its own in order to predict the label (i.e. positive vs. negative reviews)? Think of it as pointing the model to pay more attention to specific words used together, because they reveal the meaning of the review (think of a model trying to label positive versus negative reviews) more than all the other words in the paragraph.

My intuition is that adding weights 'manually' could be useful when these words, for whatever reason, are not very common in the reviews we are learning from, so the model cannot learn the pattern on its own. Yet we know that when these words appear together they carry a positive/negative meaning.

For example, there is a phrase in Italian meaning, more or less, 'going well'. If you take a pre-trained word vector model, it's hard to associate it with a negative feeling. However, it is often used sarcastically to mean the opposite of 'going well'.

Would a CNN learn this?
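To make the 'manual weighting' idea concrete, here is a minimal sketch (my own illustration, not from the lesson notebook) of one way to up-weight a handful of words by scaling their rows in the pre-trained embedding matrix before training; the variable names (emb, word2idx) and the boost factor are assumptions:

import numpy as np

# Hypothetical: emb is a (vocab_size, n_factors) matrix of pre-trained word vectors
# and word2idx maps each word to its row index, as built earlier in the notebook.
important_words = ['terrible', 'brilliant']   # words we want the model to attend to
boost = 3.0                                   # arbitrary up-weighting factor

for w in important_words:
    if w in word2idx:
        emb[word2idx[w]] *= boost   # larger-norm rows give these words more influence

# The boosted emb would then be passed as weights=[emb] to the Embedding layer.

Whether this actually helps is an open question; the CNN may well learn the same thing on its own given enough examples.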

I believe I found a mistake/typo in the lesson 5 notebook, in the multi-size CNN.
When constructing the 'graph' model (multi CNN), its Input layer shape is written as

graph_in = Input((vocab_size, 50)) where vocab_size is 5000 (when it needs to be 500):

This graph model is then used in the sequential model after the embedding layer, which has an output of (500, 50):

So to fix that I would change the graph model Input layer to be:
graph_in = Input((seq_len, 50)), which is the output size of the embedding layer.
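For reference, a sketch of the multi-size CNN with that fix applied (Keras 1 functional API, roughly following the notebook; the exact layer arguments are from memory and may differ slightly):

from keras.models import Model
from keras.layers import Input, Convolution1D, MaxPooling1D, Flatten, merge

graph_in = Input((seq_len, 50))          # seq_len = 500, matching the Embedding output
convs = []
for fsz in range(3, 6):                  # filter sizes 3, 4 and 5
    x = Convolution1D(64, fsz, border_mode='same', activation='relu')(graph_in)
    x = MaxPooling1D()(x)
    x = Flatten()(x)
    convs.append(x)
out = merge(convs, mode='concat')        # concatenate the three branches
graph = Model(graph_in, out)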

I’m having some trouble instantiating the Vgg16BN() class and getting the error:

Unable to open file (File signature not found)

From what I've read, the vgg16_bn.h5 file might be corrupt or not in the correct format, so I'm currently re-downloading that file from http://files.fast.ai/models/.

Does anyone know of a resolution for this? I'd really like to use batch normalization with the vgg16 model.

I’m really proud of this result on the IMDB sentiment analysis challenge:

The model I used to achieve this looks like this:

With model_8 just being the graph model that Ben Bowles showed in his blog post.

I wondered this as well ("Why do we divide the embedding by 3?") and found that @jeremy has answered it in another thread.

Thank you, @johnlu.

bobby,

I had the same error when trying to run this line from the lesson3 notebook:
load_fc_weights_from_vgg16bn(bn_model)

After some searching, I (re)discovered that vgg16_bn.h5 was stored in my ~/.keras/models directory. An ls -lh command revealed that it was only 63K in size, whereas the oft-used vgg16.h5 (i.e. sans bn) is 528M.

ubuntu@ip-10-0-0-9:~/.keras/models$ ls -lh
total 528M
-rw-rw-r-- 1 ubuntu ubuntu 35K Jun 18 18:26 imagenet_class_index.json
-rw-rw-r-- 1 ubuntu ubuntu 63K Jun 18 18:04 vgg16_bn.h5
-rw-rw-r-- 1 ubuntu ubuntu 528M Jun 18 18:26 vgg16.h5

I thus wget'd the file again using the link you provided, and the above line from lesson 3 now runs without error for me.
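If you'd rather do it from within the notebook, here is a minimal sketch (my own, not from the lesson; the filename and URL are the ones mentioned above) that removes the truncated copy and lets Keras re-download it:

import os
from keras.utils.data_utils import get_file

# Delete the truncated vgg16_bn.h5 so get_file will fetch a fresh copy.
bad = os.path.expanduser('~/.keras/models/vgg16_bn.h5')
if os.path.exists(bad):
    os.remove(bad)

# Re-download into ~/.keras/models (cache_subdir='models').
path = get_file('vgg16_bn.h5', 'http://files.fast.ai/models/vgg16_bn.h5',
                cache_subdir='models')
print(path)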

HTH,
JP


Does anyone know why no batchnorm is used in any of the models in the lesson 5 notebook? Does it help to use batchnorm?

Awesome observation! That fixed my error as well. Appreciate it, jp_beaudry.

Hey everyone, I’m trying to re-implement Dogs and Cats using the functional API, but with little success…

I set up batches as earlier in the course, i.e.

batch_size=64
batches = get_batches(train_path, batch_size=batch_size)
val_batches = get_batches(valid_path, batch_size=batch_size*2)

Then built a model as per the functional API, and successfully loaded up the weights from vgg16.h5. But now when I try to run:

model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)

I get the error:

Exception: Error when checking model input: expected input_4 to have shape (None, 3, 244, 244) but got array with shape (64, 3, 224, 224)

(As if the generator isn’t recognising the batch dimension?)

Any help on this would be massively appreciated. (The capacity to identify cats from dogs has become rather central to my sense of self worth over the past month…)
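Not a definitive answer, but the error message expects (None, 3, 244, 244) while the generator yields (64, 3, 224, 224), which suggests the functional model's Input layer was declared with 244 instead of 224 (a guess, since the model code isn't shown). A minimal sketch of the relevant part:

from keras.layers import Input

# If the model was built like this, fit_generator would fail exactly as reported,
# because get_batches yields 224x224 images:
inp = Input(shape=(3, 244, 244))   # <-- likely typo

# Declaring the input as 224x224 makes it match the generator's batches:
inp = Input(shape=(3, 224, 224))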

@idano did you get an answer on this one? I would have expected this to throw an error but it looks like it works.

How to deal with this error?
Thanks

I have the same issue. I am using Python 3.4, and when I install Theano==0.9 I get an error like this.

Exception: Compilation failed (return status=1): g++.exe: error: Chan\theano\compiledir_Windows-10-10.0.15063-Intel64_Fa. g++.exe: error: Chan\theano\compiledir_Windows-10-10.0.15063-Intel64_Family_6_Model_78_Stepping_3_GenuineIntel-3.4.5-6. lazylinker_ext\mod.cpp: No such file or directory

In the sentiment example, why is setting all rare words to a specific value (5000) better than just dropping those rare words altogether?
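For context, the relevant line from the notebook looks roughly like this (reproduced from memory; x_train is the list of raw IMDB word-index sequences loaded earlier): every index at or above vocab_size-1 is clipped to a single 'unknown' id, so the sequence length and word positions are preserved rather than the rare words disappearing entirely.

import numpy as np

vocab_size = 5000
# Clip rare word ids to vocab_size-1 instead of dropping them; the model still
# sees that *some* rare word occurred at that position.
trn = [np.array([i if i < vocab_size - 1 else vocab_size - 1 for i in s]) for s in x_train]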

Hi all,

Can someone please explain why in the create_emb function we divide the emb matrix by 3 at the end?

Cheers,

Marco

@marcohs
This is explained in this post:


Lesson 5 is an amazing introduction to sentiment analysis. I tried to game the system by predicting the sentiment of a sarcastic review. Of course it failed (in fact, it got a better score than a truly honest positive review). Has anyone tried to train against sarcasm? Is it even possible, or is that A.I. 2.0?

import numpy as np
from keras.preprocessing import sequence

# ids, seq_len and conv1 come from earlier in the lesson 5 notebook
phrase = np.array([], dtype="int64")
phrase = np.append(phrase, [1])   # np.append returns a new array; 1 is the IMDB start-of-sequence marker

textphrase = 'yeah sure, you should trust the reviews, by all means, this is an amazing movie, come and enjoy :/ NOOOOT'

for o in textphrase.split(' '):
    if o in ids:
        phrase = np.append(phrase, ids[o])

padded_phrase = sequence.pad_sequences([phrase], maxlen=seq_len, value=0)

conv1.predict(padded_phrase, True)

output ---> array([[ 0.942]], dtype=float32)

I completely agree. Great catch, IMO.

The Embedding layer is based on an output tensor of size

(None, 500, 50)

as you pointed out. This fits properly with the normalized sequence length (500) and the dimension of embedding (50) attached to each sentence.

The size of the vocabulary is only relevant to the Embedding layer to understand the range of integers used to represent the inputs fed to that layer (i.e. the result of the word2idx).

The input size of the convolutional layers however should match the length of each sentence. The filters (3,4,5) are applied to the tensor of latent factors (500x50).

While the resulting matrices are ten times larger in the class's notebook than they are supposed to be, the number of resulting parameters does not change: it depends primarily on the size of the filters.

Bottom line: it still works, but it's probably not as efficient (it takes longer to train per epoch) and the results are probably slightly more "noise" prone (therefore taking more epochs to reach the same accuracy).
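A quick way to check the parameter-count point for the convolutional layers themselves (a standalone snippet, not from the notebook):

from keras.models import Sequential
from keras.layers import Convolution1D

# A Convolution1D layer's parameter count depends only on the filter length and the
# number of input channels (50 latent factors here), not on the sequence length.
for length in (500, 5000):
    m = Sequential([Convolution1D(64, 5, border_mode='same', input_shape=(length, 50))])
    print(length, m.count_params())   # 64 * (5*50 + 1) = 16064 in both cases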

@Jeremy / @Rachel: if you agree with @idano's assessment, what's the best way to correct this? A pull request?

hey,

I believe you can solve this by changing the line
if word and ....

to this:

if word in words and re.match(r"^[a-zA-Z0-9\-]*$", word):
