Python 3 Upgrade Troubleshooting (Aug 2017 edition :))

Just putting this out there in case people have problems with their upgrade to Python 3 when taking the MOOC.

  1. If you get a version `GLIBCXX_3.4.20` not found error, try re-installing libgcc. I did it with Conda, so I ran:

conda install libgcc

  2. If you get ModuleNotFoundError: No module named ‘xgboost’

Install xgboost and any other modules the notebooks complain about:

pip install xgboost
pip install gensim
pip install keras-tqdm

Or see this thread: Lesson 8 in-class

  3. If you get an error from Keras: cannot import name ‘initializations’

In the utils2.py file, change this:
from keras import initializations

To this:
from keras import initializers

Keras 2.0 renamed the module.
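For reference, only the module name changed in Keras 2; usage is the same. A minimal sketch (the ‘glorot_uniform’ lookup is just an illustration, not necessarily what utils2.py does):

from keras import initializers

# Keras 2: look up an initializer by name, the same way
# initializations.get(...) worked in Keras 1
init = initializers.get('glorot_uniform')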

I’ll post more issues/solutions as I see them. Just wanted to put this out in the forums in case others are having similar issues.


More here:

Looks like a lot of this comes from the move to Keras 2.0.

If you run into an issue of “_obtain_input_shape() got an unexpected keyword argument 'dim_ordering'”, read this post.

If you are getting yellow images in lesson 8, replace your loss function for content with the following:
loss = K.mean(metrics.mse(layer, targ))
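
For context, here is a minimal sketch of how that line fits into the lesson 8 content-loss setup. The names layer, targ, and model follow the notebook, but everything around the quoted line is my assumption about the setup (the notebook builds its own VGG with average pooling), not a copy of it:

import numpy as np
import keras.backend as K
from keras import metrics
from keras.applications.vgg16 import VGG16

# Hypothetical minimal setup; treat the model and layer choice as stand-ins
model = VGG16(include_top=False, input_shape=(224, 224, 3))
layer = model.get_layer('block4_conv2').output

content = np.random.rand(1, 224, 224, 3).astype('float32')  # stands in for the content image
get_acts = K.function([model.input], [layer])
targ = K.variable(get_acts([content])[0])  # fixed target activations

# metrics.mse only reduces the last axis; K.mean collapses the result to
# the scalar loss that fmin_l_bfgs_b expects. Without the K.mean wrapper
# the un-reduced tensor can combine incorrectly downstream.
loss = K.mean(metrics.mse(layer, targ))
grads = K.gradients(loss, model.input)
fn = K.function([model.input], [loss] + grads)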


Thanks for the pointers, they were all relevant to my setup.
However, now I’m stuck at
x = solve_image(evaluator, iterations, x)
which gives a rather lengthy error message, included below (shortened).
Has anyone observed this as well?

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-44-d733ab4b440a> in <module>()
----> 1 x = solve_image(evaluator, iterations, x)

<ipython-input-25-d3d1b9a0479e> in solve_image(eval_obj, niter, x)
      2     for i in range(niter):
      3         x, min_val, info = fmin_l_bfgs_b(eval_obj.loss, x.flatten(),
----> 4                                          fprime=eval_obj.grads, maxfun=20)
      5         x = np.clip(x, -127,127)
      6         print('Current loss value:', min_val)

[...SNIP...]

/mounts/Users/myusername/.local/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1338         except KeyError:
   1339           pass
-> 1340       raise type(e)(node_def, op, message)
   1341 
   1342   def _extend_graph(self):

InvalidArgumentError: Incompatible shapes: [64] vs. [128]
	 [[Node: add_1 = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](add, Mean_4)]]
	 [[Node: gradients_2/block1_conv1_1/convolution_grad/Conv2DBackpropInput/_301 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_459_gradients_2/block1_conv1_1/convolution_grad/Conv2DBackpropInput", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'add_1', defined at:
  File "<string>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/IPython/kernel/zmq/kernelapp.py", line 469, in main
    app.start()

[...SNIP...]

  File "/mounts/Users/myusername/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Incompatible shapes: [64] vs. [128]
	 [[Node: add_1 = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](add, Mean_4)]]
	 [[Node: gradients_2/block1_conv1_1/convolution_grad/Conv2DBackpropInput/_301 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_459_gradients_2/block1_conv1_1/convolution_grad/Conv2DBackpropInput", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Hey Ben,
This error should also be fixed by the “loss = K.mean(metrics.mse(layer, targ))” change from grahac’s comment above, except that for the style loss you need to change
metrics.mse(gram_matrix(x),gram_matrix(targ))
to
K.mean(metrics.mse(gram_matrix(x),gram_matrix(targ)))
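
To make that concrete, here is a sketch of the style loss with the K.mean wrapping applied. Without K.mean, metrics.mse returns a tensor whose size is the layer’s channel count, so summing terms from different VGG blocks fails with exactly the “Incompatible shapes: [64] vs. [128]” error above. gram_matrix is the notebook’s helper reproduced from memory, and style_layers/style_targs are my stand-ins for the per-layer activations:

import keras.backend as K
from keras import metrics

def gram_matrix(x):
    # rows are channels, columns are flattened spatial locations; the dot
    # product of this with its transpose measures channel correlations
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
    return K.dot(features, K.transpose(features)) / x.get_shape().num_elements()

# stand-ins for the notebook’s per-layer VGG activations; the differing
# channel counts (64 vs. 128) mirror the shapes in the error above
style_layers = [K.placeholder((224, 224, 64)), K.placeholder((112, 112, 128))]
style_targs = [K.placeholder((224, 224, 64)), K.placeholder((112, 112, 128))]

# K.mean reduces each per-layer mse tensor to a scalar before summing,
# so layers with different channel counts can be added safely
loss = sum(K.mean(metrics.mse(gram_matrix(l1), gram_matrix(l2)))
           for l1, l2 in zip(style_layers, style_targs))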

That at least got me past the error you pasted, but now I’m running into a different problem: extremely high loss values:
current loss value: 24357.9316406
current loss value: 21018.1699219
current loss value: 18276.6582031
current loss value: 16108.6474609
current loss value: 14343.5166016
current loss value: 12834.8105469
current loss value: 8366.41308594
current loss value: 4700.38867188
current loss value: 4118.77929688
current loss value: 3665.37548828

Jeremy’s losses ended around 5 for this problem, and you can clearly see the difference in the resulting image.
I can extend the iterations out further, but I don’t think that gets to the source of the problem. I’m going to keep poking at the code, but if anyone saw similar issues and/or found a solution, please let me know!


What about this one?

 12 from gensim.models import word2vec
 13 from keras.preprocessing.text import Tokenizer
---> 14 from nltk.tokenize import ToktokTokenizer, StanfordTokenizer
 15 from functools import reduce
 16 from itertools import chain

ImportError: cannot import name 'StanfordTokenizer'

Go to utils.py,

remove StanfordTokenizer from the original import line,

and add this line:

from nltk.tokenize.stanford import StanfordTokenizer

It’s an NLTK 3.2.5 thing.
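
Concretely, assuming the import line in utils.py matches the traceback above, the change looks like this:

# before (fails on NLTK 3.2.5):
# from nltk.tokenize import ToktokTokenizer, StanfordTokenizer

# after:
from nltk.tokenize import ToktokTokenizer
from nltk.tokenize.stanford import StanfordTokenizer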

Removing StanfordTokenizer from there and adding from nltk.tokenize.stanford import StanfordTokenizer works for me

Hey @xenoproboscizoid, were you ever able to solve this? I’ve been stuck on it for a while. I’m able to get my losses into the 300s, but never close to where Jeremy’s were in the lecture vids. I’m sure I’m missing something small.