Why is TensorFlow perceived as more difficult than Theano?

Having read this post http://www.fast.ai/2017/01/03/keras/ and a few other comments in various videos and sites on the net, I wonder why people consider Theano to be easier or more intuitive than TensorFlow.

So far, being new to both, I’ve only come across a few differences: in TF one can point an optimizer at an arbitrary expression (AFAIK), while in Theano you call theano.grad() yourself, which is a lot more manual. Similarly, in TF there’s no need to wrap everything in a theano.function before running an expression.
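To make the comparison concrete, here is a minimal sketch (my own toy example, not course code, using the TF 1.x graph-mode API of that era) of minimising the same expression in both frameworks: TF takes the loss tensor directly in optimizer.minimize(), while Theano has you take the gradient and compile the update step yourself.

```python
import numpy as np
import tensorflow as tf
import theano
import theano.tensor as T

# --- TensorFlow (1.x graph mode): hand the loss straight to an optimizer ---
x_tf = tf.Variable(3.0)
loss_tf = tf.square(x_tf - 1.0)                      # arbitrary expression
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss_tf)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(x_tf))                            # ~1.0

# --- Theano: take the gradient and compile the update yourself ---
x_th = theano.shared(np.float64(3.0))
loss_th = (x_th - 1.0) ** 2
grad = T.grad(loss_th, x_th)                         # manual gradient
step = theano.function([], loss_th,
                       updates=[(x_th, x_th - 0.1 * grad)])

for _ in range(100):
    step()
print(x_th.get_value())                              # ~1.0
```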

"We prefer Theano over TensorFlow, because Theano is more elegant and doesn’t make scope super annoying."

Could someone elaborate on what this means? I’m just at the beginning of lecture 9 and there’s no mention of scopes yet.


I have the same question as well. Since the latest TF has adopted Keras, I guess it is pretty much the same now - we just use Keras and choose either TF or Theano as the backend - right?
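For what it's worth, that's how the backend choice looks in practice (a small sketch, assuming a standard Keras install): the model code is identical, and the backend is picked outside it via the KERAS_BACKEND environment variable or the "backend" field in ~/.keras/keras.json.

```python
# Backend is chosen outside the model code, e.g.
#   export KERAS_BACKEND=theano      (or "tensorflow")
# or by editing the "backend" field in ~/.keras/keras.json.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit(x_train, y_train, epochs=5)  # same call on either backend
```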

We avoided using all the scope/context manager stuff in the course. But this is the kind of weirdness we’re trying to stay away from: https://www.tensorflow.org/programmers_guide/variable_scope
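To give a flavour of what that page is about (my own small sketch of the TF 1.x API, not something from the course): sharing a variable means re-entering the same scope with reuse=True, and getting it wrong raises an error - bookkeeping that Theano's shared variables never ask for.

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 10])

with tf.variable_scope('layer1'):
    w = tf.get_variable('w', shape=[10, 5])        # creates "layer1/w"
    out1 = tf.matmul(x, w)

with tf.variable_scope('layer1', reuse=True):
    w_again = tf.get_variable('w', shape=[10, 5])  # fetches the same "layer1/w"
    out2 = tf.matmul(x, w_again)

# Leaving out reuse=True in the second block raises
# "ValueError: Variable layer1/w already exists" - the scope
# bookkeeping the thread is calling "weirdness".
```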
