The following message suggests there will be no further development work on Theano
Yep, it is!
I recently set up my own DL server and installed the latest CUDA, cuDNN, TensorFlow, Keras and Theano. Then I tried to use the notebooks from the first part of the course. I was surprised to find that Theano does not work with the latest versions of cuDNN and CUDA, so I had to switch to TensorFlow.
I will try to reimplement all the Theano models from the course in TensorFlow, which sounds like a challenge but also a great opportunity to learn.
It feels like a lot of development effort is going into TensorFlow and PyTorch, so it seems safe to say that Theano is no longer worth studying and one should focus on other frameworks.
Is the fast.ai course still worth doing? I was about to start Part 1 and noticed that the Getting Started video mentions running Keras over Theano. Is it trivial to just use the notebooks with Keras over TF, or is the course doomed?
Definitely still worth doing. You can use Keras on top of TF and it will work. In addition, there is now a second version of Part 1, which uses PyTorch, so you have a choice of which framework to learn. But frameworks aside, the theoretical concepts you will learn do not depend on the software you are using, and they are taught very well in this course.
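For reference, switching Keras from Theano to TensorFlow is just a config change. A minimal sketch, assuming a Keras 2-era config file in its default location (`~/.keras/keras.json`); the `image_data_format` key is the Keras 2 name (Keras 1 used `image_dim_ordering`), and the `floatx`/`epsilon` values shown are the usual defaults, so double-check against your Keras version:

```shell
# Point Keras at the TensorFlow backend by rewriting its config file.
# ~/.keras/keras.json is the default Keras config location.
mkdir -p "$HOME/.keras"
cat > "$HOME/.keras/keras.json" <<'EOF'
{
    "backend": "tensorflow",
    "image_data_format": "channels_last",
    "floatx": "float32",
    "epsilon": 1e-07
}
EOF
```

After this, `import keras` should print "Using TensorFlow backend." instead of the Theano message.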
Does the same AWS setup script for v1 work for v2?
I recently started fast.ai and ran into a lot of problems with the Theano setup. After a lot of trial and error I eventually got it working; here are the notes I made, which might help others.
Deep learning setup for fast.ai:
- Set up an AWS p2.xlarge instance, which has a Tesla K80 GPU.
- Run the install script (install-gpu.sh) up to the CUDA install section.
- Install CUDA v8 (not the default v9): sudo apt-get install cuda-8
- Continue with the install script: g++, Anaconda, cuDNN, Theano, Keras.
- Change the device in ~/.theanorc to cuda0 (not gpu).
- Complete the rest of the setup install script.
- Update theano, libgpuarray, and pygpu from the MILA (Montreal Institute for Learning Algorithms) channel:
conda update -c mila-udem/label/pre theano libgpuarray pygpu
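The ~/.theanorc change above can be sketched as follows. This writes a local example file rather than touching ~/.theanorc directly; `device = cuda0` comes from the notes (the new libgpuarray backend rejects the legacy `gpu` name), while the `floatx` line is a common companion setting I've added as an assumption:

```shell
# Example Theano config for the new libgpuarray backend:
# device must be "cuda0", not the legacy "gpu" name.
# floatx=float32 is assumed here, not from the original notes.
cat > theanorc.example <<'EOF'
[global]
device = cuda0
floatx = float32
EOF
# then, on the instance: cp theanorc.example ~/.theanorc
```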
What’s the easiest way to find out what tools the v2 course uses? PyTorch, TF, Keras 2, and so on…
v2 uses the Fast AI library, which is built on top of PyTorch.
Should I use this one if I’m setting up on Ubuntu 16.04 AWS image from scratch?
OR should I go with the method suggested by @wgrimes ?
The Paperspace script is for the newly released Part 1 v2 course, but the Theano instructions are for the previous course, Part 1 v1.
I’m still stuck on the question of “what’s the easiest way to find out”.
I started a new topic for this: