Py3 and TensorFlow setup

Installing TensorFlow, Python 3, and friends

So far we’ve been using Python 2 with Theano as our keras backend. For part 2 of the course we need to install Python 3 and TensorFlow, and make them our default Python and keras backend. The approach we’ll be using here is to update our path and keras config so that Python 3 and the TensorFlow backend are used from now on - even for existing projects (so you may need to make some changes to existing projects to keep them working). If you’d prefer to be able to switch back and forth between Python versions easily, you should follow these tips instead.

We could, of course, have simply provided a new AMI so you wouldn’t have to worry about any of this, but we decided on the manual route because:

  • We don’t want you to lose your existing work, and
  • We think it’s important to learn how to manage your server yourself, so now’s a good time to start if you haven’t done this before - just reply below if you have any questions or issues!

To start, ssh into your p2 instance as we’ve done before. First we’re going to update our Linux libraries and drivers (most importantly, this will install a new Nvidia driver and CUDA version):

sudo apt update
sudo apt upgrade

Next we’re going to download and install Anaconda’s new Python 3.6 distribution:

cd
mkdir -p downloads
cd downloads
wget https://repo.continuum.io/archive/Anaconda3-4.3.0-Linux-x86_64.sh
bash Anaconda3-4.3.0-Linux-x86_64.sh -b

Next we need to replace our path in our bash configuration file to point to Anaconda 3:

cd
vim .bashrc

Once vim is open, replace the Anaconda 2 path with the Anaconda 3 path by simply changing the 2 to a 3 (it’ll probably be the last line of the file). As a reminder: press ‘i’ to enter insert (editing) mode, Esc when done editing, and ‘:wq’ to write the file and exit the editor.
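If you’d rather not edit the file by hand, a one-line substitution does the same thing (a sketch, assuming your PATH line references the anaconda2 directory from the part 1 setup):

sed -i 's/anaconda2/anaconda3/g' ~/.bashrc

Next, reboot the instance to ensure the new drivers are loaded: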

sudo shutdown -r now

…and ssh back in. Now we’re going to download and install the new version of cudnn:

cd downloads/
wget http://files.fast.ai/files/cudnn-8.0-linux-x64-v5.1.tgz
tar -zxf cudnn-8.0-linux-x64-v5.1.tgz
cd cuda/
sudo cp lib64/* /usr/local/cuda/lib64/
sudo cp include/* /usr/local/cuda/include/
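
Optionally, you can sanity-check that the driver and the cudnn files are in place before moving on (this assumes CUDA is installed under /usr/local/cuda, as on the course AMI):

nvidia-smi
grep -A 2 CUDNN_MAJOR /usr/local/cuda/include/cudnn.h

The first command should report the new Nvidia driver, and the second should show major version 5 and minor version 1.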

Next we’re going to install the latest versions of tensorflow, bcolz, and keras, as well as update all our conda packages:

pip install tensorflow-gpu
conda install bcolz
conda update --all
pip install git+git://github.com/fchollet/keras.git

Now we need to configure keras to use tensorflow as opposed to theano. This can be done with:

echo '{
	"image_dim_ordering": "tf",
	"epsilon": 1e-07,
	"floatx": "float32",
	"backend": "tensorflow"
}' > ~/.keras/keras.json
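
(If ~/.keras doesn’t exist yet on your machine - keras creates it the first time it runs - make it first with mkdir -p ~/.keras, or the redirect will fail.)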

To test our configuration, launch ipython and check that tensorflow and keras import successfully:

ipython
import tensorflow
import keras
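
While you’re in ipython, you can also confirm that keras picked up the TensorFlow backend and that TensorFlow can see the GPU (an optional check; device_lib is part of TensorFlow’s Python client):

keras.backend.backend()
from tensorflow.python.client import device_lib
[d.name for d in device_lib.list_local_devices()]

The first line should return 'tensorflow', and the device list should include a '/gpu:0' entry.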

Exit with Ctrl+d. Now we’re going to create a unique password for our notebooks, so that we’re not all using the same password like we were in part 1!

python -c "from notebook.auth import passwd; print(passwd())"

Enter a password at the prompt, and copy the output.
Next, open up the jupyter notebook configuration file:

cd
vim .jupyter/jupyter_notebook_config.py

Scroll to the bottom and replace the previous password hash with the output you generated in the previous step.
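The relevant line should end up looking something like this (the hash is just a placeholder - paste the output you copied):

c.NotebookApp.password = u'sha1:...your output here...'

Finally we’re going to configure extensions and launch our notebook: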

pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
jupyter nbextensions_configurator enable --user
cd nbs
jupyter notebook

Go to the appropriate port (typically 8888) and test your new password, as well as testing tensorflow and keras. If everything works, you’re good to go!


If, like me, you are running Python on your own box and are not using Anaconda (because of an existing installation), then you might run into this: when you run jupyter notebook you cannot create a Python 3 notebook, only Python 2.
This is how I alleviated the problem:

sudo apt-get remove ipython3 ipython3-notebook
sudo apt-get install python3-pip
sudo -H pip install --upgrade pip
sudo -H pip3 install --upgrade pip
sudo -H pip3 install ipython jupyter
sudo ipython kernelspec install-self
sudo ipython3 kernelspec install-self

For the rest of you, Anaconda3 may well be a good option =:-D

See you in class

Note: I am running Ubuntu 16.04


Which TensorFlow version do we need to install? I think they released version 1.0 in the last few weeks. Will we use 1.0? (And are there other versions we should watch out for?)

Yes v1.0. If you follow the steps above it will install the correct version.
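
You can confirm which version got installed with a one-liner:

python -c "import tensorflow as tf; print(tf.__version__)"

It should print 1.0.x.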

If you install Anaconda into your user folder it makes life much easier IMO, and you don’t have to worry about any existing installation - the two can live together happily! :slight_smile:


I was able to install Python 3 alongside Python 2 by following the “these tips” link Jeremy gave.

One issue I had was that even after installing tensorflow, I was not able to import it from IPython.
However, I was able to import it without error from plain Python.

I fixed it as below: switch to the Python 3 environment, then install ipython into that environment (this makes sure the ipython you launch uses that environment’s Python rather than the system one):

source activate <python3 env name>
conda install ipython

I also put a pair of handy functions in my “.bashrc” to let me easily switch between theano and tensorflow.

Create 2 files, keras.json.tensor and keras.json.theano respectively, in the ~/.keras directory in the standard format.
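
For example, the two files can mirror the config shown earlier in this thread, differing only in the backend and dim ordering (the theano variant below follows the same standard format):

keras.json.tensor:

{
	"image_dim_ordering": "tf",
	"epsilon": 1e-07,
	"floatx": "float32",
	"backend": "tensorflow"
}

keras.json.theano:

{
	"image_dim_ordering": "th",
	"epsilon": 1e-07,
	"floatx": "float32",
	"backend": "theano"
}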

Add the functions below to .bashrc:

gotensor() {
    rm -f ~/.keras/keras.json
    cp ~/.keras/keras.json.tensor ~/.keras/keras.json
    cat ~/.keras/keras.json
}

gotheano() {
    rm -f ~/.keras/keras.json
    cp ~/.keras/keras.json.theano ~/.keras/keras.json
    cat ~/.keras/keras.json
}

Then open a new shell (or source ~/.bashrc) to pick up the functions.

When you want to use tensorflow, just call “gotensor” in your python3 environment; “gotheano” switches back.


I would be careful with that; one of the great things about Anaconda is that it installs optimized versions of key libraries that can be as much as 40x faster than the default versions (Numpy, for example). It also links them properly against the correct libs, which brings large performance improvements.

That’s a big selling point for Anaconda over using pip and manually linking things.


Thanks, I’ll consider it. The point about speed is especially interesting - I’ve occasionally had speed issues with part 1.

+1 for Anaconda. Not only is it a separate environment, you can actually create different setups within it that have specific libraries imported. I wasn’t sure about switching to it either, but after doing so for this course I’m never going back.


It works! :wink:

Because I’m using spot instances, which I set up and tear down every time, I updated my setup script to reflect the changes described above:
https://github.com/jonas-pettersson/fast-ai/blob/master/scripts/install-gpu-tf.sh

Here is a description of how to set up an AWS spot instance using the script:


For spot instances, would it be easier to have Anaconda etc. installed onto an EBS volume, and then simply attach the volume to the new instance after you create it? (Which you could do in your script.) For instance, you could attach the EBS volume as your home directory, which means your configuration changes would also be saved automatically.
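
In shell terms the idea is roughly the following (a sketch only - the volume id, instance id, and device name are placeholders, and the mount step assumes the volume already carries a filesystem):

aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/xvdf
sudo mount /dev/xvdf /home/ubuntu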


Thanks, I will try that. I believe I still pay something per GiB for a volume even when it is not attached to an instance, but it’s not much. I will try it and let you know.
If I can save the time for the file transfer, it might be worth it.

UPDATE: Hmm… I cannot detach a volume from an instance while it is running, and it is a feature of spot instances that they cannot be stopped, only terminated. With termination the volume is gone, of course, so it seems I’m stuck with the procedure of setting everything up again. Which is fine, at least for me.

UPDATE 2: I was wrong in my belief that the volume from a spot instance cannot be saved - there is an option for that ("DeleteOnTermination": false). Inspired by @slavivanov I am now working on a similar approach. However, I think it is not necessary to generate the JSON from text; instead I can use a JSON config file like this:
aws ec2 request-spot-fleet --spot-fleet-request-config file://config.json
I will work on this and update when I have it ready.
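
For reference, a minimal config.json for request-spot-fleet looks roughly like this (all values are placeholders; the "DeleteOnTermination": false on the root volume is what keeps it around after termination):

{
    "IamFleetRole": "arn:aws:iam::<account-id>:role/<your-spot-fleet-role>",
    "TargetCapacity": 1,
    "SpotPrice": "0.90",
    "LaunchSpecifications": [
        {
            "ImageId": "ami-xxxxxxxx",
            "InstanceType": "p2.xlarge",
            "KeyName": "<your-key-name>",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "DeleteOnTermination": false,
                        "VolumeSize": 128,
                        "VolumeType": "gp2"
                    }
                }
            ]
        }
    ]
}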

This looks like a great approach: Persistent AWS Spot Instances (How to)


Hey everybody!

I’ve been working on migrating some of the models (and pre-trained weights) from Part 1 of this course to TensorFlow. This is mostly so I can export TensorFlow graphs that will run on mobile devices (TensorFlow has decent support for running trained models on mobile devices).

I’ve put together a Python Notebook that walks through the conversion process and some of the gotchas:

It might be useful to some of you, especially if you already have a trained model that you want to use with TensorFlow.
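
As an aside, one core piece of any such conversion is flipping the convolution kernels, since Theano computes a true convolution while TensorFlow computes a cross-correlation. Keras has a built-in helper for that part (a sketch - the weights filename is a placeholder, and this alone doesn’t fix the dim-ordering gotchas the notebook covers):

from keras.utils.layer_utils import convert_all_kernels_in_model

model.load_weights('weights_trained_with_theano.h5')
# flips every convolution kernel in place so it computes the
# same thing under the other backend
convert_all_kernels_in_model(model)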

Cheers!
James

Update (Feb 27th): Sadly, something is not right in my script. Everything works when using Theano dim ordering, but the conversion to TensorFlow dim ordering is broken somewhere… I updated the notebook to make things clearer. I could use some help, if anyone else is interested.


Thanks again, I will have a closer look and report any results.

UPDATE: It works very well - you can create a new spot instance, mount an existing volume, and use it in the way @jeremy proposed. I will finalize, test, and document the following scripts, but I’m posting them now so anyone interested can already have a look at how I did it.


  • setup_aws_spot_w_remount.sh: sets up a new spot instance and executes the remount
  • specification.json: configuration file for creating the spot instance
  • remount_root.sh: remount script executed on the newly created spot instance
  • remove_aws_spot.sh: cancels the spot request, terminates the instance, and removes the (empty) default volume of the spot instance

UPDATE 2: The above scripts are now tested, corrected, and documented. Certainly not everything is checked, and because of the swap-root-volume operation they should be used with some care. But they should be useful for anyone taking the approach of spot instances and mounting an existing volume as root. Further explanations are in the scripts themselves.

After upgrading, will I be able to run the scripts from part 1?

@sakiran It depends. The Vgg model from Part 1 doesn’t work without some modifications, and any saved weights for convolutional layers need some significant adjustments to work.

See my post/notebook above for more details.

There wouldn’t be any issues if I have Python 2 and Python 3 installed simultaneously, correct?

@sakiran Python 3 will be your default environment after upgrading; you won’t be using both simultaneously (it uses whichever version is first in your path - in this case Python 3).

You shouldn’t have any issues, other than a few particularities of Python 3 syntax that should be simple to fix when you encounter them.

@jpuderer Are you sure it’s working correctly? Training the model results in a validation accuracy of 0.5000, i.e. random chance for a two-class problem. It doesn’t seem like the model is being optimized properly, unless I’m missing something.