Reduce size of volume

Haha … I can now see why you are saying that!
Any chance nvcc needs to be installed? I ask because when I do import theano I get this: ERROR (theano.sandbox.cuda): nvcc compiler not found on $PATH. Check your nvcc installation and try again.
After I installed nvcc that error went away.

These lines should have installed nvcc:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.44-1_amd64.deb
sudo apt-get update
sudo apt-get -y install cuda

You may, however, have to add CUDA to your path (e.g. see https://www.cs.colostate.edu/~info/cuda-faq.html ).
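For example, something like the following in your ~/.bashrc would do it (this assumes the default CUDA 8.0 install location of /usr/local/cuda-8.0 - adjust the version in the paths if yours differs):

```shell
# Add the CUDA toolkit binaries and libraries to the search paths
# (assumes the default /usr/local/cuda-8.0 install location)
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
```

After sourcing your ~/.bashrc (or opening a new shell), `which nvcc` should then resolve.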


When I:

import utils; reload(utils)
from utils import plots

I get this:

Using Theano backend.
Using gpu device 0: Tesla K80 (CNMeM is disabled, cuDNN 5103)
/home/ubuntu/anaconda2/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.
  warnings.warn(warn)

Is the GPU device being used here? I am running a p2 instance.
Edit: I guess it did get used, because the training finished fast…

Yup - ‘Using gpu device 0: Tesla K80’ shows that. Congrats!

Cheers @jeremy … install_gpu script did 98% of the work though!

Your next challenge, if you choose to accept it, is to create an AMI that others can use. To do so, remove all data, .bash_history, personal files, etc. from the instance, and then log out. Then log in, remove .bash_history again, remove the .ssh directory, and stop the instance. Then you can right-click on the instance in the console and choose ‘create image’. Then you should terminate your original instance (since you removed the .ssh directory, you can’t log in to it anymore anyway!). Once it’s finished creating, you can try to launch a new instance using that image. If that works, you can set the image sharing to ‘public’ and let everyone know the id!
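As a rough sketch, the cleanup before imaging might look like this (the data path is just an example - remove whatever personal files actually live on your instance):

```shell
# Clean personal files off the instance before creating the AMI
rm -rf ~/data                 # example path: course datasets
rm -f ~/.bash_history         # shell history (remove again after re-login)
rm -rf ~/.ssh                 # SSH keys - you can't log back in after this!
```

Note the .ssh removal is the point of no return, so do it last, right before stopping the instance.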

You may prefer to create a whole new instance for this, in order to both ensure that everything is setup the way you want, and in order to avoid breaking your working instance… :slight_smile:

Sounds good … I am going to start working on it. I kept the storage volume at 30GB … hope that suffices for the remainder of the course? I guess if not, we can always re-download old exercise data later, or transfer it to S3 ($0.03 per GB).

Archiving to S3 sounds like a good plan. 30GB in use at any one time seems reasonable.
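A hypothetical archive-to-S3 step might look like this (the dataset path and bucket name are made up, and the stand-in files are only there so the example runs end to end):

```shell
# Create a stand-in dataset so the example is self-contained
mkdir -p data/dogscats/train
touch data/dogscats/train/cat.0.jpg

# Compress the finished dataset, push it to S3, then free the local space
tar czf dogscats.tar.gz data/dogscats/
# aws s3 cp dogscats.tar.gz s3://my-fastai-archive/   # example bucket name
# rm -rf data/dogscats dogscats.tar.gz                # reclaim volume space
```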

If I create this AMI and launch it with a t2 instance, should it still work even though I installed all the GPU-related driver software? I ask because at the moment I only have access to one p2 instance - which means I cannot test a new p2 instance with the AMI. Will using the t2 instance work to test the correctness of the AMI?

I suspect it won’t. Maybe a g2 instance would work?

Oh … forgot about that. Let me give it a try.

@vshets @jeremy is this one way to reduce the following AWS cost:

$0.10 per GB-month of General Purpose SSD (gp2) provisioned storage - US West (Oregon)?

Is it possible to control this cost by deleting older data sets? Like cats and dogs, maybe?

No. The cost is based on the size of your provisioned volume - it doesn’t matter how full it is.
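A quick back-of-the-envelope comparison at that rate:

```python
# gp2 is billed on provisioned size, not on how much of the volume is full
GP2_RATE = 0.10  # USD per GB-month (US West, Oregon, as quoted above)

for size_gb in (128, 30):
    print("{} GB volume: ${:.2f}/month".format(size_gb, size_gb * GP2_RATE))
# 128 GB volume: $12.80/month
# 30 GB volume: $3.00/month
```

So shrinking from 128GB to 30GB saves about $10/month regardless of what's stored on the volume.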

OK @jeremy, so I will need to create and install a smaller AMI? I will need to redownload all the data and scripts I am working on, right…

You can just use the AMI that @vshets created :slight_smile: He announced it on Slack and I sent a notification about it to @channel , so hopefully you can find it there.


Will do… Thanks!

Thanks @vshets :slight_smile:

@vshets - I used your AMI and created a new instance today. It has been working pretty well till now. Hopefully, the costs will be under control now!
Thanks a lot :thumbsup:

Awesome! Please make sure you deleted your old volume (128 GB) - otherwise you will be paying for both that and the new volume.

Sure, that was the first thing I had done.

Oops - when I opened the console just now to verify the new volume, it shows 128 GiB … I am wondering what went wrong… :worried:

Here’s what I did:
In the setup_p2.sh, I changed the AMI to ami-9c54f4fc and VolumeSize to 30.
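For reference, the two edits were roughly equivalent to this (using a demo file in place of the real setup_p2.sh, since the exact contents of the script may differ from this sketch):

```shell
# Demo stand-in for setup_p2.sh, with the two values to change
cat > setup_p2_demo.sh <<'EOF'
export ami="ami-00000000"
export vol='{"VolumeSize":128,"VolumeType":"gp2"}'
EOF

# Point at the shared AMI and shrink the root volume to 30 GB
sed -i 's/ami-[0-9a-f]*/ami-9c54f4fc/' setup_p2_demo.sh
sed -i 's/"VolumeSize":[0-9]*/"VolumeSize":30/' setup_p2_demo.sh
```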
Did I miss something?

That should in theory work. Any chance you want to start all over, as described here: http://wiki.fast.ai/index.php/AWS_install#Starting_Over ?