Reduce size of volume

@mcr Apparently the cost is not exactly $0.90 per hour. You also have to factor in the cost of the EBS volume and any reserved IPs.

A good post explaining this: How to avoid billing when using t2 and after releasing IP addresses?

So as I understand it, using a 30GB volume instead of 128GB will save you almost $10/month.

1 Like

Hey @talldragon
Do you mind copying your AMI to EU (Ireland)? I tried doing the same but didn’t have permission to copy.

Here is the direct link: https://us-west-2.console.aws.amazon.com/ec2/v2/home?region=us-west-2#Images:visibility=public-images;imageId=ami-951609ec;sort=name

(Right-click on your AMI and you’ll see a Copy AMI option.)
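
An AMI can only be copied by someone with permission on it, which is probably why the attempt failed. A hedged sketch of the CLI command the AMI’s owner could run (the target name "fastai-p2-eu" is a made-up example; the command is printed rather than executed since it needs the owner’s AWS credentials):

```shell
# ami-951609ec is the public AMI linked above; copying it into eu-west-1
# has to be done by the account that owns it (or one granted permission).
copy_cmd='aws ec2 copy-image --source-region us-west-2 --source-image-id ami-951609ec --region eu-west-1 --name "fastai-p2-eu"'
echo "$copy_cmd"
```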

Cheers

1 Like

Hi All,

I’m new to the ML field and to the other tools Jeremy mentions in the setup video. I wanted your guidance on whether I should go with FloydHub or follow the steps mentioned here to reduce the volume size. I believe both approaches aim to reduce the cost a learner has to pay to complete the assignments.

Further, FloydHub now offers 2 hours in its trial version rather than the 100 hours it used to offer. So,

  1. Floydhub or
  2. Reduce volume

Also, how much time on an average will it take to complete all assignments?

Hi @vikbehal,

You can probably expect to spend around seventy hours to complete all the assignments.

I personally went with a reduced volume of 30GB and it’s worked fine for me. When I need more space I attach a larger second volume. I haven’t used Floydhub though, so I can’t comment on it.
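
In case it helps, here’s a rough sketch of how that second volume can be attached from the CLI (the size, IDs, device names, and mount point are all made-up examples, and the commands are printed rather than executed since they need live AWS credentials):

```shell
# Create a larger gp2 data volume and attach it; vol-/i- IDs are placeholders.
attach_cmds=$(cat <<'EOF'
aws ec2 create-volume --size 100 --volume-type gp2 --availability-zone us-west-2a
aws ec2 attach-volume --volume-id vol-0abc123 --instance-id i-0abc123 --device /dev/sdf
EOF
)
# Then, on the instance itself, format it (first attach only!) and mount:
mount_cmds=$(cat <<'EOF'
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data && sudo mount /dev/xvdf /data
EOF
)
echo "$attach_cmds"
echo "$mount_cmds"
```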

Hi,
Is there a P2 30GB image available in EU West?

Thank you @z0k. I’m new to cloud environments, so reducing the volume would mean bringing down the cost? Also, just to confirm: it took 70 hours to complete ALL the assignments?

Yes, the cost depends on the size of the EBS volume. It’s generally around $0.10 per GB-month of provisioned storage, so a 30GB volume costs around $3/month.
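
As a quick back-of-the-envelope check, assuming that ~$0.10/GB-month gp2 rate:

```shell
# EBS gp2 storage cost, in cents to keep shell arithmetic integral
rate_cents_per_gb_month=10
small=$(( 30  * rate_cents_per_gb_month ))   # 300 cents  = $3.00/month
large=$(( 128 * rate_cents_per_gb_month ))   # 1280 cents = $12.80/month
savings=$(( large - small ))                 # 980 cents ≈ $9.80/month saved
echo "monthly savings: \$$(( savings / 100 )).$(( savings % 100 ))"
```

which lines up with the "almost $10/month" figure earlier in the thread.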

I think the 70 hours figure I cited includes working through and experimenting with the lecture notebooks. Based on my first run through the MOOC, I believe it’s a good estimate for the time investment to get the most out of the course.

Hey everyone,

just starting the course today. Super exciting!

Is there any AMI I could use for P2 - 30GB, in EU (Ireland) - eu-west-1?

I’m pretty new to AWS…

Thanks in advance.

@z0k, where do I find the exact changes? I looked at what @nunb said. He mentioned using the new AMI that @vshets posted, but I’m unable to find the changes. Please guide me.

I’m on Windows and installing everything for the first time. As mentioned, should I change the AMI from ‘ami-bc508adc’ to ‘ami-64c5cc1d’?

and change 128 to 30?

export instanceId=$(aws ec2 run-instances --image-id $ami --count 1 --instance-type $instanceType --key-name aws-key-$name --security-group-ids $securityGroupId --subnet-id $subnetId --associate-public-ip-address --block-device-mapping "[ { \"DeviceName\": \"/dev/sda1\", \"Ebs\": { \"VolumeSize\": 128, \"VolumeType\": \"gp2\" } } ]" --query 'Instances[0].InstanceId' --output text)
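
For clarity, here’s how I think the relevant part would look with both changes applied (a sketch only, taken from the posts above; please correct me if the AMI or the quoting is wrong):

```shell
ami=ami-64c5cc1d   # instead of ami-bc508adc
# Root volume reduced from 128GB to 30GB; note the inner quotes must be escaped.
blockDeviceMapping="[ { \"DeviceName\": \"/dev/sda1\", \"Ebs\": { \"VolumeSize\": 30, \"VolumeType\": \"gp2\" } } ]"
echo "$blockDeviceMapping"
# The launch command is then unchanged apart from these two values:
# aws ec2 run-instances --image-id $ami ... --block-device-mapping "$blockDeviceMapping" ...
```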

Try AMI ‘ami-64c5cc1d’, in us-west-2. I found it in the discussion above. It was posted on June 12 so maybe it’s still available.

Unfortunately my P2 limit has only been increased to 1 for eu-west-1.

Will a t2.xlarge instance be sufficient for the purposes of this course’s assignments?

Hi, everyone!
I’m a new member here. I’m just starting this course today and already found my first challenge: how to deploy an AWS instance at a reduced cost.

So, I read this thread from beginning to end and still had no idea how to deploy the low-cost setup.
Then I found this blog post by Slav Ivanov; I think he’s also a learner here.
I tried to follow his guidelines but got stuck at instance deployment: my bids for spot instances always failed.

So I tried my own way, using the AWS console, which turned out to be easier for me. (I’m using us-west-2 / Oregon.)

  1. Follow Slav Ivanov’s guidelines up to 1.2 Virtual Private Cloud (VPC).
  2. After that, use the AWS console to deploy the instance: open the EC2 Dashboard and click Launch Instance.
  3. Click Community AMIs >> search “ami-64c5cc1d” (this AMI seems to have the DL frameworks and Anaconda preinstalled) or “ami-bc508adc” (the fast.ai default AMI) >> Select.
  4. Choose p2.xlarge >> Next.
  5. Network: choose the VPC we created by following the guidelines in the blog post above.
  6. Subnet: same as step 5.
  7. I enabled “CloudWatch detailed monitoring” to keep track of my usage, and also “Launch as EBS-optimized instance” >> Next.
  8. In Add Storage, input your preferred size (I’m using 30GB as others here suggested) and set the volume type to “GP2”. Make sure to uncheck Delete on Termination >> Next.
  9. Select an existing security group >> choose default >> Next.
  10. Launch.
  11. A pop-up will appear asking for a key pair. Select “Choose an existing key pair” >> “aws-key-fast-ai” (we created it by following the guidelines in the blog post above; if that option isn’t available, use “Create a new key pair”, and make sure to save the key pair by downloading it).
  12. Check the acknowledgment >> Launch Instances.

Wait a while, and you can start using it once it’s ready.
Happy learning!

1 Like

Awesome!

Hi, I just started the tutorial today and found this really helpful, so thanks for that. I’ll try it out tomorrow morning. Also, I wanted to ask: we have to terminate the instance after we’re done, right? So how do we start it again? Do we have to follow the whole procedure from step 2 again? Thanks in advance.

Hi, I tried using your method, but I got this error: “Volume of size 50GB is smaller than snapshot ‘snap-2b6c1a18’, expect size >= 128GB”. I used the ROOT volume type (default). Did you add a new EBS volume type for that?

P.S. It worked fine when I changed the size to 128GB.

Hi, Rishab!

It seems that the AMI originally provided by our lecturer requires the user to create a new instance with at least 128GB of storage.
I think I see why they require that much storage: the datasets. Some datasets from Kaggle or other sources require downloads of up to 50GB, and if you’re interested in playing around with other datasets, 128GB is quite adequate. But if you’d rather focus on learning from the course, I think 30GB is good enough.
So in this case, I think it would be better to use the modified AMI, which allows us to create an instance with smaller storage, requiring only 30GB at minimum.

Furthermore, to answer your previous question: it’s better to just stop the instance instead of terminating it. If you terminate it, you delete it, and you’ll need to create a new one again.

Anyway, I’m using the modified one with 30GB, because the datasets we’ll use here won’t be really huge, and we can find smaller datasets for practice.
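
The stop/start cycle can also be done from the CLI. A sketch (the instance ID is a placeholder, and the commands are printed rather than executed since they need live AWS credentials):

```shell
instanceId=i-0abc123   # placeholder; use your own instance's ID
lifecycle_cmds=$(cat <<EOF
aws ec2 stop-instances --instance-ids $instanceId
aws ec2 start-instances --instance-ids $instanceId
EOF
)
echo "$lifecycle_cmds"
# Note: a stopped instance stops compute billing, but the EBS volume is still
# billed, and the public IP usually changes on restart unless you use an Elastic IP.
```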

1 Like

Ahh, I got it. I started out using it with 128GB, so I think I’ll stick with that now. And I did mean stopping only, but if you stop a spot instance it gets terminated, and you have to request a new spot instance again. Thanks for the help.

Hey, can you just post that script? I’m getting a wrong AMI reference error.