I’m new to the ML field and to the other tools Jeremy mentions in the setup video. I wanted your guidance on whether I should go with Floydhub or follow the steps mentioned here to reduce the size of the volume. I believe both approaches aim to reduce the cost a learner has to pay to complete the assignments.
Further, Floydhub now offers only 2 hours in its trial version, rather than the 100 hours it used to offer. So, which should it be:
Floydhub, or
Reduce volume?
Also, how much time, on average, will it take to complete all the assignments?
You can probably expect to spend around seventy hours to complete all the assignments.
I personally went with a reduced volume of 30GB and it’s worked fine for me. When I need more space I attach a larger second volume. I haven’t used Floydhub though, so I can’t comment on it.
Thank you @z0k. I’m new to the cloud environment, so would reducing the volume eventually bring down the cost? Also, just to confirm, it took 70 hours to complete ALL the assignments?
I think the 70 hours figure I cited includes working through and experimenting with the lecture notebooks. Based on my first run through the MOOC, I believe it’s a good estimate for the time investment to get the most out of the course.
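To make the cost question above concrete, here is a rough back-of-the-envelope comparison of the two volume sizes discussed in this thread. The gp2 rate of $0.10 per GB-month is an assumption on my part (roughly the us-west-2 price around the time of this thread); check the current AWS pricing page before relying on it.

```python
# Rough EBS storage cost comparison for the two volume sizes discussed.
# ASSUMPTION: gp2 priced at about $0.10 per GB-month (us-west-2,
# approximately; verify against current AWS pricing).
GP2_PRICE_PER_GB_MONTH = 0.10  # assumed rate, USD

def monthly_ebs_cost(size_gb, price=GP2_PRICE_PER_GB_MONTH):
    """Monthly storage cost in USD for a gp2 volume of size_gb gigabytes."""
    return size_gb * price

full = monthly_ebs_cost(128)   # the default fast.ai AMI volume size
small = monthly_ebs_cost(30)   # the reduced size used in this thread

print(f"128GB: ${full:.2f}/month, 30GB: ${small:.2f}/month, "
      f"saving ${full - small:.2f}/month")
```

So at that assumed rate, the smaller volume saves roughly $10 a month in storage alone; the bigger cost driver is still the p2.xlarge instance-hours themselves.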
Hi, everyone!
I am a new member here. I’m just starting this course today and… already hit my first problem: how to deploy an AWS instance at a reduced cost.
So, I read this thread from beginning to end and still had no idea how to deploy the low-cost one.
Then I found this blog post by Slav Ivanov. I think he is also a learner here.
I tried to follow his guidelines but got stuck at instance deployment: my bids for spot instances always failed.
So I tried my own way, using the AWS console, which turned out to be easier for me. (I’m using us-west-2 / Oregon.)
I followed Slav Ivanov’s guidelines up to section 1.2, Virtual Private Cloud (VPC).
After this, I used the AWS console to deploy my instance:
1. Open the EC2 Dashboard.
2. Click Launch Instance.
3. Click Community AMIs >> search for “ami-64c5cc1d” (this AMI seems to have DL frameworks & Anaconda preinstalled) or “ami-bc508adc” (the fast.ai default AMI) >> Select.
4. Choose p2.xlarge >> Next.
5. Network: choose the VPC we created by following the guideline in Slav Ivanov’s blog post.
6. Subnet: choose a subnet in the VPC from step 5.
7. I enabled “CloudWatch detailed monitoring” to keep track of my usage. I also enabled “Launch as EBS-optimized instance” to make sure I get a lower price >> Next.
8. In Add Storage, enter your preferred size (I’m using 30GB, as others suggested here) and set the volume type to “GP2”. Make sure to uncheck Delete on termination >> Next.
9. Select an existing security group >> choose default >> Next.
10. Launch.
11. A popup will appear asking for a key pair. Select “Choose an existing key pair” >> “aws-key-fast-ai” (we created it by following the guideline in the blog post). If that option isn’t available, use “Create a new key pair” instead, and make sure to save the key pair by downloading it.
12. Check the acknowledgment >> Launch Instances.
Wait a little while, and you can start using the instance once it’s ready.
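For anyone who prefers the command line, the console steps above can be sketched as a single AWS CLI call. This is an untested sketch, not a verified recipe: `subnet-xxxx` and `sg-xxxx` are placeholders for your own VPC’s subnet and default security group IDs, and it assumes the AWS CLI is installed and configured with your credentials.

```shell
# Sketch of the console launch steps as one AWS CLI call (untested).
# Placeholders: replace subnet-xxxx and sg-xxxx with the IDs from your VPC.
# AMI: ami-bc508adc is the fast.ai default; ami-64c5cc1d also has
# DL frameworks & Anaconda preinstalled.
aws ec2 run-instances \
    --image-id ami-bc508adc \
    --instance-type p2.xlarge \
    --key-name aws-key-fast-ai \
    --subnet-id subnet-xxxx \
    --security-group-ids sg-xxxx \
    --monitoring Enabled=true \
    --ebs-optimized \
    --block-device-mappings \
      '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":30,"VolumeType":"gp2","DeleteOnTermination":false}}]'
```

The `--block-device-mappings` JSON mirrors the Add Storage step: a 30GB gp2 root volume with Delete on termination unchecked.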
Happy learning!
Hi, I just started the tutorial today. I found this really helpful, so thanks for that. I’ll try it out tomorrow morning. Also, I wanted to know: we’ll have to terminate the instance after we’re done, right? So how do we start it again? Do we have to follow the whole procedure from step 2 again? Thanks in advance.
Hi, I tried using your method, but I got this error: “Volume of size 50GB is smaller than snapshot ‘snap-2b6c1a18’, expect size >= 128GB”. I used the ROOT volume type (default). Did you add a new EBS volume for that?
P.S. It worked fine when I changed the size to 128GB.
It seems that the AMI originally provided by our lecturer requires the user to create new instances with at least 128GB of storage.
Well, I think I see why they required such a large amount of storage: the datasets. Some datasets from Kaggle or other sources require downloading up to 50GB, and if you are interested in playing around with other datasets, 128GB is quite adequate. But if you aren’t interested in that and want to focus on learning here, I think 30GB is good enough.
So, in this case, I think it would be better to use the modified AMI, which allows us to create an instance with smaller storage, as little as 30GB.
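The error message quoted above comes from a general EBS rule: a volume restored from an AMI’s snapshot must be at least as large as that snapshot, and the original fast.ai AMI was built from a 128GB snapshot. A tiny sketch of the check AWS effectively performs (the function name here is illustrative, not AWS’s actual API):

```python
# Why the 50GB launch failed: an EBS root volume restored from an AMI's
# snapshot must be at least as large as that snapshot. This is an
# illustrative sketch of the check, not AWS's real implementation.
def validate_volume_size(requested_gb, snapshot_gb):
    """Raise if the requested volume is smaller than the AMI's snapshot."""
    if requested_gb < snapshot_gb:
        raise ValueError(
            f"Volume of size {requested_gb}GB is smaller than snapshot, "
            f"expect size >= {snapshot_gb}GB"
        )
    return requested_gb

validate_volume_size(128, 128)   # fine: matches the original AMI's snapshot
try:
    validate_volume_size(50, 128)  # fails, like the error quoted above
except ValueError as e:
    print(e)
```

This is also why the modified AMI works with 30GB: it was built from a smaller snapshot, so the 30GB minimum applies instead.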
Furthermore, to answer your previous question: it would be better to just stop the instance instead of terminating it. If you terminate it, you literally delete it and need to create a new one again.
Anyway, I’m using the modified AMI with 30GB, because the datasets we’ll use here won’t be very large, and we can actually find smaller datasets to practice with.
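As a concrete illustration of the stop-vs-terminate distinction, these are the corresponding AWS CLI calls. The instance ID is a placeholder; substitute your own from the EC2 console.

```shell
# Stop: the instance and its EBS volume are preserved; you pay only for
# storage while it's stopped, and you can start it again later.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0

# Terminate: the instance is deleted for good. Volumes launched with
# "Delete on termination" unchecked survive, but the instance does not.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```

Note that this applies to on-demand instances launched as described above; spot instances behave differently, as the next reply mentions.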
Ahh, I got it. I started with 128GB, so I think I’ll stick with that now. Oh, alright. I did mean stopping, but if you stop a spot instance it gets terminated, and you have to request a new spot instance again. Thanks for the help.