Is it sensible to have both a T2 and P2 instance?

I was just curious: do people who use AWS frequently for this type of work have both an instance with a powerful GPU (to run models on) and an instance for developing / prototyping?

Alternatively, maybe I can just “get ready” completely locally if I have a pretty recent computer with enough memory.

Yes, it’s great to have both: run on a sample on the t2 (or better still, an m4), then once it’s all working, push your working code to git, switch to the p2, and run it there on the full data set.

BTW, the downside of getting ready locally is that your own computer may be quite different from the AWS instance, so you might need to make more changes, whereas your AWS instances can be set up with identical directory structures, installs, etc.
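
If it helps, here’s a minimal sketch of how you might carve out a small sample of the training data to prototype on before moving to the p2. The paths and sample size are just placeholders, not anything prescribed by the course:

```python
import os
import random
import shutil

# Hypothetical layout: full data in data/train, prototyping sample in data/sample/train
SRC = 'data/train'
DST = 'data/sample/train'
SAMPLE_SIZE = 200  # a couple of hundred files is plenty to check the code runs

os.makedirs(DST, exist_ok=True)

# Copy a random subset of files so the notebook runs in seconds on a t2
for fname in random.sample(os.listdir(SRC), SAMPLE_SIZE):
    shutil.copy(os.path.join(SRC, fname), os.path.join(DST, fname))
```

Once the notebook works end to end on the sample, commit and push, then run the same code on the p2 against the full directory.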

@Jeremy, but if I have enough power to run the tasks locally, why would I need to care about differences from the AWS instances if I can do everything locally?

You don’t.

Is there any danger in ‘changing instance type’ p2 --> t2 to fool around in a Jupyter notebook and then back to p2 to train on the full data set? I might just make an image/snapshot and test it, but wanted to ask first.

I apologize if this has been answered elsewhere, I tried looking around for it first.

I originally set up the t2 and the p2, but in the end I think I would have preferred just to use the p2.

Remember that every time you start and stop either instance, the billed time is rounded up to the next hour, so switching back and forth can get costly.

The biggest cost for me was actually the elastic block storage.

So the saving from keeping only half as much EBS (one volume instead of two) would outweigh the extra cost of the additional p2 run time, which is nowhere near double.
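
To make that concrete, here’s a rough back-of-the-envelope sketch. The prices and usage hours are assumptions (roughly the 2017 us-east-1 on-demand rates), so check the current pricing page before relying on them:

```python
# Assumed illustrative prices -- check current AWS pricing before relying on these
P2_PER_HOUR = 0.90       # p2.xlarge on-demand, $/hour
T2_PER_HOUR = 0.05       # t2.medium on-demand, $/hour (approx.)
EBS_PER_GB_MONTH = 0.10  # gp2 EBS, $/GB-month

VOLUME_GB = 128          # size of the course volume (assumed)
P2_ONLY_HOURS = 30       # p2 hours/month if everything runs on the p2
SPLIT_P2_HOURS = 20      # p2 hours/month when prototyping happens on a t2
SPLIT_T2_HOURS = 40      # t2 hours/month for prototyping

p2_only = P2_ONLY_HOURS * P2_PER_HOUR + VOLUME_GB * EBS_PER_GB_MONTH
split = (SPLIT_P2_HOURS * P2_PER_HOUR + SPLIT_T2_HOURS * T2_PER_HOUR
         + 2 * VOLUME_GB * EBS_PER_GB_MONTH)  # two volumes: one per instance

print(f'p2 only: ${p2_only:.2f}/month')   # ~ $39.80
print(f't2 + p2: ${split:.2f}/month')     # ~ $45.60
```

With numbers like these, the second EBS volume eats up more than what the cheaper t2 hours save.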

Of course, I didn’t have a lot of Linux, bash, etc. experience, so setting everything up twice did have a learning benefit for me…

YMMV.

I switch my instance type in the AWS console for just this use case. If your instance is using EBS (Elastic Block Store), the contents of the file system stay intact and it just looks like a reboot.
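
If you’d rather script the switch than click through the console, a minimal sketch with boto3 looks something like this. The instance ID, region, and target instance type are placeholders:

```python
import boto3

# Placeholders -- substitute your own instance ID and region
INSTANCE_ID = 'i-0123456789abcdef0'
ec2 = boto3.client('ec2', region_name='us-west-2')

# Stop the instance (the EBS volumes and their contents are preserved)
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[INSTANCE_ID])

# Switch between instance types by changing the instanceType attribute
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID,
                              InstanceType={'Value': 'p2.xlarge'})

# Start it back up; to the file system it just looks like a reboot
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter('instance_running').wait(InstanceIds=[INSTANCE_ID])
```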

AWS recently announced elastic GPUs where you can attach a GPU as a resource to other instance types. It’s in preview and I don’t know if it will be suitable for deep learning, but it’s a feature to watch.
https://aws.amazon.com/ec2/Elastic-GPUs/

cheers,
Dennis

To my understanding, you can’t change (convert) a t2 instance to a p2; you need to have both.
What is possible is converting from a t2.micro to another t2 type, e.g. t2.large.