I was just curious: do people who use AWS frequently for this type of work have both an instance with a powerful GPU (to run models on) and an instance for developing / getting ready / prototyping?
Alternatively, maybe I can just “get ready” completely locally if I have a fairly recent computer with enough memory.
Yes, it’s great to have both: run on a sample on the t2 (or better still, an m4), and once it’s all working, push your working code to git, then switch to the p2 and run it there on the full data set.
BTW, the downside of getting ready locally is that your own computer may differ quite a bit from the AWS instance, so you might need to make more changes. Your AWS instances, on the other hand, can be set up with identical directory structures, installs, etc.
Is there any danger in changing the instance type from p2 to t2 to fool around in a Jupyter notebook, and then back to p2 to train on the full data set? I might just make an image/snapshot and test it, but wanted to ask first.
I apologize if this has been answered elsewhere, I tried looking around for it first.
I switch my instance type in the AWS console for just this use case. If your instance is using EBS (Elastic Block Store), the contents of the file system stay intact and it just looks like a reboot.
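The same resize can be done from the AWS CLI instead of the console. A minimal sketch is below; the instance ID and target type are placeholders, and each command is wrapped in a `dry_run` function that only prints it, so you can see the sequence without touching your account. Drop the wrapper (and have credentials configured) to actually run it.

```shell
#!/usr/bin/env bash
# Sketch: resize a stopped, EBS-backed EC2 instance with the AWS CLI.
# INSTANCE_ID and NEW_TYPE are hypothetical -- substitute your own values.
INSTANCE_ID="i-0123456789abcdef0"   # placeholder instance ID
NEW_TYPE="p2.xlarge"                # placeholder target type

# Dry-run wrapper: prints each command instead of executing it.
dry_run() { echo "+ $*"; }

# 1. Stop the instance (the root EBS volume survives a stop, so the
#    file system comes back intact -- this is why it "looks like a reboot").
dry_run aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
dry_run aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# 2. Change the instance type while it is stopped.
dry_run aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --instance-type "Value=$NEW_TYPE"

# 3. Start it back up on the new hardware.
dry_run aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

Note the type can only be changed while the instance is stopped, and instance-store volumes (unlike EBS) would not survive the stop.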
AWS recently announced Elastic GPUs, which let you attach a GPU as a resource to other instance types. It’s in preview and I don’t know if it will be suitable for deep learning, but it’s a feature to watch. https://aws.amazon.com/ec2/Elastic-GPUs/
To my understanding, you can’t change (convert) a t2 instance to a p2.
You should have both.
What is possible is converting from a t2.micro to other t2 types, e.g. t2.large.