I just checked, and our fastai part1v2 AMI instance for p2 also works just fine on p3!
However, the conda version of pytorch that’s installed isn’t optimized for the p3, so you need to conda remove pytorch, and then install pytorch from source using the steps on their web site (which turns out to be very easy, thanks to the pytorch team’s awesome process and docs). After that, you can pip install torchvision to get that back.
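The steps above might look something like the following shell sketch. The exact build instructions live on the PyTorch site and can change; extra build dependencies (e.g. cmake, a CUDA-matched magma package) may be needed, so treat this as an outline rather than a definitive recipe:

```shell
# Remove the prebuilt conda package (not tuned for the p3's Volta GPUs)
conda remove pytorch

# Build pytorch from source, per the instructions on pytorch.org
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py install

# Reinstall torchvision afterwards
pip install torchvision
```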
I haven’t had a chance to benchmark properly yet, but it’s looking pretty good…
Hi @jeremy, hi all users of AWS p3, and hi everyone who would like to use an AWS p3,
Just to keep you informed: I asked AWS to let me use a p3 instance.
Their answer: no.
Their reasoning: you have to spend (more money) on p2.xlarge first, and then they will see about letting you use a p3 (and spend your money there).
The full answer from AWS :
I received an update from the Service Team and they were not able to grant the limit increase of this type due to the amount of Spend on your account.
In order to grant access to this type of instances, you would need to show that you have at least ran large instance types for a while.
I tried my very best to advocate for you as I understand that you needed the P3 instances for Fastai international program. My suggestion is that you continue to use the p2.xlarge instance that you do have access to in order to increase the spend on your account so that we can reassess the request for p3 instance types.
If you request a spot instance, though, they will let you have whatever you like. Last I checked, spot instances come from a different allocation pool, and the base access level is much more permissive.
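For anyone who hasn't requested one before, a spot request can be made from the AWS CLI roughly like this. The AMI ID, key name, and spot price below are placeholders, not real values; adjust them for your account and region:

```shell
# Request a one-time p3.2xlarge spot instance (IDs and names are placeholders)
aws ec2 request-spot-instances \
    --instance-count 1 \
    --type "one-time" \
    --spot-price "1.00" \
    --launch-specification '{
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "p3.2xlarge",
        "KeyName": "my-key"
    }'
```

Note that spot instances can be reclaimed by AWS at short notice, so keep your work saved to persistent storage.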