Note: The current “increase limits” documentation applies to EC2 and does not work with SageMaker. It will be updated once someone verifies the correct procedure.
Note that this is a forum wiki thread, so you all can edit this post to add/change/organize info to help make it better! To edit, click on the little pencil icon at the bottom of this post. Here’s a pic of what to look for:
In running the course v2, there was a stage asking to set the kernel to conda_fastai. Is this the case here too? I couldn’t find the option in the dropdown.
Should we really have nothing in the “start notebook” configuration ?
I think that’s specific to the in-person version of the course sorry mate. I can remove that part of my post if it isn’t relevant to a larger audience.
One small note - those look like instructions for requesting an EC2 service limit increase, as opposed to SageMaker specifically. I wonder if internally AWS treats the p2.xlarge instances differently from the ml.p2.xlarge?
Done. (PR)
I still can’t consistently get conda_fastai to appear as a kernel option.
In the v2 version, the setup instructions included all the setup in the “start” stage of the notebook; going to the CloudWatch logs would allow you to see when the startup script completed.
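For anyone who wants to check those logs from the command line, here is a sketch, assuming the AWS CLI v2 is configured and that lifecycle script output goes to the `/aws/sagemaker/NotebookInstances` log group (the notebook instance name `my-notebook` is just a placeholder):

```shell
# Tail the lifecycle-config log stream for a notebook instance so you can
# see when the startup script finishes (replace "my-notebook" with yours).
aws logs tail /aws/sagemaker/NotebookInstances \
  --log-stream-name-prefix my-notebook/LifecycleConfigOnStart \
  --follow
```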
Currently, I’ve been able to get conda_fastai to appear once (after waiting about 15 minutes), but I haven’t been able to replicate that success.
(Thanks)
it all works out fine, and I get the “Python 3” kernel that can import fastai.
If I try that in the “start” script (in the notebook config), it doesn’t work.
If instead of `activate` I try
`/home/ec2-user/anaconda3/bin/activate`
the failure is silent (otherwise it complains about not finding `activate`).
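To make the difference concrete, here is a minimal sketch of what a lifecycle “start” script attempting this might look like. This is an assumption, not a verified fix: the env name `fastai` and the check-then-source pattern are mine; the `activate` path is the one from the post above.

```shell
#!/bin/bash
# Hypothetical SageMaker lifecycle "start" script (unverified sketch).
# Bare "activate" fails because conda isn't on PATH in this context,
# so use the full path — but check it exists first, since sourcing a
# missing file can fail silently depending on how the script is run.
ACTIVATE=/home/ec2-user/anaconda3/bin/activate
if [ -f "$ACTIVATE" ]; then
    # "fastai" is an assumed env name — substitute your own.
    source "$ACTIVATE" fastai
else
    echo "activate not found at $ACTIVATE" >&2
fi
```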
I’ve sent a PR to update the docs, but before you change anything, can you get the original instructions to work once you’ve stopped the instance and then started it again?
(The point of failure is when you import fastai on the restarted notebook.)