I’ve been attempting to work through the first lesson. I have an older Asus gaming laptop with a GeForce GTX 660M GPU.
Going through the many posts on the forums, I’ve noticed that PyTorch does not support GPUs with a compute capability lower than 5.0, and the 660M is 3.0.
So, I’ve installed the fastai environment using the fastai-cpu yml file.
That being said, what do I do about the two Python statements that actually trigger the GPU warning:
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
The PyTorch package which is currently installed is pytorch-0.3.1-py36_cuda80.
There are references to removing the currently installed PyTorch and installing
pytorch-cpu as follows:
conda uninstall pytorch
conda install -c peterjc123 pytorch-cpu
and then doing a
conda env update
Would this be correct?
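If it helps, one quick sanity check after swapping packages (just a sketch, assuming torch imports at all in the new environment) is to ask PyTorch whether it sees CUDA:

```python
# Sanity check after installing pytorch-cpu: a CPU-only build should
# report no CUDA support instead of emitting the old-GPU warning.
try:
    import torch
    has_cuda = torch.cuda.is_available()
    print("torch", torch.__version__, "- CUDA available:", has_cuda)
except ImportError:
    # torch isn't importable yet; the environment swap didn't finish
    has_cuda = None
    print("torch is not importable in this environment")
```

If this prints `CUDA available: False`, the CPU-only package is the one being picked up.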
There are other solutions which get the PyTorch source and compile it
so that it can use the GPU, but I’m not sure whether that would work
on this machine given the low compute capability of 3.0.
Are there any other things that I should do to get this running?
I could run this on one my Linux boxes (preferred),
but I thought I would try the windows laptop with Nvidia card.
I just don’t want to spend another half day trying to make sure the environment is correct.
I just don’t have the bucks to use AWS or to buy new hardware.
I have extremely limited resources at this time.
Have you tried using Google Colab? I have run the first lesson on there with the free GPU enabled. Works great.
Hi Gavin. No. However I will check it out.
I have run a number of models on this laptop with no issues; it just takes a while.
The issue is that lesson 1 uses the CUDA build of PyTorch. There is a pytorch-cpu package,
but I need to know how to change the code, because the errors come from
the code after “arch=resnet34”, where it calls PyTorch. I don’t know the inner workings,
and at the moment I don’t have the time to wade through various pieces of code.
It tends to take the fun out of the high-level learning.
This is the error that is returned after running learn.fit(0.01, 2):
Found GPU0 GeForce GTX 660M which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
warnings.warn(old_gpu_warn % (d, name, major, capability))
RuntimeError Traceback (most recent call last)
2 data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
----> 3 learn = ConvLearner.pretrained(arch, data, precompute=True)
4 learn.fit(0.01, 2)
Yeah, I got the same thing recently on my DL laptop, using a GTX 870M. I guess downgrading to an older version of PyTorch would do it, or you can check out the PyTorch forums.
CPU is fine though, so it doesn’t affect testing small samples on a normal laptop.
Yes, even the 870M is below the supported compute capability.
I have two options: use pytorch-cpu (i.e., possibly go through the procedure I mentioned
above) or compile PyTorch from source as outlined in another forum post.
How do I get the code to work using the CPU-only option?
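I can’t say exactly where the fastai library makes this decision internally, but the usual pattern in that era of PyTorch (0.3.x, before torch.device existed) was to gate every .cuda() call on availability, so the same code runs on either backend. A minimal sketch, assuming torch is importable; the tensor here is just illustrative:

```python
try:
    import torch
except ImportError:
    torch = None  # environment without PyTorch

if torch is not None:
    use_gpu = torch.cuda.is_available()  # False on a CPU-only build

    x = torch.ones(2, 3)
    if use_gpu:
        x = x.cuda()  # move to the GPU only when one is usable

    # Reductions work identically on CPU and GPU tensors
    total = float(x.sum())
else:
    use_gpu, total = None, None
```

With a CPU-only install, `use_gpu` is simply False and everything stays on the CPU, which is presumably why the library can fall back without code changes once the CUDA build is removed.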
I followed my own advice. I just had to restart Anaconda.
It is running! It may take a while, but it is working.
Thank you for your support. I will most definitely check out Google Colab.
Despite the fact that this is taking ages on this laptop, this stuff is fun. Jeremy’s videos are great. He has a way of explaining this material that makes it very easy, and I like this method because I recall it is similar to the way I used to learn things so quickly many years ago.

When I began to look at a number of machine learning courses, they all seemed to take the bottom-up approach, and I would get bogged down in the math, losing the high-level perspective of how everything was related. Jeremy mentioned in the first video that this is the way one learns music. Yes, that’s why this feels so natural to me: I used to be a classical musician, and this is how I learned things then.

This just feels so good! I just want to do this stuff all day long. This is cool! He adds so many small, simple things which create so many light-bulb moments. I like the approach of doing this at such a high level while knowing there is so much power and complexity in the underlying algorithms. Thank you.
Hi Gavin. I’m going to be testing the Google Colab environment today with the first lesson.
Having no usable GPU locally, and the waiting that comes with it, is frustrating.
Again thank you for reminding me of that option!