Python + fastai + pytorch versions for 940MX

Hey guys,
I’m trying to set up my virtual environment on my laptop with a 940MX. I find it easier to write code and debug in PyCharm rather than in Google Colab (please enlighten me otherwise).

NVIDIA GeForce 940MX => CUDA 10.0

Which Python + fastai + PyTorch versions should I install?

I’m really losing my mind over this…

Thanks

Hi,

Can you explain what you did up to now and what problems you are getting?

Before you answer: I can tell you that recent PyTorch builds (1.6.0 and up) need a GPU with CUDA compute capability greater than 6.1, and your GPU has compute capability 5.0. However, if you want to venture into compiling your own PyTorch, support for your GPU (with CUDA 10.0 drivers) can be enabled automatically.

These numbers are something you need to keep track of, because NVIDIA keeps dropping support for old GPUs in recent CUDA releases. For example, CUDA 11.0 no longer supports GPUs with compute capability 3.5 or below. So even if someone tries to compile PyTorch against CUDA 11.0 for a Tesla GPU with compute capability 3.5, it will not work.

As of today, the latest PyTorch version is 1.8.1.

CUDA drivers are one thing and compute capability is another: every GPU has a specific compute capability that determines which CUDA driver versions it can use (e.g. CUDA 9.2 up to 10.0). PyTorch uses the CUDA driver to send computations to the GPU.
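To make the driver-vs-capability distinction concrete, here is a small illustrative lookup. The capability figures are from memory and the CUDA 11.0 cutoff follows the claim above, so double-check both against NVIDIA’s official compute-capability table:

```python
# Illustrative compute capabilities for a few GPUs (values from memory,
# double-check against NVIDIA's official compute-capability table).
COMPUTE_CAPABILITY = {
    "GeForce 940MX": 5.0,   # Maxwell laptop GPU, the one in this thread
    "Tesla K40":     3.5,   # Kepler, dropped by CUDA 11.0
    "GTX 1080":      6.1,   # Pascal
}

def supported_by_cuda11(gpu: str) -> bool:
    # CUDA 11.0 dropped compute capability 3.5 and below (per the post above).
    return COMPUTE_CAPABILITY[gpu] > 3.5

print(supported_by_cuda11("GeForce 940MX"))  # -> True
print(supported_by_cuda11("Tesla K40"))      # -> False
```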

Hey man,
Thank you so much for your helpful comment.
It was only after I posted this thread that I learnt about the Compute Capability (CC) value, and finally understood the meaning of 5.0 (which is what the 940MX has).
So I needed to know which CUDA version to install, and you explained well that for CC = 5.0 even CUDA 11 could work, but then the bottleneck becomes the PyTorch version (anything above 1.6.0). So if I understood correctly, I could install CUDA 11.2, but I’ll have to make sure my PyTorch is below 1.6.0.

So my question is: which PyTorch version should I install, then? And consequently, which fastai version?

I’m trying to make sure that the dependencies are working well all together.

I even tried to just run the first code here: https://colab.research.google.com/github/fastai/fastbook/blob/master/01_intro.ipynb
It worked well on Google Colab, but I need to test it in PyCharm on my laptop as well. It’s easier for me to understand how to set up my environment this way (because later I’ll work on another, much more powerful private GPU). But on my laptop I had “FakeLoaders” and “freeze_support()” errors that I could barely solve (at least the first one).

I managed to install these (Win10):
pip + Python 3.9 + fastai 2.20 (I think, the latest) + CUDA 11.2 + PyTorch 1.8.1+cpu
But when I tried to downgrade PyTorch (even) to version 1.2.0, it threw installation errors. I guess it’s a dependency issue.

Now I’ve deleted everything. I guess I need to find a way to install PyTorch 1.5.0? Which versions of the other packages should I use?

Thank you so much!

I think the current version of fastai needs PyTorch 1.7.0 or above (information from its GitHub), so in order to use PyTorch with your GPU you will need to find a build compiled for compute capability 5.0, or compile your own PyTorch.
There is a variable used by PyTorch’s setup.py when compiling, `TORCH_CUDA_ARCH_LIST`, that controls which compute capabilities the build targets. The default is 6.0 and above.
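As a sketch of how that variable is typically used: it is an environment variable read by PyTorch’s build, usually set in the shell before running `python setup.py install`. The snippet below only sets the variable, it does not run the build:

```python
import os

# Sketch: restrict a from-source PyTorch build to compute capability 5.0
# (the 940MX). This only sets the environment variable that setup.py reads;
# the actual build would then be run separately with `python setup.py install`.
os.environ["TORCH_CUDA_ARCH_LIST"] = "5.0"

print(os.environ["TORCH_CUDA_ARCH_LIST"])  # -> 5.0
```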


I even tried to just run the first code here: https://colab.research.google.com/github/fastai/fastbook/blob/master/01_intro.ipynb
It worked well on Google Colab, but I need to test it in PyCharm on my laptop as well. It’s easier for me to understand how to set up my environment this way (because later I’ll work on another, much more powerful private GPU). But on my laptop I had “FakeLoaders” and “freeze_support()” errors that I could barely solve (at least the first one).

About Google Colab… it works perfectly there because they have top-end GPUs.

I managed to install these (Win10):
pip + Python 3.9 + fastai 2.20 (I think, the latest) + CUDA 11.2 + PyTorch 1.8.1+cpu
But when I tried to downgrade PyTorch (even) to version 1.2.0, it threw installation errors. I guess it’s a dependency issue.

Yes, it’s definitely because of the dependencies.

If I use an older version of fastai, which one is the most relevant yet not too outdated? (So I wouldn’t miss important features.)

Plus: on Google Colab, when I type code, the IDE doesn’t show hints/auto-completions/suggestions/Alt+Enter options/colouring… How can I make it work as if it were PyCharm?
Thanks

Hi again,

I am sorry I am late to answer you; I had a problem with my operating system and had to reinstall everything. So… about fastai, I really cannot advise you much on working with a previous version. I think the more definitive approach is to go forward and learn how to build your own PyTorch so that your GPU will work.

About the Google Collab, we need to talk about two distinct things:

  • Jupyter Notebook in general has autocompletion;
  • PyCharm has IntelliSense.

There is no way to get IntelliSense in Jupyter or Colab (Google’s customized Jupyter). IntelliSense is a very complex mechanism that detects the first characters you type and suggests what you might want to write for the entire line before you even finish typing. It works even better with typed languages like Java or C#. If Jupyter and Colab ever do this, it will take ages for them to work like that. So the answer is: you can’t do much about it.

Because Colab and Jupyter are web applications, there may be latency before the autocompletion menu appears. PyCharm, on the other hand, is a native desktop application that doesn’t go through web JavaScript layers to interact with your machine, so the latency is much lower than with the browser’s interface layers.

So in my opinion, the best way for you to work on your machine would be to learn how to build your own PyTorch, or to give up on the GPU and use just the CPU. That way you can pre-install the CPU package of PyTorch and then install the most recent fastai version. Your PyTorch will then rely only on your computer’s CPU and memory.


Another alternative: spend time creating the basic algorithm on your computer (CPU mode), then place variables at strategic points in your code that you can later switch to GPU mode.

When you see that your code is working, you could move it all to Colab and switch “the variable” to GPU mode. That way you only use Colab to train the final model, while working all the time on your own machine.
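The “switch variable” idea can be sketched like this. This is a minimal pure-Python version with a hypothetical `pick_device` helper; in real PyTorch code the returned string would feed `torch.device(...)`:

```python
# Minimal sketch of the "switch variable" pattern: every device-dependent
# decision goes through one flag, so moving from the laptop (CPU) to
# Colab (GPU) means flipping a single line.
USE_GPU = False  # set to True once the code runs on Colab

def pick_device(use_gpu: bool) -> str:
    # With PyTorch this would be:
    # torch.device("cuda" if use_gpu and torch.cuda.is_available() else "cpu")
    return "cuda" if use_gpu else "cpu"

device = pick_device(USE_GPU)
print(device)  # -> cpu
```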

Hey man,
Thank you so much for your explanation. It really enlightened me!
I’ve decided to try Google Colab or Jupyter and share my code progress, so I can easily get feedback from other members.
