Fastai on Windows. How it worked for me

EDIT:
The method proposed below (by Andrew Chisholm) worked for me, and it is a much better approach than the one I initially took. It also turns out that this approach lets me train (using the GPU) while commuting. The GPU is throttled, so it’s slower than when I’m in the office, but still much quicker than using the CPU.


OLD POST:

Okay, this is my first time on this forum, but since Jeremy advised using it a lot, here goes:

My reason
I’ve got a laptop with a 1050, dual-booting Windows (primary) and Linux (Arch). I started the course on the Arch boot, but I ran into a major issue while commuting, when I’m dependent on the battery: as soon as a training session was started with learn.fit_one_cycle(1), the laptop would power down. I believe this is due to the high power consumption of the GPU. I thought better of trying to implement some form of GPU throttling on Linux, since training while travelling doesn’t really make sense anyway, but I still thought I’d give it a go on Windows 10. Scouting this forum and others, I failed to find an approach that seemed to work for me. The post by Jeremy on it was, I believe, meant for version 0.7, and gave me an error on import fastai: ImportError: cannot import name ‘as_tensor’.

My solution
However, the following very easy approach worked for me:

  1. Install Ubuntu (via the Windows Subsystem for Linux)
  2. While in the Ubuntu terminal: pip install fastai.

That’s it. No further need for symlinks or anything. I do have to mention, however, that I had previously installed Ubuntu and am not sure what else has been installed in the meantime; I do remember installing the nvidia-driver-390 package.
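To double-check that the install actually worked, here is the kind of quick sanity check you can run from the same terminal (just a minimal sketch that confirms the packages import):

    # Minimal sanity check after `pip install fastai` inside the Ubuntu terminal:
    # confirms fastai and its torch dependency import and reports their versions.
    import torch
    import fastai

    print("fastai:", fastai.__version__)
    print("torch:", torch.__version__)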

Some results
Having said that, using any of the cloud services will probably help a lot during the course. A single cycle for me during lesson 1 of the 2019 course took roughly 23 minutes versus <30 seconds in the lecture, while on the external power supply. Also, at the end I had a loss of ~0.071, as opposed to ~0.061 in the lecture. I’m not sure where this difference came from, so please chime in if someone knows. Reducing the batch size to 16 lowered the time to ~17 minutes per cycle.
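For reference, here is roughly how I passed the smaller batch size (a sketch based on the lesson 1 pets pipeline; path_img, fnames and pat are defined earlier in the notebook, so treat them as placeholders):

    # Sketch of lowering the batch size in the lesson 1 notebook (fastai v1).
    # path_img, fnames and pat come from the notebook itself.
    from fastai.vision import *

    data = ImageDataBunch.from_name_re(path_img, fnames, pat,
                                       ds_tfms=get_transforms(), size=224,
                                       bs=16)  # smaller batch size
    learn = cnn_learner(data, models.resnet34, metrics=error_rate)
    learn.fit_one_cycle(1)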

When switching to battery, the estimated time increased to slightly over an hour, which I didn’t allow to finish, partly because I’m not sure my battery would actually hold out that long. Still, I hope this may help some others.

Hi!

About the difference in training time between you and Jeremy: it’s because of your GPU cards. You’re on a GTX 1050, while I believe Jeremy is using a 1080 Ti during the lectures, which is the best card of that generation.
The difference in final loss is not due to the different cards. There’s some randomness in the training, so the results vary a bit from one run to another. That randomness mainly comes from the random transforms and the batch shuffling, as far as I know.
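If you want to narrow that run-to-run variation down, seeding the random number generators before training helps (a rough sketch; with the random transforms and cuDNN non-determinism the results still won’t match exactly):

    # Seed the RNGs behind the batch shuffling and random transforms (sketch).
    import random
    import numpy as np
    import torch

    random.seed(42)
    np.random.seed(42)
    torch.manual_seed(42)
    torch.cuda.manual_seed_all(42)
    torch.backends.cudnn.deterministic = True  # trades a bit of speed for repeatability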

Finally, about the skyrocketing time while on battery power: it’s probably because, when on battery, your laptop automatically throttles the GPU to save energy.

Most importantly: GPUs do not work when using the version of Linux built into Windows (“Windows Subsystem for Linux”). So you are actually doing everything on your CPU.
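An easy way to confirm that from inside the WSL terminal (a minimal check, nothing fastai-specific):

    # If this prints False under the Windows Subsystem for Linux,
    # training is falling back to the CPU.
    import torch

    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))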

3 Likes

Maybe I misunderstood, but I believe kvdeurzen is working on Ubuntu, as he didn’t get fastai to work on Windows.

Interesting to know that you can’t use a GPU on Windows!

Ah great, that explains a lot, certainly the huge increase in training time. I’ll spend some more time to see if I can get it working with conda. In the meantime this will allow me to follow along with the lectures.

Oh, so I misunderstood? You’re running fastai on Windows?

1 Like

I’m sorry if I was unclear. I am indeed running it on Windows on a local notebook. Getting everything working was a breeze on Arch Linux following the basic steps (also showing much better training times; I can’t remember exactly what they were).

I do understand that trying this on Windows is probably not the best way forward (especially since I’m already running Linux as well), but it may have some benefits for some. For me the sole reason was the laptop crashing while using the GPU without a power connection. The fact that the GPU isn’t used on Ubuntu ON Windows explains why that was solved, but it defeats the purpose :slight_smile:.

Related to GPU support for the Windows Subsystem for Linux: no development is planned at the moment, but it’s a much requested feature.

1 Like

I’m using fast.ai 1.0 on a Surface Book 2 with Windows 10 (NVIDIA GTX 1060) and managed to get the GPU working after hours of messing around. I could not get it to work with the conda env and had to use pip in the end. I won’t go into all the issues I hit, but the main issue was that the torch 1.0 dependency wouldn’t install.

The key to getting it working was to completely uninstall an older copy of Anaconda and install the latest, ensuring that it was running Python 3.7.
Then, from the Anaconda prompt, run:
pip install https://download.pytorch.org/whl/cu90/torch-1.0.0-cp37-cp37m-win_amd64.whl
pip install torchvision

to install PyTorch 1.0 built for CUDA 9.0.
After that, running pip install fastai works like a charm and I can run most of course-v3 (with a couple of bug fixes to the path regex in lesson 1).
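For anyone hitting the same lesson 1 issue: the notebook’s filename regex assumes forward slashes, which Windows paths don’t use. Something along these lines is the kind of tweak needed (the exact pattern in your copy of the notebook may differ):

    # Lesson 1 pulls the pet breed out of the file name with a regex.
    # The notebook's pattern assumes forward slashes:
    #   pat = r'/([^/]+)_\d+.jpg$'
    # Accepting either separator makes it work with Windows-style paths too:
    pat = r'[\\/]([^\\/]+)_\d+.jpg$'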

6 Likes

Hi Chiz,

Your approach worked for me :+1:. Thanks for sharing. I’ve updated the original post to refer to your reply.

1 Like

Those should be fixed in master (and next release) btw.

1 Like

Hi, I spent a fair bit of time getting course-v3 running on a Win10 laptop. Please refer to Set up course v3 on Windows.

Adding the link to this thread as well so others can find it.

2 Likes

Thank you for your concise instructions! I have Windows 10 with a GTX 1070. I installed everything as Chiz suggested, but when I run lesson 1’s learn.fit_one_cycle(4), it only uses the CPU. I have CUDA and cuDNN installed and verified. Have you run into the same issue?

@treyqi I did have that issue at one stage; I’m not sure what the root cause was, but once I uninstalled everything and did the above it worked. It’s also important that you are running Python 3.7 in your conda environment.

Thanks a lot for your quick reply. I confirmed I am using Python 3.7, but it still only runs on the CPU. I think it may be related to the num_workers = 0 I have to set in ImageDataBunch.from_name_re().
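For what it’s worth, my understanding is that num_workers only affects the CPU-side data loading, so I’ve been checking where the model actually sits with something like this (sketch; learn is the learner from the notebook):

    # num_workers only controls the CPU-side DataLoader processes, so it shouldn't
    # by itself force training onto the CPU. Checking where things actually run:
    import torch

    print(torch.cuda.is_available())              # does PyTorch see the GPU at all?
    print(next(learn.model.parameters()).device)  # where the model's weights live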

It’s really ambitious to run these lessons on a battery. I’m on a plugged-in rig. I don’t use Ubuntu or any other environment; straight to Windows. My GPU runs, but I had difficulty convincing myself it does. There is a Windows app called Speccy that shows the temperature of the graphics GPU going from a little over room temperature (30 C) to 80 C, which is around 175 F, when I hit the “learn…” line of code. I don’t know how Task Manager works, and it never shows GPU utilization over 29%, even when the CPU is at 100%.
I nicknamed my rig Wimpy. It’s an Intel i5 and a GT 1030 (only 2 GB of memory). I’ve been learning a lot about the batch size and learning rate parameters in order to not blow up for lack of memory.
My diagnosis is that the state of the art for batteries is not where you need it to be yet. Get a plugged-in setup at home. When you hit the long-running bits of code, go get a beer.

1 Like