Fastai v1 install issues thread

This has helped me overcome my install issue in my local Ubuntu/RTX 5000 setup. All working now!

Saved me as well on Ubuntu 18.04. I kept getting an older version of fastai until I downgraded my Anaconda version.

Hi everyone,
I am a beginner just starting the fastai course, so I am getting my system ready to implement the lessons. I am using Windows 10 and the Anaconda Prompt to install the packages fastai v1 depends on. I was following this link, but for some reason I am unable to execute this command (attached picture), even though I have tried many variations of it.
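
For reference, the fastai v1 docs give a conda install along these lines (the environment name is just a placeholder, and the exact command in the guide I am following may differ):

conda create -n fastai python=3.7
conda activate fastai
conda install -c pytorch -c fastai fastai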

If anyone can help me with this, I will be very thankful.

Regards
Ayesha Sarwar

You will need to run Windows 10 in a VirtualBox environment from the Linux distro of your choice; you will not be able to access CUDA from within a virtual Linux machine. Having just spent a week tweaking a gamer's laptop into a lean, mean Linux learning machine, I can help you get your CUDA up and running from there. I am currently using Manjaro, which is an Arch Linux variant, but I can also help you with a Debian distro.

Hi @scullycasey, thanks for your post.
My request for a Linux VM on a Windows host was due to the fact that I'm using 3D applications that are mostly available on Windows.
What I've been trying to do over the last few weeks is switch completely to Linux, since I discovered that only one of those applications is really Windows-only with no Linux equivalent, and I'm testing different possible substitutes.
Installing Windows in a VM on Linux would mean stealing resources needed by the 3D or ML applications I want to run at the same time, so I prefer the full Linux path right now.
If I fail to make the switch (having in the meantime messed up my Linux install by upgrading to KDE Neon in an unsafe, unsupported way), I will for sure explore the Windows VM path, so thanks for your offer.

Davide, good call on the decision to change your primary OS from Windows to Linux. You will be a much happier programmer! Cheers, and good luck.

If you run Linux it is technically possible to run a virtualised Windows that has access to one of your GPUs. You cannot share a GPU, at least not for machine learning; there is some support for 3D acceleration in virtualised systems using the host GPU, but I would not expect it to work for 3D rendering, as it's just an emulated 3D video driver, not full GPU compute.
But there is what's called PCIe passthrough, where the host uses one GPU (or runs headless) and full control of the other GPU is given to the guest. It's used for cloud GPUs, so it is well-developed technology; it's just not well supported at the consumer level, so it is quite finicky in terms of hardware support and may not be easy to configure. nVidia are also adamant about not supporting it (and have allegedly even updated drivers to detect and prevent it), as they do not want consumer cards used in the cloud.
If you are trying to switch to Linux anyway you may want to give it a go (but I wouldn't switch on the assumption it will work). A search for "PCIe passthrough GeForce" should find guides for consumer cards, and a quick IOMMU check like the one below tells you whether your board is even a candidate. On the Windows side, PCIe passthrough is only supported in Windows Server, so if you're running desktop Windows then you have to have a Linux host and a Windows guest.
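
Most of the finickiness comes down to IOMMU grouping; a quick way to see what your board and firmware expose (this assumes the IOMMU is already enabled with intel_iommu=on or amd_iommu=on on the kernel command line):

# list IOMMU groups and the devices in them; if nothing is listed,
# passthrough won't work on this setup
for d in /sys/kernel/iommu_groups/*/devices/*; do
  group=$(basename "$(dirname "$(dirname "$d")")")
  echo "IOMMU group $group: $(lspci -nns "$(basename "$d")")"
done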

Yes, I researched PCIe passthrough before giving up on the Windows path, but it's currently supported only by VMware and Virtuozzo in enterprise versions that cost way too much for personal use. If, as you added, Nvidia is also blocking the feature on consumer cards, it's even more of a dead end.
So far Linux is the easier path. I have already switched all the applications apart from that one, and I will switch as soon as I discover which other Linux 3D application provides the features I need (there are only a few of them; it's just very complex and time consuming to work out how to achieve a complex feature that already took a substantial research effort in the 3D app I'm currently using on Windows).

The VMware support is, I think, only in ESXi, which is a completely different thing from VMware Workstation and not what you'd want (it's for the cloud). PCIe passthrough is supported in KVM and Xen, the two main Linux virtualisation options for desktop use, both open source, and that's what all the guides for home users are for…
While not officially supported by nVidia, it is possible (I think their official position is just that they don't support it; the blocking claim is, I think, disputed, but they certainly aren't going out of their way to support it, and I'm pretty sure it's against their licence to use consumer cards in cloud situations, so there's no effort from cloud providers either). You just need to change an option in the virtualisation config to prevent the detection.
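
With KVM/libvirt, for example, the usual trick as I understand it is to hide the hypervisor from the guest in the domain XML (the VM name below is just a placeholder):

# edit the guest definition
virsh edit win10
# then, inside <features>, add something along these lines so the GeForce
# driver doesn't notice it is running under a hypervisor:
#   <kvm>
#     <hidden state='on'/>
#   </kvm>
#   <hyperv>
#     <vendor_id state='on' value='1234567890ab'/>
#   </hyperv>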

You could, if you can afford it, just set up a second Linux box and put one of the GPUs in it. All 3D apps should have a well-supported Linux renderer even if the app itself isn't as well supported on Linux. Offloading rendering to a server should also be well supported in the app, as it's a common use. So you could use the Linux box for both DL and rendering.

I'm not doing rendering but 3D mesh optimization, so I need an interactive application with access to 3D hardware acceleration.
As I wrote, I didn't want to steal system resources by virtualizing Windows plus the Windows app, considering I'm also working on heavy models that need a lot of RAM and VRAM, so I will go for the full Linux path; I'm just a step away.

Ah cool.
The overhead of virtualisation is generally pretty low (the main things being the need for properly para-virtualised drivers to achieve good hardware speeds and a slight overhead on kernel access). With PCIe passthrough there should be basically zero overhead for GPU work (it is how every cloud GPU works, after all, though the non-consumer cards are designed for that, with drivers expecting it).
But yeah, native Linux is obviously easier (though I personally prefer Windows as a desktop system, remoting into Linux). The virtualisation is perhaps worth a try if you still find yourself wanting the occasional high-demand Windows app (for more standard apps, Wine or virtualisation without PCIe passthrough should work great).

Can you say more about this? Is it possible to use CUDA in a Linux VM in Windows, with a GTX card? If so, how do you get around the issue that it normally requires a "professional" card?

Unfortunately not, as from what I can see PCIe passthrough is only supported in the server version of Hyper-V, and similarly in ESXi, the server version of VMware. So no, not with a physical install of desktop Windows. The best you could do is install Linux and have a virtualised Windows. Or install ESXi or Windows Server, which both have free versions: ESXi without the advanced management of vSphere, and the Hyper-V Server edition of Windows, which has none of the big server stuff but can host VMs. Though just installing Linux would probably be easier than either of these options.
While I can conceive that Microsoft might enable PCIe passthrough in desktop Windows for security/stability reasons at some point, running the whole main system as a VM on a thin hypervisor, I haven't heard of any real moves that way (the closest is Windows Sandbox, which creates a virtualised copy of your host system). So barring something like that, no luck I think.

While not supported by nVidia, from what I've read it is possible. Some fiddling around with the GPU BIOS (extracting it from your card to add to Xen/KVM) is needed, as they don't distribute a virtualisation-ready one for consumer cards, but it apparently does work. There's not really much needed from the card/driver end; you aren't virtualising the card/driver as in the multi-tenant server variants, you're really just virtualising the PCIe bus. Hence motherboard compatibility is a big issue: it depends on the vagaries of how the manufacturer set things up in terms of the PCIe hierarchy/grouping.
There are (or maybe were) some checks for virtualisation in the consumer drivers, but settings in the virtualisation software can prevent this (I think nVidia's position was that it wasn't intentional blocking, but as they don't officially support virtualisation on consumer cards it wasn't a priority to fix).
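
If you do need to extract the GPU BIOS, on Linux it can usually be dumped straight out of sysfs (the PCI address below is a placeholder for your card's, and it generally needs root with the card not in use):

# enable reading the ROM, dump it, then disable it again
echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
cat /sys/bus/pci/devices/0000:01:00.0/rom > vbios.rom
echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom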

I encountered the following error trying to install fastai on a remote server:

$ conda install -c fastai fastai

Collecting package metadata (repodata.json): failed

UnavailableInvalidChannel: The channel is not accessible or is invalid.
  channel name: fastai
  channel url: https://conda.anaconda.org/fastai
  error code: 403

You will need to adjust your conda configuration to proceed.
Use `conda config --show channels` to view your configuration's current state,
and use `conda config --show-sources` to view config file locations.

conda update works fine.
Any suggestions on how to fix it?
Thank you.

The channel worked fine for me just now. The Anaconda servers can occasionally be a little flaky: retry a couple of times, wait a couple of minutes, retry again. Issues tend to resolve quickly.
Also, check that networking is properly set up on the remote server.
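
A few things worth trying from the remote server itself (the curl URL is just the channel's repodata, to rule out a proxy or firewall problem):

# see which channels are configured and which files set them
conda config --show channels
conda config --show-sources
# check the fastai channel is reachable at all from this machine
curl -I https://conda.anaconda.org/fastai/noarch/repodata.json
# then retry the install
conda install -c fastai fastai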

About Windows, I think the best option will arrive when Microsoft enables GPU access in WSL 2 or a future release.
https://devblogs.microsoft.com/commandline/wsl-2-post-build-faq/
It's in the pipeline, so I will wait and see; Windows is still my primary OS until I'm able to switch.

Interesting. Though they may mean something more like the GPU support Hyper-V currently has without PCIe passthrough, which presents a virtualised GPU driver and does not support CUDA. Otherwise I think you'd need either specific driver support from nVidia or PCIe passthrough (and the various compatibility issues there). I don't think even Linux virtualisation does CUDA without PCIe passthrough (or it's through the specifically designed multi-tenant professional cards).
Maybe though, would be nice.

I am unable to start my Salamander.ai server. I get the message 'We tried starting your server but failed because: "no available servers"'.

Hi,

I'm trying to submit a job to Google Cloud Platform (see AI Platform and Packaging) for a fastai project. It terminates during installation with the error:

ERROR: Package 'trainer' requires a different Python: 2.7.9 not in '>=3.7'

I know fastai requires Python 3, but I can't seem to get Python 3 from the setup.py:

from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['torch>=1.3.0', 'fastai>=1.0.59']

setup(
    name='trainer',
    python_requires='>=3.7',
    version='1.0',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='My training application package.'
)
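
In case it is relevant: my understanding is that on AI Platform the interpreter is picked by the runtime and the --python-version flag on the submit command rather than by python_requires in setup.py. A rough sketch of the submit command I mean (job name, region, bucket and module path are placeholders):

gcloud ai-platform jobs submit training my_fastai_job \
    --package-path trainer/ \
    --module-name trainer.task \
    --runtime-version 1.15 \
    --python-version 3.7 \
    --region us-central1 \
    --job-dir gs://my-bucket/fastai-job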

Thanks

On macOS Catalina with fastai==1.0.60 I get the following error:
ImportError: cannot import name 'PILLOW_VERSION' from 'PIL'

I suppose this is related to the latest version of pillow not having PILLOW_VERSION anymore.

Workaround with:

pip uninstall pillow
pip install 'pillow>=6.0.0,<7.0.0'
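
A quick sanity check after repinning (same environment, just confirming the import works again):

python -c "import PIL, fastai; print(PIL.__version__, fastai.__version__)"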

