Unofficial Setup thread (Local, AWS)

:frowning:

okay, will look into it, thank you for the heads up.

can i install conda with 3.6 and not 3.7? will the other packages work properly?

I’ve successfully installed the NVIDIA 410 driver (which supports CUDA 10.0) on Ubuntu 16.04:
NVIDIA-SMI 410.48 Driver Version: 410.48

It seems to work properly:

import torch; print(torch.cuda.device_count());

1

import fastai; print(fastai.__version__)

1.0.6.dev0
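If either of those commands fails with an ImportError, a quick stdlib-only check (a sketch; the package names are just the two used above) tells you whether the packages are importable at all before you start debugging GPU issues:

```python
import importlib.util

def check_installed(pkgs=("torch", "fastai")):
    """Return a dict saying whether each package is importable in this env."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in pkgs}

print(check_installed())
```

If a package shows up as missing here, the problem is the install (or the wrong conda env being active), not CUDA.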

python -c 'import torch; print(torch.cuda.device_count()); '

For the above command I am getting count = 0. Should I proceed with the installation? Conda has successfully installed cuda92, and my drivers, nvidia-384.130, are up and running.
Is it OK to use the nvidia-384.120 drivers? I have an NVIDIA 1050.

Hello Hasib,

If I run Python 3.7 I run into the following error, because `async` became a reserved keyword in 3.7. You should use 3.6 instead.

if cuda: a = to_gpu(a, async=True)
^
SyntaxError: invalid syntax

For more information see this thread.
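For context, the error above comes from Python itself, not fastai: `async` was promoted to a reserved keyword in 3.7, so `to_gpu(a, async=True)` is rejected by the parser before anything runs (newer PyTorch versions renamed the argument to `non_blocking`). You can confirm the keyword change from the standard library:

```python
import keyword

# "async" and "await" are reserved keywords from Python 3.7 onward, so any
# call like f(async=True) is a SyntaxError at parse time. On Python 3.6
# this prints False; on 3.7+ it prints True.
print(keyword.iskeyword("async"))
```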

Hi,

In general I personally prefer to use NEWEST_VERSION − 1 of system libraries or drivers, since full support for the newest version across different libraries is often unavailable or still in beta, unless you specifically need a feature only the newest version includes. I have just created an environment on Ubuntu 16.04 LTS with:
CUDA 9.0
Anaconda (Python 3.6)
NVIDIA driver 384
PyTorch 0.4.1

and tried to run the dogs-vs-cats example from fastai 1.0.6, which seems to be working without any problems :slight_smile:

Cheers

Michal


I was able to install pytorch-nightly with the NVIDIA 384.130 drivers. I had CUDA 9.0 preinstalled on my system, so I just ran `conda install -c pytorch pytorch-nightly`, and that resolved the error with torch.cuda.device_count(). It turns out each CUDA version requires a minimum NVIDIA driver version; I found this table, which I came across on Stack Overflow, to be helpful:

CUDA 10.0: 410.48
CUDA  9.2: 396.xx
CUDA  9.1: 390.xx (update)
CUDA  9.0: 384.xx
CUDA  8.0: 375.xx (GA2)
CUDA  8.0: 367.4x
CUDA  7.5: 352.xx
CUDA  7.0: 346.xx
CUDA  6.5: 340.xx
CUDA  6.0: 331.xx
CUDA  5.5: 319.xx
CUDA  5.0: 304.xx
CUDA  4.2: 295.41
CUDA  4.1: 285.05.33
CUDA  4.0: 270.41.19
CUDA  3.2: 260.19.26
CUDA  3.1: 256.40
CUDA  3.0: 195.36.15
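If it helps, the recent rows of that table can be encoded as a small lookup; this is just an illustrative sketch (the dict and function names are mine, not any official API), keyed on the driver's major version number:

```python
# Minimum driver major version per CUDA release, transcribed from the
# table above (treat as approximate).
MIN_DRIVER = {
    "10.0": 410, "9.2": 396, "9.1": 390, "9.0": 384,
    "8.0": 367, "7.5": 352, "7.0": 346,
}

def max_cuda_for_driver(driver_major):
    """Return the newest CUDA version (from the table) the driver supports."""
    supported = [v for v, d in MIN_DRIVER.items() if driver_major >= d]
    if not supported:
        return None
    # Compare versions numerically so "10.0" sorts above "9.2".
    return max(supported, key=lambda v: tuple(map(int, v.split("."))))

print(max_cuda_for_driver(384))  # the 384.xx series tops out at CUDA 9.0
```

This matches the experience above: a 384.130 driver can run CUDA 9.0, but the conda `cuda92` package needs at least a 396.xx driver.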

I ran the commands in this thread with Python 3.7 and had no issues. But I hit this async problem while using TensorFlow, and found it is not directly compatible, so I made a separate virtual environment for it.

Actually, for DL you do need a proper machine with a recent GPU (10xx or newer), otherwise it's quite slow, and backward compatibility only arrives bit by bit…
(I also have a 960MX at my disposal, but it sucks…)


Hello, will the pip version of fastai 1.x be updated as frequently as the git repo? If so, perhaps it will be easier to work on Google Colab.

I have Ubuntu 16 with fastai 0.7, which I will keep in its own separate conda env. After finishing part 1 v3, I hope to study fastai part 2 v2, which still needs the older fastai.

Do you still recommend upgrading to Ubuntu 18 if I want to keep using the older fastai too?

I followed all the steps in this guide for a local install on Ubuntu 18.04 and everything went smoothly. All the diagnostic steps provide the correct outputs, as expected. However, when I try to launch the Jupyter Notebook for dogs_cats.ipynb in the fastai examples folder, I get the following error when running the second cell:

NameError: name 'untar_data' is not defined

Not sure why imports aren’t picking up the function above. For the record, I conda installed fastai and the version is 1.0.5 with the Python version being 3.7. When I try manually importing fastai and mucking around, it looks like untar_data isn’t anywhere to be found in the installed lib. Any suggestions?
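One generic way to confirm it is the installed build that's missing the function, rather than an import-path problem, is a small stdlib probe (a sketch; `module_has` is a hypothetical helper, not part of fastai):

```python
import importlib

def module_has(name, attr):
    """True if the installed module `name` exposes `attr`;
    False if either the module or the attribute is missing."""
    try:
        return hasattr(importlib.import_module(name), attr)
    except ImportError:
        return False

# e.g. module_has("fastai", "untar_data") returning False while
# module_has("fastai", "__version__") returns True would point at
# an incomplete build rather than a broken environment.
```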

Try a developer install (see the readme).


Thanks @jeremy! The developer install did the trick. I guess the current builds on conda/PyPI have an issue that is causing this problem?


Just pushed a new version that’ll work fine.

I already have a Paperspace account with a machine built using their fast.ai template (v0.7). I submitted a Paperspace help request a few days ago to ask them if they would have a v1.0 template anytime soon, but they haven’t responded. Anyone have any extra information? Shall I assume that I’ll just need to create a new machine and install fast.ai v1.0 myself? Thanks.


Hey venkat, don’t be afraid :wink: , I am using exactly your XPS15 laptop and have installed dual boot with no problem. But yes, you will have to disable secure boot functionality. I followed mainly this guide (but did not implement everything like making windows harddrive run as a vm in linux): https://github.com/rcasero/doc/wiki/Ubuntu-linux-on-Dell-XPS-15-(9560).

The other alternative, if you are not afraid of Docker, is to use that. So if you are planning on continuing to use Windows (which I don't really, even though it's still there; switching around is a hassle…), that might be the better alternative, but it has its own "learning overhead". Docker on Windows works quite fine these days…
There are posts in the forums, and Dockerfiles and Docker images with fastai that you can use out of the box…
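For anyone going the Docker route, a minimal Dockerfile sketch might look like this; the base image tag and the unpinned package versions are assumptions, so pin whatever versions you actually need:

```dockerfile
# Hypothetical minimal fastai image; adjust the tag to match your CUDA setup.
FROM pytorch/pytorch:latest

RUN pip install --no-cache-dir fastai jupyter

WORKDIR /workspace
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--allow-root", "--no-browser"]
```

You would still need `--gpus` (or nvidia-docker on older setups) at `docker run` time for the container to see the GPU.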

Hello community, does fast.ai v1.x.x have support for running on Kaggle, so that we can make submissions to a particular competition?

Thanks for the encouragement. :slight_smile: How is the battery management with regard to juggling the on-board and dedicated GPUs?