Fastai v1 install issues thread

(Peter Veber) #261

Thanks Stas,

You are right. I’m running my notebook from /courses/dl1 so your solution should help.

Thanks a lot for super quick response.

Have a great day

1 Like

(Bobak Farzin) #262

For a PR, I pulled the latest developer version of fastai with the following commands in a clean environment. I then hit two problems which I think I solved, but I want to post here so others know what to do if they see this.

$conda create --name fai_v1_dev python=3.7
$conda activate fai_v1_dev
$pip install -e ".[dev]"

I do have pip versions and base conda versions of a subset of the libraries installed.

When I do this install, I get a numpy version problem.

First, I cannot run the demo tabular.ipynb without it throwing:
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'

The resolution is to:

  • Roll back the version of Bottleneck with:
    pip install Bottleneck==1.2.0


  • Upgrade numpy to the pre-release version:
    pip install numpy==1.16.0rc1

Both resolve my issue and allow me to run the tabular.ipynb notebook with no errors.
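When chasing this kind of version clash, a quick diagnostic is to print which versions of the suspect packages the active environment actually resolves. A minimal sketch (the package list is just the three involved above; this is not a fastai utility):

```python
import importlib

# Record the version of each suspect package in the active environment.
versions = {}
for name in ("numpy", "bottleneck", "pandas"):
    try:
        mod = importlib.import_module(name)
        versions[name] = getattr(mod, "__version__", "unknown")
    except ImportError as exc:
        versions[name] = "import failed: {}".format(exc)

for name, ver in versions.items():
    print(name, ver)
```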

Show install information:

=== Software === 
python        : 3.7.1
fastai        : 1.0.40.dev0
fastprogress  : 0.1.18
torch         : 1.0.0
nvidia driver : 396.51
torch cuda    : 9.0.176 / is available
torch cudnn   : 7401 / is enabled

=== Hardware === 
nvidia gpus   : 2
torch devices : 2
  - gpu0      : 12194MB | TITAN Xp
  - gpu1      : 12196MB | TITAN Xp

=== Environment === 
platform      : Linux-4.15.0-32-generic-x86_64-with-debian-stretch-sid
distro        : Ubuntu 16.04 Xenial Xerus
conda env     : fai_v1_dev
python        : /home/farzin/anaconda3/envs/fai_v1_dev/bin/python
sys.path      : 

Fri Jan 11 14:32:56 2019    
| NVIDIA-SMI 396.51                 Driver Version: 396.51                    |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  TITAN Xp            Off  | 00000000:03:00.0 Off |                  N/A |
| 30%   47C    P8    21W / 250W |     12MiB / 12194MiB |      0%      Default |
|   1  TITAN Xp            Off  | 00000000:04:00.0  On |                  N/A |
| 23%   36C    P8    17W / 250W |    979MiB / 12196MiB |      3%      Default |
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|    1      1433      G   /usr/lib/xorg/Xorg                           661MiB |
|    1      2423      G   compiz                                       302MiB |
|    1     22300      G   /usr/lib/firefox/firefox                       3MiB |


Developer chat
(Stas Bekman) #263

I was able to reproduce this problem following your instructions. Mine was installed via conda and had no such problems.

So, as you identified, pip installs incompatible versions of numpy and bottleneck; conda’s versions of these packages don’t have that problem. The problem happens during pandas’ import.

There is a third workaround:

pip install git+

I filed a bug report:

It looks like a new, long-overdue release is being tested:

So let’s wait a little bit and perhaps it’ll get resolved automatically with the new release.

If, however, it gets delayed and/or the new numpy is still not released, we will pin bottleneck to 1.2.0 in the pip dependencies for the next fastai release.

As you can see from the bug report, I managed to reduce the problem to:

python -c "import pandas"
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'
1 Like

(Devon Kaberna) #264

Hi everyone,

Forgive the noob question here, but I am purposely working under 1.0.36.post1. I am using bash-git-prompt. When I am in the fastai folder, I can confirm I am using version 1.0.36.post1. But when I create a new folder for Kaggle competitions within the fastai folder and then cd into it, I check the version and see that I am now using 1.0.40.

All of this is done within a virtual environment. Any ideas on how I can make sure I am still under 1.0.36.post1? Apologies if I am supposed to be asking this question under a different thread than this one.


Developer chat
(Stas Bekman) #265

You probably have fastai installed in two places (and perhaps you have a symlink that gets the wrong version).

See this

and certainly follow the instructions here:

Use locate fastai/ if you’re on unix to find all the instances; then you will see whether you have it installed twice. You probably have some symlinks pointing to a second install, or something similar. I can’t tell without a proper report.
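One quick way to see which copy of a package an interpreter would actually import is to ask the import machinery directly. A sketch, shown with the stdlib json module so it runs anywhere (substitute "fastai" to check your own install):

```python
import importlib.util

# Ask the import machinery where a module would be loaded from,
# without importing it. Substitute "fastai" to check your own install.
spec = importlib.util.find_spec("json")
print(spec.origin)  # the file that "import json" would load
```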

p.s. if you’re working with the git checkout, there was originally also a git cross-over in the 1.0.36 branch (with master HEAD) that has been fixed since then, so if you’re using the 1.0.36 branch, you need to update it.

1 Like

(Stas Bekman) split this topic #266

A post was merged into an existing topic: Misc issues


(Stas Bekman) pinned #267

(Stas Bekman) split this topic #268

2 posts were merged into an existing topic: Misc issues


(Stas Bekman) split this topic #269

4 posts were merged into an existing topic: Performance Improvement Through Faster Software Components


(Masaki Kozuki) #270

Weird dependency error regarding the installation of dataclasses.

No module named 'dataclasses'

This is how I installed fastai in Docker

# Python Anaconda default.
RUN wget -q -O ~/ && \
    /bin/bash ~/ -b -p /opt/conda && \
    rm ~/
# Install PyTorch V1.
ENV PATH /opt/conda/bin:$PATH
RUN conda install -y python=$PYTHON_VERSION && \
    conda install -c conda-forge imbalanced-learn && \
    conda install -y -c conda-forge feather-format && \
    conda install -y -c conda-forge jupyterlab && \
    conda install -y -c conda-forge jupyter_contrib_nbextensions && \
    jupyter contrib nbextension install --system
RUN conda install -y -c pytorch -c fastai fastai==1.0.42
RUN pip install --no-cache-dir -U pip && \
    pip install --no-cache-dir -U dpkt scapy protobuf

Could anyone help me?



I am having trouble upgrading to the latest version of fastai on Windows 10 using conda. I haven’t been able to update since 1.0.38. Running conda update fastai returns “all packages already installed”, even though there have been several new versions.

The only thing I can see is that at version 1.0.39, the build changed from py_1 to 1. Could this be causing the issue? I tried removing fastai and reinstalling, but it just installed 1.0.38 again. Specifying the latest version explicitly doesn’t work either.

Here’s the output of conda search fastai. The new versions are there, but it won’t install.
Name     Version    Build   Channel
fastai   1.0.37     py_1    fastai
fastai   1.0.38     py_1    fastai
fastai   1.0.39     1       fastai
fastai   1.0.40     1       fastai
fastai   1.0.41     1       fastai
fastai   1.0.42     1       fastai

1 Like

(Stas Bekman) #272

Conda is tricky that way: it tells you nothing about the conflicts and then just quits. Please see:


(Stas Bekman) #273

fastai doesn’t install dataclasses for py37, but it’s needed for py36. Most likely you installed fastai under py37, but then some other package downgraded python to py36, so you’re now running fastai under py36 even though it was installed for py37?

It’d help if you were to follow the guidelines for support:
You’re not saying when you encountered the error. Please put yourself in the shoes of the person who has no idea what you did, and then you will know what information to share :wink:

We currently don’t use conda-forge, and all package dependencies are tested against the anaconda channel, so it’s possible that something is off in your case, since you’re on the fringe. But again, I have no way of telling.

And of course, you can just install dataclasses yourself.
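For reference, dataclasses is in the stdlib from py37 onward and available as a pip/conda backport on py36; a minimal sanity check after installing it might be:

```python
from dataclasses import dataclass, fields

# Tiny smoke test that the dataclasses module is importable and works.
@dataclass
class Point:
    x: int = 0
    y: int = 0

p = Point(1, 2)
print(p)  # Point(x=1, y=2)
print([f.name for f in fields(p)])  # ['x', 'y']
```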


(Masaki Kozuki) #274

Sorry, I failed to clarify the python version. I installed python 3.6 after the conda installation (so what you wrote is not true for my case). Then I installed fastai via conda install -c pytorch -c fastai fastai.

The No module named 'dataclasses' error occurred after I ran the cell with from fastai.tabular import *.

I created the environment with nvidia-docker. There, I used anaconda to follow the instructions on the github repo as closely as possible.

1 Like

(Stas Bekman) #275

Thank you for providing these details, @crcrpar. Indeed there was a bug in the conda package setup for py36, should be resolved in fastai-1.0.43 when that is released. Until then, please add conda install dataclasses to your docker build script.

1 Like

(Masaki Kozuki) #276

Thank you.

I’m glad to hear that, because I thought conda would automatically install dataclasses if the python version was 3.6, IIRC.

p.s. To get familiar with it, I decided to migrate to python 3.7.

1 Like

(Stas Bekman) #277

Yeah, conda is not very flexible in this situation. pip allows defining a dynamic package dependency which gets sorted out during install; conda resolves dependencies only during the package build, and then they’re set in stone. So a conditional dependency is only doable if separate py36 and py37 packages are built. Since fastai is a noarch package, this variant unfortunately doesn’t work. But luckily there is no harm in installing the dataclasses backport under py37, so all is good.
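To illustrate the pip side of this: a conditional dependency is declared with an environment marker, something like dataclasses; python_version < "3.7" (assumed wording, not copied from fastai’s setup.py), and pip evaluates it at install time. The runtime equivalent of that marker is just:

```python
import sys

# Runtime equivalent of the pip environment marker:
#   dataclasses; python_version < "3.7"
needs_backport = sys.version_info < (3, 7)
print("needs dataclasses backport:", needs_backport)
```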

1 Like


I get the following error:
DistributionNotFound: The ‘fastprogress>=0.1.18’ distribution was not found and is required by the application

when running
from import *
or other import operations in a Jupyter Notebook.

It runs fine when I run it in plain python.

I installed it through pip
fastai 1.0.42
fastprogress 0.1.18

Python version
3.6.3 (default, Sep 2 2018, 00:38:05)

I am running Ubuntu 16.04.

I should also note that this might be related to my pyenv setup, which I am using. However, I have not had any problems with the pyenv-jupyter combination so far.
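When imports succeed in plain python but fail in Jupyter, the kernel is usually running a different interpreter than the shell. A quick check (not fastai-specific) is to compare these two values in the notebook and in the terminal:

```python
import sys

# If these paths differ between the notebook and a plain `python` session,
# the Jupyter kernel lives in a different (e.g. pyenv) environment.
print(sys.executable)  # the interpreter that is actually running
print(sys.prefix)      # the root of the environment it belongs to
```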

Best regards



Okay, the problem seems to be my pyenv environment. Problem solved (not quite, but the problem is on my side).


(Devon Kaberna) #280


What seems to be best practice (if there even is one) for upgrading to a new version of an Nvidia driver, or a new Pytorch version? I’ve read the documentation on, but couldn’t find guidance. What I’m asking is: do you wait a while, so as to make sure it’s stable, or do you upgrade as soon as a new version is released?