Fastai v1 install issues thread

(Sylvain Le Groux) #83

It appears that issue was raised in another thread: the Jupyter notebook keyerror thread. Editing ~/.jupyter/ worked for me.


I am having issues installing on Windows 8.1. I have built pytorch, but I cannot install torchvision-nightly. This is the output from

conda install -c fastai torchvision-nightly


Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  • torchvision-nightly
  • pytorch-nightly

Current channels:
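When conda reports PackagesNotFoundError, it can help to confirm which channels it is actually searching and whether the package exists there for your platform. A hedged sketch (the commands are standard conda; the channel names are taken from this thread and may have changed since):

```shell
# Show the channels conda searches by default
conda config --show channels

# Check whether the packages exist on the relevant channels for this platform
conda search -c fastai torchvision-nightly
conda search -c pytorch pytorch-nightly
```

If the search comes back empty for your platform (as it did for Windows at the time), building from source is the remaining option.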

I have therefore downloaded torchvision from

and installed it following their instructions with

python install
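For reference, building torchvision from a source checkout at the time typically followed the standard setuptools flow. A minimal sketch (the repository URL and the setup.py step are assumptions based on the usual instructions, not quoted from this post):

```shell
# Clone the torchvision repository and install from source
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```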

I can then install fastai and run the line below without any problems.

from fastai import *


from import *

is not found and results in the following error

ModuleNotFoundError: No module named ''

I am a beginner to Python, so I am not sure what I am missing.

Additionally I get the following output

=== Software ===
python version : 3.6.6
fastai version : 1.0.6
torch version : 1.0.0a0+7edfe11
torch cuda ver : 9.2
torch cuda is : available
torch cudnn ver : 7301
torch cudnn is : enabled

=== Hardware ===
torch available : 1

  • gpu0 : GeForce GTX 980M

=== Environment ===
platform : Windows-8.1-6.3.9600-SP0
conda env : test_fastai
python : D:\c_progs\Anaconda3\envs\test_fastai\python.exe
sys.path :


python -c "import fastai; fastai.show_install(0)"

(Stas Bekman) #85

You're a brave soul, @cudawarped. I think you might be the first one here to build that on Windows. It's good to know that it works. Was it a straightforward follow of the pytorch docs, or did you need some extra know-how? I'm asking since perhaps you could start a fastai v1 Windows 8 thread and share your tips there with others.

And yes, you did have to build torchvision from source, as you figured out. I will update the docs.

I then can install fastai and run the below without any problems.
fastai version : 1.0.6

This tells me you installed the prepackaged fastai (via pip or conda).

from fastai import *
from import *
is not found and results in the following error
ModuleNotFoundError: No module named ''

Do the following work?

  1. import torchvision
    testing that torchvision works.
  2. from fastai.text import *
    testing that other parts of fastai work
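Both checks can be run at once with a small import smoke test; a sketch (the helper function is just an illustration, not part of fastai):

```python
import importlib


def can_import(name):
    """Return True if the module can be imported, False if it is missing."""
    try:
        importlib.import_module(name)
        return True
    except ModuleNotFoundError:
        return False


# Check the two suspects from the steps above
for mod in ("torchvision", "fastai.text"):
    status = "OK" if can_import(mod) else "MISSING"
    print(f"{mod}: {status}")
```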

Update: I added that to the docs; let me know if you have any suggested additions to that section.


Hi, thank you for your response. Yes, Windows is unfortunate, but that's the OS I develop for, and I would like to try the DL for Coders part one course without having to dual boot my machine or pay to use the cloud.

Regarding pytorch, I compiled with Visual Studio Community 2017, and the steps were exactly as detailed on the website.

I had to install the 14.11 toolset because it wasn't installed by default, but this is detailed in the pytorch instructions. I didn't time the build, but it must have taken at least 3 hours. I tried to use ninja, but without any success.
Once installed, I had a missing DLL error when importing torch, but this was fixed with

conda install -c defaults intel-openmp -f

I think my installation may be missing a few components, because some of the tests are failing; specifically, test\ fails with the error below.

RuntimeError: No CUDA implementation of 'gesdd'. Install MAGMA and rebuild cutorch (at D:\repos\pytorch\aten\src\thc\generic/

My plan is to address that once I get fastai v1 to "work" (obviously, if torch is slightly broken, it can't work 100%).
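If you do want the CUDA 'gesdd' path, the usual fix at the time was to install a MAGMA build matching your CUDA version before rebuilding pytorch. A hedged sketch (the exact package name on the pytorch channel is an assumption based on the CUDA 9.2 install shown earlier in this post):

```shell
# MAGMA built against CUDA 9.2, from the pytorch conda channel;
# pytorch then needs to be rebuilt from source so it picks MAGMA up
conda install -c pytorch magma-cuda92
```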

I can successfully do

  1. import torchvision

  2. from fastai.text import *
    gives me

ModuleNotFoundError Traceback (most recent call last)
----> 1 from fastai.text import *

~\fastai\courses\dl1\fastai\ in
----> 1 from .core import *
2 from .learner import *
3 from .lm_rnn import *
4 from import Sampler
5 import spacy

~\fastai\courses\dl1\fastai\ in
----> 1 from .imports import *
2 from .torch_imports import *
4 def sum_geom(a,r,n): return a*n if r==1 else math.ceil(a*(1-r**n)/(1-r))

~\fastai\courses\dl1\fastai\ in
1 from IPython.lib.deepreload import reload as dreload
----> 2 import PIL, os, numpy as np, math, collections, threading, json, bcolz, random, scipy, cv2
3 import pandas as pd, pickle, sys, itertools, string, sys, re, datetime, time, shutil, copy
4 import seaborn as sns, matplotlib
5 import IPython, graphviz, sklearn_pandas, sklearn, warnings, pdb

ModuleNotFoundError: No module named ‘bcolz’
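If the only missing piece is bcolz, installing it directly usually resolves this particular error. A one-liner, assuming pip targets the same environment your notebook kernel runs in:

```shell
pip install bcolz
```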


Most likely this is something on my end, but on Ubuntu 18.04, I’m installing with pip3:

pip3 install torch_nightly -f
pip3 install fastai
pip3 uninstall fastai

Then I download the fastai library:

git clone
cd fastai
pip3 install -e .[dev]

The weird thing is that after I install fastai from the repo, pip3 gets broken (!):

$ pip3
Traceback (most recent call last):
  File "/usr/bin/pip3", line 9, in <module>
    from pip import main
ImportError: cannot import name 'main'

I have actually fully uninstalled and reinstalled pip3, including all of its packages, and confirmed that I can reproduce this on my machine. Anyone else seeing this?
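A common cause of this traceback is that the /usr/bin/pip3 wrapper script was generated for the old pip API and still does `from pip import main`, which pip 10+ removed. Invoking pip as a module bypasses the stale wrapper; a sketch:

```shell
# Bypass the stale wrapper script entirely
python3 -m pip --version

# Optionally regenerate the wrapper by reinstalling pip for the current user
# (left commented out here because it modifies the environment):
# python3 -m pip install --user --force-reinstall pip
```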

(Kaspar Lund) #88

Well done!

  • I spent a day without success following the same steps as you. It fails when building caffe2 (one of the registered issues on pytorch).
  • I am on Windows 10, using CUDA 10 for a 1080 Ti.
  • I used the Community version of Visual Studio and activated the 14.11 toolset as described on the Microsoft support pages.

(Stas Bekman) #89

OK, we are already hitting enough platform-specific problems to warrant continuing this discussion in a dedicated thread. I suggest that so that others like yourself will have an easier time finding all the Windows 8-related fastai-v1 issues in one place.

So would you kindly start a new thread and share everything you shared so far, including errors (very important, as people will search for those)? Then reply to this post with a link to where the discussion has moved.

And afterwards, see if this helps: the "Error: No module named 'bcolz', but bcolz is already installed" thread.

Of course, most of the Windows issues are the same for fastai 0.7 and 1.0, since they mostly have nothing to do with fastai itself, which is just pure python code, but with the building blocks, which are platform-specific at times. The only difference is that fastai 1.0 has slightly different prerequisites, and we have been trying hard to remove any dependencies on problematic packages.

Thank you.

(Stas Bekman) #90

Installing the dev dependencies triggered an update to your pip.

Google is your friend (well, sometimes) - please always try to search for the error on google first.

This seems to be an issue with pyenv.

But I found a thread that compiles all potential reasons.

This one suggests a conflict between ubuntu pip and manually installed pip.


You are right; in retrospect, this is more of a gripe with whatever Python is doing, and with the fact that pip can just be overwritten even when I'm not using sudo. I thought it might be due to something in the fastai install, but I suppose not. My apologies. To be fair, I did read all of those prior to posting here, and thought it was odd that only the final fastai installation step (from the git repo) nuked my pip3, but if this is the norm, so be it.

(Stas Bekman) #92

No damage done; it's often hard to tell what triggered a problem. I personally have a habit of instantly pasting the error into google, and often that resolves it right away.

Moreover, we do want to hear about dependency issues like what you have reported, so that we could document them to make the installation process as pain-free as possible. It could go into

Therefore, once you find a solution to your problem, please post it here in reply to your message.

Thank you, @jamesp


I do see what you’re talking about in the installation log:

Collecting pip>=18.1 (from fastai==1.0.7.dev0)
  Using cached

I will see if I can figure out how to safely upgrade to 18.1+ before installing fastai. Alternatively, if fastai doesn't really require 18.1+, relaxing that pin might be useful.
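One way to isolate the pip upgrade from the fastai install is to do it explicitly as a separate step beforehand. A sketch (the --user flag is an assumption about wanting to avoid sudo on a system python):

```shell
# Upgrade pip on its own first, so the editable fastai install
# does not pull in a pip upgrade as a side effect
python3 -m pip install --user --upgrade "pip>=18.1"

# Then proceed with the editable install
python3 -m pip install -e .[dev]
```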

And from

dev_requirements = { 'dev' : to_list("""
""") }

So it's defined as a requirement in the dev build. If that's really a requirement, I'll see what I can figure out about the Ubuntu 18 mess. Thanks!

(Stas Bekman) #94

It's unlikely we will downgrade this requirement. It's a foundational tool, and older versions have their own problems. If it has a problem, it needs to be fixed, not worked around. If your situation is not covered by the 3 links I posted earlier, open a new issue with pip on the same site those links are on.

And yes, isolating the upgrade from fastai is a very smart approach.

(Stas Bekman) #95

I’m on ubuntu 18 and I don’t have any issue with it.

Perhaps switch to using a conda env? This is what I use. Something like:

I think it's an important recommendation, because you don't want to mess with system-wide packages. Earlier I did, and broke some things. This time around I don't touch the system-wide python at all.

I will add this recommendation to the docs.
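For reference, a minimal conda setup along those lines might look like this (the env name, python version, and channels are assumptions based on the install docs of the time, not quoted from the post):

```shell
# Create an isolated env so the system-wide python is never touched
conda create -y -n fastai python=3.6
conda activate fastai

# Install fastai and its pinned dependencies from its own channel
conda install -y -c pytorch -c fastai fastai
```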


I think you are exactly right. In my case, it's probably less that fastai is messing with my system than that my system has messed-up settings, and fastai's installation revealed the brokenness. I say this because when I follow the docs on a fresh VM without any modification, I have no issues. Thanks for your pointers re: conda.


Have you tried with CUDA 9.2?

(Stas Bekman) #98

Just another gentle nudge to send you guys off to create a Windows thread and relocate this discussion there, if you will :wink: It will help others.

(John Richmond) #99

I have just upgraded to v1 on a Paperspace machine and ran into "CUDA not available" problems. In my case, I got it working with just the latter part of these instructions, that is:
sudo apt-get --purge remove nvidia-387
sudo apt-get -f install
sudo reboot now

After this it was working OK, and after changing the symlinks I was able to use the old library notebooks as well.

Many thanks

(Stas Bekman) #100

FYI, conda now has spacy-2.0.16, so it should be good now.


Hi, I am not sure whether my current issue is with Windows or with the version of fastai I am using. Can you confirm that the installed version 1.0.6 is built from commit 62091ed651fb8a07587fd6e3da805415bb6fd8e0?

I ask because I am getting an error with the data loader. If I run through the dogs_cats notebook

%reload_ext autoreload
%autoreload 2
from fastai import *
from import *
path = untar_data(URLs.DOGS)
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), tfms=imagenet_norm, size=224)

and then call

next(iter(data.train_dl))
I get the following error

PicklingError Traceback (most recent call last)
----> 1 next(iter(data.train_dl))

d:\ssdbackup\dev\repos\fastai_v1\fastai\ in __iter__(self)
50 def __iter__(self):
51 "Process and returns items from DataLoader."
--> 52 for b in self.dl: yield self.proc_batch(b)
54 def one_batch(self)->Collection[Tensor]:

D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\torch\utils\data\ in __iter__(self)
818 def __iter__(self):
--> 819 return _DataLoaderIter(self)
821 def __len__(self):

D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\torch\utils\data\ in __init__(self, loader)
558 # before it starts, and __del__ tries to join but will get:
559 # AssertionError: can only join a started process.
--> 560 w.start()
561 self.index_queues.append(index_queue)
562 self.workers.append(w)

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\ in start(self)
103 'daemonic processes are not allowed to have children'
104 _cleanup()
--> 105 self._popen = self._Popen(self)
106 self._sentinel = self._popen.sentinel
107 # Avoid a refcycle if the target function holds an indirect

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\ in _Popen(process_obj)
221 @staticmethod
222 def _Popen(process_obj):
--> 223 return _default_context.get_context().Process._Popen(process_obj)
225 class DefaultContext(BaseContext):

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\ in _Popen(process_obj)
320 def _Popen(process_obj):
321 from .popen_spawn_win32 import Popen
--> 322 return Popen(process_obj)
324 class SpawnContext(BaseContext):

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\ in __init__(self, process_obj)
63 try:
64 reduction.dump(prep_data, to_child)
--> 65 reduction.dump(process_obj, to_child)
66 finally:
67 set_spawning_popen(None)

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\ in dump(obj, file, protocol)
58 def dump(obj, file, protocol=None):
59 '''Replacement for pickle.dump() using ForkingPickler.'''
--> 60 ForkingPickler(file, protocol).dump(obj)
62 #

PicklingError: Can't pickle <function crop_pad at 0x000000A0A8516378>: it's not the same object as

I am posting here because I don't know whether this is a Windows install error or a bug in the commit I installed from.


This is specific to Windows and multiprocessing. If you put num_workers=0 in your DataBunch, you should be fine.
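Concretely, the workaround is just an extra keyword argument on the DataBunch creation from the post above. Shown as a fragment (it needs fastai and the downloaded data, so it is a sketch rather than a runnable script):

```python
# num_workers=0 keeps data loading in the main process, avoiding
# Windows' spawn-based worker processes and the transform pickling
# that fails in the traceback above
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(),
                                  size=224, num_workers=0)
next(iter(data.train_dl))  # should now yield a batch without PicklingError
```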