It appears that issue was raised in another thread: the Jupyter notebook KeyError thread.
Editing ~/.jupyter/jupyter_notebook_config.py worked for me.
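For anyone landing here later, the change was along these lines. This is only a sketch: I am assuming the KeyError in question is the common allow_remote_access one, so adjust the setting to whatever your error actually names.

# ~/.jupyter/jupyter_notebook_config.py
# Generate the file first with `jupyter notebook --generate-config` if it does not exist.
# Assumption: the KeyError being worked around is the allow_remote_access one.
c = get_config()  # provided by Jupyter when it loads this file
c.NotebookApp.allow_remote_access = True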
I am having issues installing on Windows 8.1. I have built pytorch but I cannot install torchvision-nightly. The output from
conda install -c fastai torchvision-nightly
is
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- torchvision-nightly
- pytorch-nightly
Current channels:
  - https://conda.anaconda.org/fastai/win-64
  - https://conda.anaconda.org/fastai/noarch
  - conda-forge/win-64
  - conda-forge/noarch
  - main/win-64
  - main/noarch
  - r/win-64
  - r/noarch
  - msys2/win-64
  - msys2/noarch
I have therefore downloaded the torchvision source and installed it following their instructions with
python setup.py install
I then can install fastai and run the below without any problems.
from fastai import *
however
from fastai.vision import *
is not found and results in the following error
ModuleNotFoundError: No module named 'fastai.vision'
I am a beginner with Python, so I am not sure what I am missing.
Additionally I get the following output
=== Software ===
python version : 3.6.6
fastai version : 1.0.6
torch version : 1.0.0a0+7edfe11
torch cuda ver : 9.2
torch cuda is : available
torch cudnn ver : 7301
torch cudnn is : enabled
=== Hardware ===
torch available : 1
- gpu0 : GeForce GTX 980M
=== Environment ===
platform : Windows-8.1-6.3.9600-SP0
conda env : test_fastai
python : D:\c_progs\Anaconda3\envs\test_fastai\python.exe
sys.path :
D:\c_progs\Anaconda3\envs\test_fastai\python36.zip
D:\c_progs\Anaconda3\envs\test_fastai\DLLs
D:\c_progs\Anaconda3\envs\test_fastai\lib
D:\c_progs\Anaconda3\envs\test_fastai
C:\Users\b8\AppData\Roaming\Python\Python36\site-packages
D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages
D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\torchvision-0.2.1-py3.6.egg
D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\win32
D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\win32\lib
D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\Pythonwin
D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\IPython\extensions
from
python -c "import fastai; fastai.show_install(0)"
You're a brave soul, @cudawarped. I think you might be the first one over here to build that on Windows. It's good to know that it works. Was it a straightforward follow-the-pytorch-docs affair, or did you need some extra know-how? I'm asking because perhaps you could start a fastai-v1 Windows 8 thread and share your tips there with others.
And yes, you did have to build torchvision from source, as you figured out. I will update the docs.
I then can install fastai and run the below without any problems.
fastai version : 1.0.6
This tells me you installed the prepackaged fastai (pip or conda)
from fastai import *
however
from fastai.vision import *
is not found and results in the following error
ModuleNotFoundError: No module named 'fastai.vision'
Do the following work?

import torchvision

(testing that torchvision works)

from fastai.text import *

(testing that other parts of fastai work)
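If it helps, here is a small sanity-check snippet of my own (not from the official instructions) that shows which fastai the interpreter picks up and whether the vision sub-package is actually on disk next to it:

import fastai, os
print(fastai.__version__)   # should be 1.0.x for the new library
print(fastai.__file__)      # should point into site-packages, not a course checkout
vision_dir = os.path.join(os.path.dirname(fastai.__file__), 'vision')
print(os.path.isdir(vision_dir))  # False would explain the ModuleNotFoundError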
Update: I added this to the README (fastai/README.md at master · fastai/fastai · GitHub); let me know if you suggest any additions to that section.
Hi, thank you for your response. Yes, Windows is unfortunate; however, it's the OS I develop for, and I would like to try the DL for Coders part one course without having to dual-boot my machine or pay to use the cloud.
Regarding pytorch, I compiled it with Visual Studio Community 2017, and the steps were exactly as detailed on the website:
GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
I had to install the 14.11 toolset because it wasn't installed by default, but this is detailed in the pytorch instructions. I didn't time the build, but it must have taken at least 3 hours. I tried to use ninja, but without any success.
Once installed, I had a missing-DLL error when importing torch, but this was fixed with
conda install -c defaults intel-openmp -f
I think my installation may be missing a few components, because some of the tests fail; specifically, test\test_torch.py fails with the error below.
RuntimeError: No CUDA implementation of 'gesdd'. Install MAGMA and rebuild cutorch (http://icl.cs.utk.edu/magma/) at D:\repos\pytorch\aten\src\thc\generic/THCTensorMathMagma.cu:332
My plan is to address that once I get fastai v1 to 'work' (obviously, if torch is slightly broken, it can't work 100%).
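For reference, a minimal way to hit that MAGMA code path directly is to run an SVD on a CUDA tensor; this is just a sketch, but torch.svd is one of the ops that raises the 'gesdd' error when MAGMA is missing:

import torch
x = torch.randn(8, 8, device='cuda')
u, s, v = torch.svd(x)  # raises "No CUDA implementation of 'gesdd'" if the build lacks MAGMA
print(s)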
I can successfully do

import torchvision

however

from fastai.text import *

gives me
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 from fastai.text import *

~\fastai\courses\dl1\fastai\text.py in <module>()
----> 1 from .core import *
      2 from .learner import *
      3 from .lm_rnn import *
      4 from torch.utils.data.sampler import Sampler
      5 import spacy

~\fastai\courses\dl1\fastai\core.py in <module>()
----> 1 from .imports import *
      2 from .torch_imports import *
      3
      4 def sum_geom(a,r,n): return a*n if r==1 else math.ceil(a*(1-r**n)/(1-r))
      5

~\fastai\courses\dl1\fastai\imports.py in <module>()
      1 from IPython.lib.deepreload import reload as dreload
----> 2 import PIL, os, numpy as np, math, collections, threading, json, bcolz, random, scipy, cv2
      3 import pandas as pd, pickle, sys, itertools, string, sys, re, datetime, time, shutil, copy
      4 import seaborn as sns, matplotlib
      5 import IPython, graphviz, sklearn_pandas, sklearn, warnings, pdb

ModuleNotFoundError: No module named 'bcolz'
Most likely this is something on my end, but on Ubuntu 18.04, I'm installing with pip3:
pip3 install torch_nightly -f https://download.pytorch.org/whl/nightly/cu92/torch_nightly.html
pip3 install fastai
pip3 uninstall fastai
Then I download the fastai library:
git clone https://github.com/fastai/fastai
cd fastai
tools/run-after-git-clone
pip3 install -e .[dev]
The weird thing is that after I install fastai from the repo, pip3 gets broken(!):
$ pip3
Traceback (most recent call last):
File "/usr/bin/pip3", line 9, in <module>
from pip import main
ImportError: cannot import name 'main'
I have actually fully uninstalled and reinstalled pip3 including all of its packages and confirmed that I can reproduce this on my machine. Anyone else seeing this?
Well done!
- I spent a day without success following the same steps as you. It fails on building caffe2 (one of the registered issues on pytorch).
- I am on Windows 10, using CUDA 10 for a 1080 Ti.
- I used the Community version of Visual Studio and activated 14.11 as described on the Microsoft support pages.
OK, we are already hitting enough platform-specific problems that this discussion should continue in a dedicated thread. I suggest that so that others like yourself will have an easier time finding all the Windows 8-related fastai-v1 issues in one place.
So would you kindly start a new thread and share everything you have shared so far, including the errors (very important, as people will search for those)? Then reply to this post with a link to where the discussion has moved.
And afterwards see if this helps: "Error: No module named 'bcolz'." but bcolz is already installed.
Of course, most of the Windows issues are the same for fastai 0.7 and 1.0, since they mostly have nothing to do with fastai itself, which is just pure Python code, but with the building blocks, which are platform-specific at times. The only difference is that fastai 1.0 has slightly different prerequisites, and we have been trying hard to remove any dependencies on problematic packages.
Thank you.
Installing the dev dependencies triggered an update to your pip.
Google is your friend (well, sometimes): please always try to search for the error on Google first.
This one seems to be an issue with pyenv.
But I found a thread that compiles all the potential reasons.
And this one suggests a conflict between the Ubuntu pip and a manually installed pip.
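For what it's worth, here is a quick sketch to see which pip the interpreter itself resolves, independent of the stale /usr/bin/pip3 wrapper script:

import pip, sys
print(sys.executable)                 # the python this pip belongs to
print(pip.__version__, pip.__file__)  # the pip actually importable from that python
# If this works while the `pip3` command fails, only the wrapper script is broken;
# invoking pip as `python3 -m pip ...` sidesteps it.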
You are right; in retrospect, this is more of a gripe with whatever Python is doing, and with the fact that pip can just be overwritten even when I'm not using sudo. I thought it might be due to something in the fastai install, but I suppose not. My apologies. To be fair, I did read all of those prior to posting here, and thought it was odd that only the final fastai installation step (from the git repo) nuked my pip3, but if this is the norm, so be it.
No damage done; it's often hard to tell what triggered the problem. I personally have a habit of instantly pasting the error into Google, and often that resolves things right away.
Moreover, we do want to hear about dependency issues like the one you reported, so that we can document them and make the installation process as pain-free as possible. It could go into http://docs-dev.fast.ai/troubleshoot.html.
So once you find a solution to your problem, please post it here in reply to your message.
Thank you, @jamesp
I do see what you're talking about in the installation log:
Collecting pip>=18.1 (from fastai==1.0.7.dev0)
Using cached https://files.pythonhosted.org/packages/c2/d7/90f34cb0d83a6c5631cf71dfe64cc1054598c843a92b400e55675cc2ac37/pip-18.1-py2.py3-none-any.whl
I will see if I can figure out how to safely upgrade to 18.1+ before installing fastai. Alternatively, if fastai doesn't really require 18.1+, relaxing that requirement might be useful.
And from setup.py:
dev_requirements = { 'dev' : to_list("""
distro
jupyter_contrib_nbextensions
pip>=18.1
pipreqs>=0.4.9
pytest
wheel>=0.30.0
""") }
So it's defined as a requirement in the dev build. If that's really a requirement, I'll see what I can figure out about the Ubuntu 18 mess. Thanks!
It's unlikely we will downgrade this requirement. It's a foundational tool, and older versions have their own problems; if it has a problem, it needs to be fixed, not worked around. If your situation is not covered by the three links I posted earlier, open a new issue with pip on the same site those links point to.
And yes, isolating the upgrade from fastai is a very smart approach.
I'm on Ubuntu 18 and I don't have any issue with it.
Perhaps switch to using a conda env? This is what I use. Something like: http://docs-dev.fast.ai/release#run-install-tests-in-a-fresh-environment
I think it's an important recommendation because you don't want to mess with system-wide packages. Earlier I did, and I broke some things. This time around I don't touch the system-wide Python at all.
I will add this recommendation to the docs.
I think you are exactly right. In my case, it's probably less about fastai messing with my system than about my system having messed-up settings, with fastai's installation revealing the brokenness. I say this because when I follow the docs on a fresh VM without any modification, I have no issues. Thanks for your pointers re: conda.
Have you tried with CUDA 9.2?
Just another gentle nudge to get you guys to create a Windows thread and relocate that discussion there; it will help others.
I have just upgraded to v1 on a Paperspace machine and ran into "cuda not available" problems. In my case I got it working with just the latter part of these instructions, that is:
sudo apt-get --purge remove nvidia-387
sudo apt-get -f install
sudo reboot now
After this it was working OK, and after changing the symlinks I was able to use the old-library notebooks as well.
Many thanks
Hi, I am not sure whether my current issue is with Windows or with the version of fastai I am using. Can you confirm that the installed version 1.0.6 is built from commit 62091ed651fb8a07587fd6e3da805415bb6fd8e0?
I ask because I am getting an error with the data loader. If I run through the dogs_cats notebook
%reload_ext autoreload
%autoreload 2
from fastai import *
from fastai.vision import *
path = untar_data(URLs.DOGS)
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(), tfms=imagenet_norm, size=224)
and then call
next(iter(data.train_dl))
I get the following error
PicklingError                             Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 next(iter(data.train_dl))

d:\ssdbackup\dev\repos\fastai_v1\fastai\data.py in __iter__(self)
     50     def __iter__(self):
     51         "Process and returns items from `DataLoader`."
---> 52         for b in self.dl: yield self.proc_batch(b)
     53
     54     def one_batch(self)->Collection[Tensor]:

D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    817
    818     def __iter__(self):
--> 819         return _DataLoaderIter(self)
    820
    821     def __len__(self):

D:\c_progs\Anaconda3\envs\test_fastai\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    558             #     before it starts, and __del__ tries to join but will get:
    559             #     AssertionError: can only join a started process.
--> 560             w.start()
    561             self.index_queues.append(index_queue)
    562             self.workers.append(w)

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\process.py in start(self)
    103                'daemonic processes are not allowed to have children'
    104         _cleanup()
--> 105         self._popen = self._Popen(self)
    106         self._sentinel = self._popen.sentinel
    107         # Avoid a refcycle if the target function holds an indirect

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224
    225 class DefaultContext(BaseContext):

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323
    324 class SpawnContext(BaseContext):

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     63             try:
     64                 reduction.dump(prep_data, to_child)
---> 65                 reduction.dump(process_obj, to_child)
     66             finally:
     67                 set_spawning_popen(None)

D:\c_progs\Anaconda3\envs\test_fastai\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61
     62 #

PicklingError: Can't pickle <function crop_pad at 0x000000A0A8516378>: it's not the same object as fastai.vision.transform.crop_pad
I am posting here because I don't know whether this is a Windows install error or a bug in the commit I installed from.
This is specific to Windows and multiprocessing. If you put num_workers=0 in your DataBunch, you should be fine.
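To make that concrete, here is a sketch of the workaround applied to the snippet above, assuming (as the advice implies) that ImageDataBunch.from_folder forwards num_workers to the underlying DataLoader:

from fastai import *
from fastai.vision import *

path = untar_data(URLs.DOGS)
# num_workers=0 keeps data loading in the main process, so Windows' spawn-based
# multiprocessing never needs to pickle the transform functions.
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(),
                                  tfms=imagenet_norm, size=224, num_workers=0)
next(iter(data.train_dl))  # should now return a batch without the PicklingError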