Make fastai v0.7 actually usable for beginners -> removing installation obstacles

I know everyone is working on v1 and that is the hot topic :wink:
But at the same time, Machine Learning for Coders was officially released and promoted, so now more new people are trying to get working fastai environments set up, and lots of them are having trouble.

Also, the stated goal of fastai is to “make neural nets uncool again”, but right now you pretty much have to be an expert to actually get the library set up, which is counterproductive to that goal.

The fastai package can only be installed via pip, not conda, but the environment has to be set up via conda, so both are always needed; mixing package managers is not really elegant.

And v0.7 will be the workhorse for anyone going through the 3 MOOCs until the v1-based course is released next year. So let’s make it painless to install!

So, my goal would be to

  • remove installation obstacles/packages that are troublesome
  • actually make it pip-installable (it isn’t really unless you have a predefined working environment set up, preferably via conda)
  • make it more Windows friendly
  • make it work without a conda env (especially in corporate environments outside of data-science-related areas, people may have Python installed but don’t have Anaconda and can’t get it without painful IT requests)

After seeing so many problem threads, I wanted to do a quick but systematic test of what the problem was on Windows, so I misused my wife’s PC for that. That led to basically a full day of digging into ever deeper holes, even on Linux. There are a number of issues I have uncovered.

So I would like to address this step by step. I have started on that journey, but I would like to get some feedback on whether anybody actually cares about this and my pull requests would have a chance, or whether I would be wasting my time. My first two proposals, with more to follow:

Step 1: Remove the bcolz dependency

  1. Lots of errors and issues for Windows and Linux users, pip and conda users alike; error reports regarding bcolz are plentiful in the forums (and in related blog posts on Medium). -> high impact
  2. It is absolutely non-essential for DL students, because it is currently used only for optionally storing precomputed activations, a feature that did not even make it into v1 (see Jeremy’s post Planning to get rid of `precompute` in fastai v1. Comments welcome)
  3. It is completely irrelevant for ML students.


  • Remove it from the setup dependencies, remove it from the global import list and import it only where actually needed, and change the precompute setting to false if it is not installed

I am not saying it is not a great module, or that we should throw it out completely; the idea is to make the fastai library resilient against it not being there and to remove the necessity of installing it. (For most people it cannot be installed using pip because it needs compiling, which for noobs is a hassle on Linux and for Windows users an even bigger hurdle.)
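A minimal sketch of what I mean by making the library resilient (the names `HAS_BCOLZ` and `resolve_precompute` are purely illustrative, not the actual fastai code):

```python
# Guarded import: bcolz becomes an optional dependency instead of a hard one.
try:
    import bcolz
    HAS_BCOLZ = True
except ImportError:
    bcolz = None
    HAS_BCOLZ = False

def resolve_precompute(precompute):
    """Fall back to precompute=False when bcolz is unavailable."""
    if precompute and not HAS_BCOLZ:
        import warnings
        warnings.warn("bcolz is not installed; disabling precompute")
        return False
    return precompute
```

With a guard like this, everything except the optional precomputed-activations feature keeps working on a machine where bcolz failed to install.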

Step 2: Remove dependencies that require compilation/building

  • Removing packages that are not needed immediately, or replacing them with precompiled alternatives, makes the library especially more Windows-friendly (but more noob-friendly in general; even on Linux, compiling is not straightforward and requires several system packages to be apt-installed)
  • The dependencies removed this way could be moved to a separate requirements.txt file, so that a complete environment setup can still be achieved later, while the missing packages are no longer a “taking first steps” impediment.


  • spacy: remove it from the dependencies list. It is only necessary for the NLP-related topics that come up in DL1 from lesson 4 onwards and in ML from lesson 8. It will need to be installed at some point, but for people just starting out it is a major hassle (no wheels, so it is not pip-installable without compiling)
  • pytorch: relax the <0.4 requirement (i.e. replace it with <0.4.2). The library has been made to work with 0.4.x, and on Windows no 0.3.x binaries are available via pip, so this alone throws every Windows user off.
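To make the requirements.txt/extras idea concrete, here is a hypothetical sketch of how the dependency lists could be split (the names and pins below are illustrative, not the actual fastai setup.py):

```python
# Hypothetical split of fastai's dependencies: the core stays pip-installable
# without a compiler, while build-heavy packages move to optional extras.
core_requires = [
    "numpy",
    "pandas",
    "torch<0.4.2",  # relaxed pin: 0.4.x wheels exist for Windows, 0.3.x do not
]
extras_require = {
    "nlp": ["spacy"],         # only needed from DL1 lesson 4 / ML lesson 8 on
    "precompute": ["bcolz"],  # optional precomputed-activation storage
}
```

With a split like this, a plain `pip install fastai` would pull only the core, and something like `pip install fastai[nlp]` would add spacy once the NLP lessons come up.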

Yes, I am aware that some of these issues go away if “you just use Anaconda, they have precompiled binaries”, and that no one likes Windows, but if we want widespread usage these issues need to be solved, and I think they can be. :wink:

Of course, even if the pull requests get accepted, it will be no good until someone with the necessary rights creates and uploads a new dist to PyPI?!

Here is my first pull request; feedback and discussion here are very welcome!


pip is not required (or recommended or supported) for the course. Just creating the conda env is enough - except if using a Windows machine, which we strongly advise against for beginners. For better support of Windows users, the main steps seem to be ensuring that pytorch 0.4 works correctly, so that we can bump the version number in environment.yml (it may already work - it just needs someone to check the notebooks), and creating a ‘’ or explaining what platforms are supported, how to use them, and how to do an unsupported Windows installation if the user wishes.


Okay, understood. Re: pytorch 0.4, I thought that had happened already based on this post from May, but I will start going through the notebooks again. Unofficial pytorch 0.4 support
The bcolz issues are also present with conda and on Ubuntu, though.


What are you referring to there? If I just create the conda env from the repo on Ubuntu, everything seems to work OK for me - am I missing something?

Well, if you search for bcolz on the forums you will find many import error reports, even if you discount the ones that are due to pip. Even though it is in the environment.yml, sometimes it doesn’t get installed. Sometimes it does but cannot be loaded; sometimes it has to be uninstalled and reinstalled together with different numpy versions; there are non-systematic errors here. And I have had that problem myself on Ubuntu with conda. I have the feeling it also depends on the order of installation, so it sometimes works if you manually install it after the environment setup. I cannot reliably reproduce the issues. This is one of the many threads regarding bcolz errors:

Some also experience problems after a git pull / conda env update while it was working before; again, there seem to be specific numpy version requirements or incompatibilities.

These reports go back to 2017 and include Paperspace setups. And yes, sometimes it is just user error, such as not activating the env, etc.
But just by frequency of occurrence, this seems to be the module with the most installation error reports (probably followed by cv2…)

hmm, maybe one simple step would be to actually freeze the versions in the environment files to exact ones we know to be working (together), i.e. your environment for v0.7. v1 has to move with the times, but I guess for v0.7 that would be okay?! So we could have separate environment files in the old/ folder for v0.7?! Might stabilize things a little.
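For illustration, a pinned v0.7 environment file could look something like this (the file name and all version numbers here are placeholders; the real pins would have to come from an environment known to work):

```yaml
# old/environment-v0.7.yml (hypothetical; versions are placeholders)
name: fastai-0.7
dependencies:
  - python=3.6.*
  - numpy=1.15.*
  - bcolz=1.2.*
```

Exact pins would trade getting bug fixes automatically for reproducibility, which seems like the right trade-off for a frozen legacy version.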

Yup, aware of all the threads, of course. But I still do not think any of them used the instructions and a supported platform. It’s common because bcolz requires a C compiler if you don’t use conda, like cv2, so it pops up more than other issues. I do think we should report a better error message in this case (I mentioned this, with some specific suggestions, in the PR you kindly sent us).
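Something along these lines could produce that better error message (purely a sketch; the actual wording and suggestions are in the PR discussion, and `import_bcolz` is a made-up helper name):

```python
def import_bcolz():
    """Import bcolz, turning a bare ImportError into an actionable message."""
    try:
        import bcolz
        return bcolz
    except ImportError as e:
        raise ImportError(
            "bcolz could not be imported. Installing it via pip requires a C "
            "compiler; the supported route is creating the conda env from the "
            "fastai repo (conda env update). See the forums for "
            "platform-specific help."
        ) from e
```

The point is simply that a beginner hitting this sees what to do next, instead of a raw compiler traceback.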

If there is a reproducible example of using the environment file on a reasonably modern and common linux with the right GPU drivers installed, where they didn’t previously try to install in some other way or have an existing environment with that name, then that’s a bug that we do need to fix.


Sorry, didn’t see the message on the PR before my last replies.

The hardest part of getting started with the excellent courses was getting an appropriate environment set up. I’d say this is not just an issue for beginners. I started hand-assembling 1802 assembly code in 8th grade (back in 1976 or so) and am well-versed in using and installing various Unix/Linux packages. Despite all of the attempts to make this easier through package managers and installers, it can still be a struggle due to all the dependencies among packages and various hardware platforms.

Now add the need to set up a VM (for most folks) and connect to it, and you add to the complexity. The only way I can see to make this easier might be through a remote VM with material that you can remote-desktop into. Right now, as far as I can tell, Paperspace only offers a command-line-only version of Ubuntu. I’ve been doing work on reinforcement learning where it helps to be able to display the gym environments and run code outside of Jupyter. For that work I’ve not used my Paperspace VM, because I haven’t yet walked through how to set it up (or whether I can set it up) to use my local computer as a display.

If you just want to run the notebooks, Paperspace’s Gradient option seems like the easiest approach.

The other side to this is that if someone really wants to work in this area, he or she is going to need to learn to work through these issues, in addition to learning the science and practical machine learning skills.


FYI Paperspace also offers a full X desktop in the browser, for those that want it.