No, just something like
array[:] on a bcolz array.
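A minimal sketch of that idiom: slicing with `[:]` pulls the whole compressed bcolz carray into memory as a plain NumPy array. bcolz may not be installed everywhere, so NumPy stands in here to show the same slicing semantics that carrays follow (the commented lines show the bcolz form with a hypothetical on-disk path):

```python
# With bcolz you would do something like:
#   import bcolz
#   arr = bcolz.open('trn.bc')[:]   # hypothetical path; [:] decompresses into memory
# NumPy follows the same slicing rule: [:] selects every element.
import numpy as np

a = np.arange(6)
everything = a[:]      # selects all elements
print(everything)      # [0 1 2 3 4 5]
```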
Please help me set up for the course on Windows.
I’ve managed to get use of a friend’s gaming rig with nVidia GTX 1070, with a fresh Windows 10 and the latest nVidia drivers installed. Today I tried to follow the instructions at
but got stuck. Also, understand that though I have decades of experience developing software, I am nearly completely ignorant about Windows. So please keep it simple.
First I installed Anaconda 2.7 and Cygwin. Following the directions in the setup video (http://course.fast.ai/lessons/aws.html), Anaconda was installed for my user, and Cygwin for all users. Installed wget and git into Cygwin.
All went well until:
User@DESKTOP /cygdrive/c/users/user/anaconda2/lib/site-packages/Theano
$ python setup.py develop
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'entry_points'
  warnings.warn(msg)
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'extras_require'
  warnings.warn(msg)
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'install_requires'
  warnings.warn(msg)
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help
error: invalid command 'develop'
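A sketch of what this error usually means: the `develop` command is provided by setuptools, not by plain distutils, and the warning paths (`/usr/lib/python2.7/...`) suggest Cygwin's own Python ran the script rather than Anaconda's. Checking which interpreter is active and whether setuptools is importable makes the mismatch visible:

```python
# 'invalid command develop' = setup.py ran under a Python whose
# distutils lacked setuptools' extensions. Check which interpreter
# is actually running and whether setuptools is available:
import sys

print("running under:", sys.executable)
try:
    import setuptools
    print("setuptools", setuptools.__version__, "-> 'develop' should be available")
except ImportError:
    print("no setuptools -> 'develop' will be an invalid command")
```

If this prints Cygwin's `/usr/bin/python` instead of the Anaconda one, adjusting PATH so Anaconda's python comes first is the likely fix.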
The same error occurs after the keras step, too.
And just to head off my next obstacle (next instruction), from the Anaconda prompt:
(C:\Users\User\Anaconda2) C:\Users\User>import theano, keras
'import' is not recognized as an internal or external command,
operable program or batch file.
Thanks for your help diagnosing this situation.
I have spent some time reading the installation topics and found some posts saying Anaconda and Cygwin must be installed in the opposite order. But no one who was successful laid out the complete, correct installation steps.
Also, some posts indicate you have to install an old version of VS and CUDA 8 libraries to access the GPU from the notebooks. True? That seems like a deep rabbit hole with its own pitfalls.
Maybe it would be simpler and more reliable to set up a Linux boot partition and use the Ubuntu setup script. Opinions?
Thanks again for help and advice.
If at all possible, stick with Linux; Windows is brutal once you get past Part I of the course. Even then there were a lot of annoying quirks, but performance-wise it worked. With Part II, most of the stuff isn’t compatible.
How big should my swap partition be for Ubuntu 16.04? I have a 1TB Samsung 960 Pro nvme, 32gb of ram, and a 1080 ti.
Also, since I’ll be installing Windows 10 first then creating the Ubuntu partition, should I physically leave out the graphics card until I’ve fully installed Ubuntu then go back and connect the graphics card and install drivers for both Windows and Ubuntu? Or is it okay to connect the graphics card from the beginning?
Nice work! Thanks for the link. Do you know how much more you are paying on your electricity bill each month, if anything, as a result of running work on your own machine?
Hope this helps: I was advised by Central Computers to take the SLI version of the motherboard instead of the PRO version. The reason given is that if we plan to add another GPU in the future, the workload will be shared between the multiple GPUs, whereas with PRO we can only execute the job on one of the GPUs and the workload will not be shared.
Motherboard — MSI Pro Series Intel Z270 DDR4 HDMI USB 3 SLI ATX Motherboard (Z270 SLI)
I have some money left over and am looking to upgrade my CPU from an AMD FX 6300. Which processor would be best on a tight budget (CPU < $270)? I am thinking of either an i5 6600 or an i7 6600. Because most of deep learning takes place on the GPU (I have a GTX 1070), will the hyper-threading be of any use?
Are there other factors that would justify the extra ~$100 for an i7 vs an i5?
I would do 8GB swap, unless you’re hibernating, in which case you will need a lot more.
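The arithmetic behind that rule of thumb, sketched out (assumption: the hibernation image has to fit the full RAM contents in swap, so swap must be at least RAM-sized):

```python
# ~8 GB swap is plenty for ordinary use; hibernation needs >= RAM size.
def recommended_swap_gb(ram_gb, hibernate=False):
    return max(ram_gb, 8) if hibernate else 8

print(recommended_swap_gb(32))                  # 8
print(recommended_swap_gb(32, hibernate=True))  # 32
```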
CPU does factor into final epoch speeds, but it is not as significant as the GPU. If you can get a better CPU, do it; if it will impact the GPU you purchase, don’t. How much else you will do on the box is a big factor as well: if this is just a remote headless box (no monitor) running Linux, you can get away with less. If you are using it interactively, then I would lean towards a better CPU.
Even though GPU is the main component, the CPU still plays a significant role. Either way, one of the most important things you should focus on outside of the GPU is making sure you are on a PCI Express 3.0 bus, especially if you are planning on multiple GPUs.
First off, I believe they are referring to gaming, which requires SLI to use multiple GPUs. Machine learning doesn’t use SLI, and the Pro version can most likely handle multiple GPUs. I would need an exact model number to check, but unless it only has one PCI Express slot (I highly doubt this), it can do SLI. SLI is an add-on cable that goes on top of the GPUs and allows them to work together; the motherboard doesn’t take part in this. Again, this is all a moot point, as machine learning uses CUDA, and CUDA does not and currently cannot use SLI. This is changing, but it will not be SLI; it will likely be another connection.
Your main concern is to look at the specs and see how the PCI Express bus is listed; it is usually something like (8x, 8x, 4x), which means that when using three GPUs, two will run at 8x and one will run at 4x. 16x is ideal, but you can only do this with one GPU unless you run a Xeon, and the difference between 16x and 8x is near zero. 4x is where you start seeing performance loss on a GPU (and only when using top-tier GPUs like the 1080 Ti). PCI Express 3.0 doubles the speed available, so even 4x will run fast enough for most cards.
A simple rule of thumb: with Z170 you have 16 lanes; Z270 gives you an extra four. Any NVMe drives you use will take 4 each. The rest are shared among the GPUs in the system. The first two will generally run at 8x, and the last will run at 4x most of the time.
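Illustrative lane bookkeeping, using only the numbers from that rule of thumb (16 lanes on Z170, four more on Z270, 4 per NVMe drive); real boards budget lanes differently, so treat this as a sketch rather than a spec:

```python
# Lanes left for GPUs after NVMe drives take their share (4 each).
def lanes_for_gpus(total_lanes, nvme_drives):
    return total_lanes - 4 * nvme_drives

z270_lanes = 16 + 4  # Z170's 16 plus Z270's extra four
print(lanes_for_gpus(z270_lanes, nvme_drives=1))  # 16: one GPU at 16x, or two at 8x
print(lanes_for_gpus(z270_lanes, nvme_drives=2))  # 12
```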
SLI is not a concern in a machine learning box. You won’t even install the SLI connector, and you will still be able to use all the GPUs available (provided you use TensorFlow or write code that can use them; most of the time they will just allow you to run multiple experiments concurrently).
Thank you! Also, do you have any thoughts on the order of graphic card install and OS install?
EDIT: I’ll defer to dradientgescent here since he’s clearly done this for longer than me.
Install all the hardware first, then get the system up and running and tested. Even if video doesn’t work from the start (doubtful), you can just switch the cable to the motherboard until you get the drivers up. I’ve only seen this with a Hackintosh; Ubuntu and Linux Mint will pick up Pascal cards no problem, though at low resolution, until you install the correct drivers.
This isn’t really a problem with most OSes; I haven’t seen it even on Kaby Lake with a 1070. If it is a problem, you can easily switch the cable until the driver is installed and then switch it back. Get all the hardware in the system in one go; it’s just easier that way.
As for the BIOS, I highly recommend using AUTO for the integrated graphics unless you want to specifically disable it (I do this for a specific reason). The reason for using AUTO is that if you have problems and need to run without a GPU, the system will fire up without needing a CMOS reset.
@sravya8 – Thanks for writing this up!
I’m also trying to figure out if I should build my own DIY DL Rig, like you and Brendan did.
Do you mind posting your PC Parts list / BOM cost for the rig?
I took your advice and installed Ubuntu as a second boot partition. There were many malfunctions during installation and setup, at least for this naive user. Is navigating these obstacles considered a necessary rite of passage for Linux geeks, or would it be helpful for me to write up a step-by-step for the sake of the innocent?
There’s one significant problem that’s easy to fix. The Ubuntu setup instructions from the Wiki,
link to an obsolete version of install-gpu.sh. I wasted two hours trying to debug this non-functioning script. Would someone please edit in the correct, working script at
The course notebooks now open; however, several cells generate warnings or errors, and the Lesson 1 notebook fails to run as a whole. The reason seems to be that install-gpu.sh installed keras 2. Rather than rewrite the notebook code, I think it would be simplest to revert to keras 1. Can anyone here please show me exactly how to do this?
Thanks again for your help.
Thanks for the tip - fixed now.
Hi all. It seems that install-gpu.sh installed keras 2 onto my local Ubuntu. Besides deprecation warnings, Lesson 1 gives a pile of errors at vgg=Vgg16().
I think I need to revert to keras 1 for the duration of the course. Can anyone explain how to do this? I am a beginner with Linux/Bash/Python.
I think this should do it:
pip install -I keras==1.1.1
You can use conda-forge to do it as well and keep it inside of Conda.
conda install -c conda-forge keras=1.2.2
(or you can still do 1.1.1 if you want).
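Either way, after pinning the version it’s worth confirming which keras the notebooks will actually see before rerunning Lesson 1. A small guarded check (it also runs harmlessly where keras isn’t installed):

```python
# Confirm the active keras version after pinning with pip or conda.
try:
    import keras
    print("keras", keras.__version__)
except ImportError:
    print("keras is not installed in this environment")
```

Run it from the same environment (and the same interpreter) that launches the notebook server, or the version you see may not be the one the notebook imports.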