Don’t know if this is useful for anyone, but I built a Docker image for fastai. I’ve tested it on AWS on one of the small GPU instances (g3s.xlarge). The GitHub repo is found here:
Docker Hub: link
Docker Image based on:
This is intended to be run on machines with GPU(s).
Open to suggestions for Linux / Python packages to add to the image.
Let me know if there are any issues building or running the image.
Hope it helps,
I have not checked this image out, but note that Nvidia is providing optimized images themselves at https://ngc.nvidia.com
Registration is required but free… @kai recently gave a tech talk at the TWiML&AI EMEA meetup.
Just grab the PyTorch image from Nvidia and slap fast.ai on top and presto…
Here is the link to the YouTube video.
PS: @kai’s jupyter theme is pretty amazing!
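The “grab the PyTorch image from Nvidia and put fast.ai on top” idea above could be sketched like this (the tag `21.07-py3` and the image name `fastai-ngc` are my assumptions; check ngc.nvidia.com for a current tag):

```shell
# Write a minimal Dockerfile that layers fastai on top of NVIDIA's
# NGC PyTorch image. The tag below is an assumption; pick a current one.
cat > Dockerfile.fastai <<'EOF'
FROM nvcr.io/nvidia/pytorch:21.07-py3
RUN pip install fastai
EOF

# Build and run it (requires Docker with the NVIDIA runtime):
#   docker build -f Dockerfile.fastai -t fastai-ngc .
#   docker run --gpus all -it --rm fastai-ngc
cat Dockerfile.fastai
```

The build/run commands are left as comments since they need a GPU host with the NVIDIA container runtime installed.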
Wonderful contribution, @timlee. Will this image (including GPU functionality) work on a Windows machine?
I don’t have a lot of experience with Windows + Docker, especially since I don’t know how CUDA works on Windows, but in principle it should, since it’s just a Docker image. For example, I ran the Ubuntu image on a CentOS machine and it seemed to work OK. Give it a try and let me know; if you have a tutorial you follow, post it and I’ll try it on my Windows machine this weekend.
I wrote a series of articles on Docker that folks getting started with it might find useful. I used macOS, but most everything applies on any OS.
This was a fantastic series - kudos for a well written introduction!
Good stuff Tim. I’ll def. try to make use + modify this to my needs.
@jcatanza and @timlee This is not going to work on Windows: Docker on Windows is just a Linux virtual machine. Docker is only a way of isolating processes in a Linux environment, so you have to have a Linux kernel to do that.
There are native Windows Docker containers now, but that would require a completely different Dockerfile with Windows installation instructions. Also, it’s not possible to use the GPU with Docker on Windows.
EDIT: The same applies to macOS of course
Disappointing but informative news. Thanks, @Kai!
It’s just better to do deep learning stuff on Linux, I guess. Actually, if you’re a novice to Linux, don’t bother getting into it for now. Use all the awesome full-service providers, focus on deep learning, and the rest will evolve automatically!
Yes, don’t worry about wrapping your head around Linux. It’s not a necessary requirement. But it’s definitely easier to install dependencies and OSS on Linux (imo).
Or, if you’re really curious to check out Linux, I’d suggest trying out Ubuntu 18.04.
Initially, stick to the GUI; don’t try to learn all about the Bash scripting world, etc. Ubuntu 18.04 is pretty good in terms of GUI offerings (and DL environment setup is much easier now — you can just follow the instructions in the course repo and it’ll work perfectly).
Soon you’ll realise that the GUI is pretty frustrating, and even grabbing your mouse to do things is annoying. Then, to speed up your workflow, you’ll end up learning one command a day and soon become comfortable with the Linux workflow:
Ex: how do I check my CPU usage without searching for and opening System Monitor, which is annoying? Look up the command for it.
I don’t want to copy my dataset with my mouse; I want to do it right from the terminal. Let’s see how to do that.
Oh okay, I’d always want to see my disk usage when copying something, so you’ll end up writing a custom script that copies and gives you a log of disk usage.
And soon you’ll start to understand the Linux memes
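To make the “one command a day” idea concrete, here are terminal equivalents of the GUI tasks above (a sketch; the dataset path is made up):

```shell
# CPU usage without opening System Monitor: `top -b -n 1` gives a
# snapshot; the raw per-CPU counters it reads live in /proc/stat.
grep '^cpu ' /proc/stat

# Copy a dataset from the terminal instead of with the mouse
# (path is hypothetical):
#   cp -r ~/datasets/pets /mnt/backup/

# Disk usage of the current filesystem -- the kind of check you'd
# fold into a custom copy-and-log script:
df -h . | tail -n 1
```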
This is awesome! Just for reference we (Paperspace) also maintain a docker image here https://github.com/Paperspace/fastai-docker/ in case it’s of any interest
Also, how can I update fastai?
Log in, then git pull and reinstall, then git clone the part-3 code.
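Spelled out as a script, those update steps might look like this (the path `/fastai/fastai` is my assumption about where the clone lives in the image; the notebooks repo is the one used elsewhere in this thread):

```shell
# Sketch of the update recipe: pull, reinstall, clone the notebooks.
cat > update_fastai.sh <<'EOF'
#!/bin/bash
set -e
cd /fastai/fastai               # assumed location of the fastai clone
git pull                        # grab the latest fastai
pip install -e ".[dev]"         # reinstall the updated checkout
cd /fastai
git clone https://github.com/fastai/fastai_docs.git   # course notebooks
EOF

# Syntax-check the script without running it (it needs network + the container):
bash -n update_fastai.sh && echo "syntax OK"
```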
To install everything and get ready for fastai part 2, version 3, run the following (assuming Docker + GPU setup):

```
$ docker pull chaffix/fastai:stable
$ docker run -it -d --rm \
    --runtime=nvidia \
    --shm-size=1g \
    -p 8888:8888 \
    -v $SAVE_DIR:/fastai/save_dir/ \
    --name fastai_gpu_jup_container \
    -e PASSWORD=$PASSWORD \
    chaffix/fastai:stable
```

Copy what this returns… it is the ID of the running container. Replace `[here]` below with it:

```
$ docker exec -t -i [here] /bin/bash
```

You will get a shell inside the container. From there:

```
$ pip install -e ".[dev]"
$ git clone https://github.com/fastai/fastai_docs.git
```
It looks like Windows Subsystem for Linux 2 (WSL2) will support GPU access from Docker soon.
I’m trying to install fastai into an NGC PyTorch image (PyTorch Release Notes :: NVIDIA Deep Learning Frameworks Documentation), but installing it via
`conda install -c fastchan fastai` results in a second PyTorch version (v1.7) being installed into the Docker image. The image from NVIDIA already comes with v1.9 preinstalled; below is the output of
`conda env list` before the installation. How can I install fastai without replacing the already-installed PyTorch version?
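One possible workaround (my assumption, not something from the NGC docs): install fastai with pip instead of conda, so the conda solver never tries to bring in its own PyTorch build. pip should leave the preinstalled torch 1.9 alone as long as it satisfies fastai’s version bound. Sketched as a script:

```shell
# Write the workaround as a script to run inside the NGC container.
cat > install_fastai_ngc.sh <<'EOF'
#!/bin/bash
set -e
# pip sees the torch already installed in the image and does not
# replace it unless fastai pins an incompatible version.
pip install fastai
python -c "import torch; print(torch.__version__)"   # should still be 1.9.x
EOF

# Syntax-check only; actually running it needs the NGC container + network.
bash -n install_fastai_ngc.sh && echo "syntax OK"
```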