Platform: Docker (Free; non-beginner)

Looks good Florian, and good work! I have also previously released a starter pack for deploying fastai v1 (and now v2) vision models via Docker. I guess the difference is that I have avoided the nginx route completely, which may or may not be a great solution depending on the use case. Here is the v2 post => Deployment ready template for creating responsive web app for Fastai2 Vision models


Sorry for the late reply, I missed the notification on this.

If you pass the --gpus all flag when you start the container, it will have access to your GPU inside the container. I have updated the Docker hub documentation to reflect that.

That being said, you will need an NVIDIA GPU to use PyTorch/fast.ai.
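As a sketch of what that looks like (the image name and command here are illustrative, not the exact ones from the Docker Hub page; this also assumes the NVIDIA Container Toolkit is installed on the host):

```shell
# Start the container with access to all host GPUs and expose
# Jupyter's default port. Replace the image name with the one
# you actually use.
docker run --gpus all -p 8888:8888 fastdotai/fastai
```

Without `--gpus all`, `torch.cuda.is_available()` will return `False` inside the container even on a GPU machine.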

I had to run the following commands to get the course notebooks to work with the official Docker containers:

  1. Open a terminal in Jupyter from the main page (the one where you select notebooks, not from inside a notebook) by clicking New on the right and then selecting Terminal
  2. Install the graphviz binaries: apt-get install graphviz
  3. Change directory into your cloned course-v4 or fastbook git repository; in my case I used the volume from my previous post: cd coursenbs/fastbook
  4. Install the requirements with pip: pip install -r requirements.txt
  5. Restart any running notebooks and make sure the notebooks you open are trusted

Note: You have to do this every time you relaunch the Docker container!
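Put together, the steps above look roughly like this in the container's terminal (the repo path is from my setup; adjust it to wherever you cloned the notebooks):

```shell
# 1. Install the graphviz binaries the notebooks need
apt-get update && apt-get install -y graphviz

# 2. Move into the cloned notebooks repo (my volume path; yours may differ)
cd coursenbs/fastbook

# 3. Install the Python requirements for the course notebooks
pip install -r requirements.txt
```

Since the container filesystem is ephemeral, these changes disappear when the container is removed, hence the note above.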

@hamelsmu adding you to the thread, you might be able to fix :slight_smile: :grinning:

He’s really great. After I wrote my first post, he created a separate Dockerfile for the course with the notebooks already cloned in.

I keep meaning to write a Docker Compose file for it instead of using the run command, but I haven’t had time.


It’s a great community of people indeed!

With respect to the repo already being cloned, I prefer to keep files you will likely edit outside of the container and add them via a volume.

That way all your changes persist even if you restart the container. I usually just mount my home folder on my AI machine so I have all my projects available…
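A minimal sketch of that volume approach (the image name and both paths are placeholders for whatever you actually use):

```shell
# Mount the host's ~/projects into the container so notebook edits
# survive container restarts; the container path is arbitrary.
docker run --gpus all \
  -v "$HOME/projects":/workspace/projects \
  -p 8888:8888 \
  fastdotai/fastai
```

Anything written under /workspace/projects inside the container lands directly in ~/projects on the host.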

I set this up months ago … https://github.com/miramar-labs/fastai-devbox


I see you do that too :grinning::grinning:

As a suggestion, you can also add the fastai and PyTorch cache folders as a volume, so you don’t need to re-download pretrained model weights, for example…
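For example, assuming the default Linux cache locations for torch and fastai (check your own setup, as these paths can vary by version and configuration):

```shell
# Persist downloaded pretrained weights and fastai datasets across
# container restarts by mounting the cache directories.
docker run --gpus all \
  -v "$HOME/.cache/torch":/root/.cache/torch \
  -v "$HOME/.fastai":/root/.fastai \
  -p 8888:8888 \
  fastdotai/fastai
```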

Hello all ( @chusc @zerotosingularity) we have created official Dockerfiles for your development environment which you can see on the README here:

This includes very detailed instructions on how to use them with a GPU, how to start a notebook, etc., along with some utilities baked in. @jeremy and myself are in the process of refactoring the Docker containers and currently have 2 different Docker Hub orgs, one called fastdotai and another called fastai. For right now, you can use what is listed in the README, as the other stuff is what we are using for CI. At some point we will consolidate everything a bit more and refresh the README when it changes, but these should be ready to use and working!

A docker-compose file could be a good idea. We already have a docker-compose file in most repos, but that is to support GitHub Codespaces on CPU. We could try to find a way to orchestrate the Makefile so that you could run docker-compose on GPUs and have everything just work (this would also be helpful for me). Tagging @jeremy as well for visibility, in case he has an opinion.

I agree that Docker Compose files make it more tractable to use Docker as your development environment, so you don’t have to mess with all the mount commands and can start multiple services concurrently (notebook, docs site, etc.)
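A hedged sketch of what such a GPU-enabled compose file could look like (service name, image, and volume paths are all placeholders; the `deploy.resources` GPU syntax needs a reasonably recent Compose version):

```shell
# Write a minimal docker-compose.yml that requests all host GPUs
# and mounts a projects folder; image name is a placeholder.
cat > docker-compose.yml <<'EOF'
services:
  notebook:
    image: fastdotai/fastai
    ports:
      - "8888:8888"
    volumes:
      - ~/projects:/workspace/projects
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF
```

With a file like this in place, `docker compose up` replaces the long `docker run` invocation with all its `-v` and `--gpus` flags.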

Thanks for the feedback! Those are a great resource and glad they are available/officially supported.

With respect to @mlabs’ question, it might be worth checking whether graphviz needs to be installed with apt-get?

On a topic level, although I have added a reference to the official fastai Docker containers, it might be a good idea to convert this topic, or create a new topic, specifically for “Official Docker images” or something like that, which might be less confusing for people looking for fastai + Docker guidance.

What do you think?

@hamelsmu I looked at your docker run examples on github/fastai/docker-containers and want to add that the PyTorch GitHub docs suggest using the --shm-size parameter.

I found that --shm-size and --ulimit memlock=-1 were important when running fastai v1; I think I was running the Paperspace fastai v1 container at the time. Obviously people use Docker containers in many different environments, but for desktop development with run or compose, an --shm-size of around half your RAM (and ideally greater than your VRAM) together with memlock=-1 is a good idea.
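Combining those flags, a run command along those lines might look like this (the 16g value and image name are examples; tune --shm-size to roughly half your own RAM as suggested above):

```shell
# Larger shared memory for PyTorch DataLoader workers, plus
# unlimited locked memory, as recommended in the PyTorch Docker notes.
docker run --gpus all \
  --shm-size=16g \
  --ulimit memlock=-1 \
  -p 8888:8888 \
  fastdotai/fastai
```

Too small an --shm-size typically shows up as DataLoader workers crashing with "bus error" or shared-memory errors during training.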

I think Docker-based development is going to be the best way to run this on your desktop shortly, because Windows 10 WSL2 GPU/CUDA Docker support is in Fast Ring beta. Soon you’ll be able to run this on any Windows gaming machine with an NVIDIA graphics card just by installing a WSL2 distro and Docker and running a command.