Let’s use this thread to discuss running fastai on your local machine using docker containers.
I just noticed that Paperspace has released the latest build of their fastai Docker container. I used their container for part 1 of the course and it worked quite well. I’ve been trying to use the seeme.ai container for part 2 because it was “newer”, and had some initial success but then ran into problems (CUDA errors). So I’m going to try the Paperspace container on my DL box at home and see if I have better luck. The idea for me is to avoid dealing with a local install on my own (i.e., “let someone else do the hard work”).
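For anyone else going the same route, this is roughly the shape of command I have in mind; the image name/tag and the mount path are placeholders, since I haven’t checked exactly what Paperspace currently publishes on Docker Hub:

```bash
# Sketch only: the image name/tag and notebook directory are assumptions --
# substitute whatever Paperspace actually publishes.
# Requires the NVIDIA Container Toolkit for --gpus to work.
docker run --gpus all -it --rm \
  -p 8888:8888 \
  -v "$HOME/fastai":/notebooks \
  paperspace/fastai:latest \
  jupyter notebook --ip=0.0.0.0 --no-browser --allow-root
```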
Thanks for sharing, Chris. What is the above command in the Dockerfile doing?
Is it the cloned diffusion_nbs repo that is being copied to the /workspace directory of the Docker image?
Sorry, I don’t understand these in detail.
I’m going to try your approach, as I’m not having much luck with the paperspace/seeme.ai images atm.
It just copies everything in the folder next to the Dockerfile into that location on the Docker container. You probably already know that hehe. I was actually running git clone within the Docker container after I got it running, but I think the git clone could just be executed in the Dockerfile too. I don’t think this line is 100% necessary TBH, but I just like including it.
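To make that concrete, here’s a rough sketch of the two approaches; the base image and the repo URL are just examples rather than what’s literally in my Dockerfile:

```dockerfile
# Placeholder base image -- use whatever CUDA/PyTorch base you already build on.
FROM pytorch/pytorch:latest

WORKDIR /workspace

# Option 1 (the line being discussed): copy everything sitting next to this
# Dockerfile -- e.g. your cloned diffusion_nbs checkout -- into the image.
COPY . /workspace

# Option 2: clone during the build instead, so the image doesn't depend on
# what happens to be on disk (git may need installing on some base images):
# RUN apt-get update && apt-get install -y git && \
#     git clone https://github.com/fastai/diffusion-nbs.git /workspace/diffusion_nbs
```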
Ah ok, thanks! Yeah, I just ran it in the cloned diffusion_nbs directory and it copied everything over. I asked because I usually just map the cloned directory onto the Docker container and use it from inside it. The trouble with that is that I’m having a heck of a time with Docker changing the owner/group to root:root on every file it touches, so I have to add extra lines to the launcher script to reset the permissions before I fire up the container. Your use case is slightly different, though, so it makes sense!
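In case it helps, the two workarounds I’ve been juggling look roughly like this (the image name and paths are placeholders for my own setup):

```bash
# Workaround A: run the container with your own uid/gid so files created on
# the bind mount aren't owned by root:root in the first place.
docker run --gpus all -it --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/diffusion_nbs":/workspace/diffusion_nbs \
  my-fastai-image

# Workaround B: what my launcher script does today -- reset ownership on the
# host before starting the container again.
sudo chown -R "$USER:$USER" ./diffusion_nbs
```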
I spent the whole day trying to get different containers working, and it turns out Docker was changing owner/group permissions on my filesystem, so nothing was working and I was getting CUDA errors. Time for me to spend those generous credits Lambda Labs gave out the other day … the amount of fiddling I have to do to run things locally on a janky 1070 Ti is just too much for the returns I get.
Yeah, I also just fired up Lambda Labs for the first time to try it out.
The Dockerfile and everything above also worked, but I had to add sudo in front of the docker commands to get it working.
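Concretely, that just meant prefixing the commands like this (the image tag is whatever you used for the build); adding your user to the docker group is the usual way to drop the sudo, if that fits your setup:

```bash
# Build and run need root unless your user is in the docker group.
sudo docker build -t fastai-local .
sudo docker run --gpus all -it --rm -p 8888:8888 fastai-local

# One-time alternative: add yourself to the docker group, then log out/in
# and the sudo prefix is no longer needed.
# sudo usermod -aG docker "$USER"
```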
This may be WAY more than anyone would want, but I built a Docker image based on the jupyter studio image, which gives a really quick way to get up and running to learn the material.
So far this is working like a champ on a fresh Ubuntu 22.04 install. I was able to validate that it was correctly accessing my GPU.
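For anyone who wants to run the same check, this is roughly what validating GPU access from inside a container looks like (the image name is a placeholder for whatever you built):

```bash
# nvidia-smi inside the container should list the GPU if the NVIDIA runtime
# is wired up correctly.
docker run --rm --gpus all my-fastai-image nvidia-smi

# And the same check from PyTorch inside the container:
docker run --rm --gpus all my-fastai-image \
  python -c "import torch; print(torch.cuda.is_available())"
```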