I have been linking a folder on the server to the Docker container using -v. However, your Dockerfile creates a new user "docker" rather than sticking with the default root. This user does not have the correct permissions on the host folder. I'm sure that's resolvable, but why not just use the root user in the container? All the other container images I have tried use the root user.
The decision not to use the root user was based on a lot of popular images also avoiding it; keras-docker, for example, uses a keras user. Regardless, I was able to mount and edit a host volume using the image. Mind sharing the command and the environment (OS, Docker version) you're using?
I used the amazon linux ami.
Docker version 1.12.6, build 7392c3b/1.12.6
I ran the container using this:
volumes = "-v /v1/.jupyter:/docker/.jupyter “
”-v /v1:/host"
fab.run(“docker run {volumes} -w=/host -p 8888:8888 -d -i “
”–name notebook deeprig/fastai-course-1”.format(**locals()))
fab.run(“docker exec -d notebook jupyter notebook”)
This asks for the password and then gives a 404 error. If I docker exec into the container and run "ls -a", it says "permission denied". "sudo ls -a" works, though.
In the second fab command (docker exec -d notebook jupyter notebook), is there a reason you're running the notebook again? The docker run in the previous command should already spawn a notebook on port 8888, so the second command just tries to start another one. When you exec into the container, are you doing it with exec -it /bin/bash?
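For reference, something like this (reusing the notebook container name and the /host mount from your command) should drop you into an interactive shell so you can poke at the permissions:
docker exec -it notebook /bin/bash
# then, inside the container:
whoami
ls -la /host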
As a workaround for now, I would add the option “-u root” to docker run. This will override the user in the image and run the notebook as root.
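Concretely, with the flags from your earlier command it would look roughly like this (paths and names are the ones you posted):
docker run -u root \
  -v /v1/.jupyter:/docker/.jupyter -v /v1:/host \
  -w=/host -p 8888:8888 -d -i \
  --name notebook deeprig/fastai-course-1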
@anurag, after executing the second instruction in your first post I get a problem with Jupyter not having any kernel. Any ideas on how to fix it?
bash:
"[I 05:39:29.301 NotebookApp] Writing notebook server cookie secret to /home/docker/.local/share/jupyter/runtime/notebook_cookie_secret
[I 05:39:29.552 NotebookApp] Serving notebooks from local directory: /home/docker/fastai-courses/deeplearning1/nbs
[I 05:39:29.552 NotebookApp] 0 active kernels
[I 05:39:29.554 NotebookApp] The Jupyter Notebook is running at: http://0.0.0.0:8888/
[I 05:39:29.554 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 05:40:55.280 NotebookApp] 302 GET / (172.17.0.1) 5.69ms
[I 05:40:55.307 NotebookApp] 302 GET /tree? (172.17.0.1) 9.64ms
[I 05:41:01.839 NotebookApp] 302 POST /login?next=%2Ftree%3F (172.17.0.1) 6.07ms
[E 05:41:02.888 NotebookApp] Failed to load kernel spec: 'python2'
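For what it's worth, the kernel specs the container actually registers can be listed like this (assuming the container is still named notebook, as in the commands above):
docker exec -it notebook jupyter kernelspec list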
I’m running Arch Linux and use this Docker image (thanks for that!) on my local machine, which works flawlessly until vgg16.h5 is downloaded. If I then stop the container and try to start it again, I always get the following error (exit status 1):
Traceback (most recent call last):
File "/opt/conda/bin/jupyter-notebook", line 6, in <module>
sys.exit(notebook.notebookapp.main())
File "/opt/conda/lib/python2.7/site-packages/jupyter_core/application.py", line 267, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/opt/conda/lib/python2.7/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/notebook/notebookapp.py", line 1290, in initialize
super(NotebookApp, self).initialize(argv)
File "<decorator-gen-6>", line 2, in initialize
File "/opt/conda/lib/python2.7/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/jupyter_core/application.py", line 243, in initialize
self.migrate_config()
File "/opt/conda/lib/python2.7/site-packages/jupyter_core/application.py", line 169, in migrate_config
migrate()
File "/opt/conda/lib/python2.7/site-packages/jupyter_core/migrate.py", line 241, in migrate
with open(os.path.join(env['jupyter_config'], 'migrated'), 'w') as f:
IOError: [Errno 13] Permission denied: u'/home/docker/.jupyter/migrated'
I already tried googling but to no avail.
Did anyone else have such a problem, or am I doing something wrong with Docker?
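In case it is the same host-permission issue discussed above, one thing worth checking is who owns the .jupyter files the server is trying to write to. A minimal sketch, assuming you mount a host folder over /home/docker/.jupyter and the image doesn't override the entrypoint (the host path and the docker:docker owner/group are my assumptions, not something from the image docs):
# throwaway root shell in the same image, with the same volume mounted
docker run --rm -it -u root -v /path/on/host/.jupyter:/home/docker/.jupyter deeprig/fastai-course-1 /bin/bash
# inside that shell: check numeric ownership, then hand it back to the docker user
ls -ln /home/docker/.jupyter
chown -R docker:docker /home/docker/.jupyter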
I really don't know what to do anymore. I have the data under /Users/obssd/data, but running
docker run -it -p 8888:8888 -v /Users/obssd/data:/home/docker/data deeprig/fastai-course-1
doesn't work and gives the error I showed you.
I’ll try running this command on a Windows machine and see how it goes there.
Regarding your GPU question: I suppose that yes, you do need a GPU if you train on the whole dataset rather than just the sample.
Did you change the path in your notebook to point to /home/docker/data as well? I used this Docker image and was getting the same error, but I was able to run the sample set after I changed the path in my notebook.
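A quick sanity check is to confirm the data is actually visible inside the container at the path the notebook uses (the container name below is whatever docker ps reports, not a fixed value):
# find the running container's name
docker ps
# confirm the mounted data is where the notebook expects it
docker exec -it <container-name> ls /home/docker/data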
Hey Anurag, the Docker container worked perfectly on my local box with a 1050 Ti (which I had all kinds of problems getting to work with native drivers etc.), so kudos for this great work!
I also signed up for a Crestle account, and when I ran the Crestle notebook with the full dogscats dataset, it took about 645 seconds. When I ran it in my nvidia-docker container on the 1050 Ti, it took about 620 seconds. I was expecting the Crestle GPU-backed notebook to run much, much faster, since it uses a dedicated Tesla K80. Any ideas on what I may be doing wrong with the Crestle notebook GPU enablement?
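One way to check whether the GPU is actually being exercised during a run is to watch nvidia-smi while the model is fitting; on the local nvidia-docker box this works from the host, though I'm not sure what Crestle exposes:
# refresh GPU utilisation every second while the fit is running
watch -n 1 nvidia-smi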
Actually, I did not. I turned off the GPU notebook. Does this happen every time a notebook is GPU-enabled? My workflow might be to work with samples on a CPU backend and then restart Jupyter with the GPU when I want to do a full data run. I'll try it again and note the timings etc.
OK, just a quick update: I did multiple back-to-back runs with different batch sizes on the dogs-vs-cats dataset, and Crestle does seem slightly faster than the local 1050 Ti, but it stayed constant around 570-580s for these runs regardless of batch size. I did not restart the notebook between the runs, i.e., they were all one after another.