I’m working on an Azure server, and having trouble getting Jupyter Lab installed (I prefer it to Jupyter Notebook). I can install it and it works with my (base) environment activated, but if I activate the (fastai) environment then I can’t install it. I figured maybe my fastai environment was out of date, so I tried re-installing fastai, but I get the following error:
Collecting package metadata (repodata.json): done
Solving environment: /
The environment is inconsistent, please check the package plan carefully
The following packages are causing the inconsistency:
Now I’m really stumped. Does anyone have suggestions? I find Python environments confusing.
Conda can be a little weird with inconsistencies. You can sometimes get more information by explicitly specifying the troublesome dependencies, i.e. try
conda install fastai spacy jupyterlab. It treats explicitly requested packages and dependencies a bit differently.
Or you don’t actually have to have jupyterlab in the environment you work from. I have it in its own environment. You just install ipykernel into, e.g., the fastai environment and it appears in JupyterLab. You can also run
python -m ipykernel install --user --name <name> to specify a name (useful if you have
.venv folders in projects).
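A minimal sketch of that setup, assuming JupyterLab lives in base and the working environment is called fastai (the names and display name are just examples):

```shell
# JupyterLab only needs to exist in one environment (here, base)
conda install -n base jupyterlab

# Put ipykernel in the environment you actually work in...
conda install -n fastai ipykernel

# ...then register that environment as a named kernel so it shows up
# in JupyterLab's launcher and kernel-selection menu
conda activate fastai
python -m ipykernel install --user --name fastai --display-name "Python (fastai)"
```

After this, launching JupyterLab from base still lets each notebook run on the fastai kernel.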
Thanks Tom. I didn’t realize that you can run Jupyter Lab in a different environment, which solves my main problem. I guess each of the notebooks could be running a different kernel in a different environment (right?). I see that ipykernel already exists in both my base environment and my fastai environment, but I don’t quite understand how the FastAI course notebooks seem to ‘know’ that they should use the (fastai) environment, despite me having launched Jupyter Lab in (base).
Also, although the fastai environment seems to be installed correctly, I am still at a loss for why I can’t add the jupyterlab package to it. I am getting a raft of errors suggesting that I remove an archive (which I tried) plus about 50 more files…which I haven’t done yet. I’ll let sleeping dogs lie for now, but just in case you have some insight, here is a truncated version of the output:
The following NEW packages will be INSTALLED:
The following packages will be UPDATED:
ca-certificates pkgs/main::ca-certificates-2019.8.28-0 --> conda-forge::ca-certificates-2019.9.11-hecc5488_0
The following packages will be SUPERSEDED by a higher-priority channel:
certifi pkgs/main --> conda-forge
openssl pkgs/main::openssl-1.1.1d-h7b6447c_2 --> conda-forge::openssl-1.1.1c-h516909a_0
WARNING conda.gateways.disk.delete:unlink_or_rename_to_trash(140): Could not remove or rename /data/anaconda/pkgs/certifi-2019.9.11-py36_0/lib/python3.6/site-packages/certifi-2019.9.11-py3.6.egg-info/not-zip-safe. Please remove this file manually (you may need to reboot to free file handles)
WARNING conda.gateways.disk.delete:unlink_or_rename_to_trash(140): Could not remove or rename /data/anaconda/pkgs/certifi-2019.9.11-py36_0/lib/python3.6/site-packages/certifi-2019.9.11-py3.6.egg-info/PKG-INFO. Please remove this file manually (you may need to reboot to free file handles)
WARNING conda.gateways.disk.delete:unlink_or_rename_to_trash(140): Could not remove or rename /data/anaconda/pkgs/certifi-2019.9.11-py36_0/lib/python3.6/site-packages/certifi-2019.9.11-py3.6.egg-info/SOURCES.txt. Please remove this file manually (you may need to reboot to free file handles)
WARNING conda.gateways.disk.delete:unlink_or_rename_to_trash(140): Could not remove or rename /data/anaconda/pkgs/certifi-2019.9.11-py36_0/lib/python3.6/site-packages/certifi-2019.9.11-py3.6.egg-info/installed-files.txt. Please remove this file manually (you may need to reboot to free file handles)
CondaMultiError: InvalidArchiveError(u'Error with archive /data/anaconda/pkgs/certifi-2019.9.11-py36_0.tar.bz2. You probably need to delete and re-download or re-create this file. Message from libarchive was:\n\nCould not unlink (errno=13, retcode=-25, archive_p=94574961684784)',)
You should be able to select the Kernel to use when you launch, and there’s a menu to change it up on the top-right:
That error’s not one I’ve seen. It looks like your package cache got corrupted. You can delete package archives with
conda clean -t. I would note that you can sometimes have issues mixing packages from the conda-forge channel and the main channel (mostly things with binary components, not pure Python). It also leads to that superseded thing a lot. When you specify a channel on the command line it actually overrides the default channel. If you do
conda install -c defaults -c conda-forge then it will keep main at higher priority, which prevents some of the constant moving between channels.
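A quick sketch of the difference (jupyterlab stands in for whatever you’re installing):

```shell
# Channel order on the command line sets priority: listing defaults
# first keeps pkgs/main on top, so conda-forge is only used for
# packages that main doesn't provide
conda install -c defaults -c conda-forge jupyterlab

# By contrast, this puts conda-forge at highest priority and tends to
# pull existing packages over to conda-forge builds:
# conda install -c conda-forge jupyterlab
```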
I have recently been installing my environments solely from conda-forge, with apparently good success (I did just have to reinstall one, but I think I stuffed it up by accidentally letting pip pull in some core things). If you run
conda config --help then you can see the commands to prepend channels and alter their order (or just run
conda config --show-sources and edit the config file). You can also have per-environment channel settings (
conda config --env). So if you just add conda-forge before defaults before installing packages, you can get a whole environment from conda-forge.
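For concreteness, a sketch of those config commands (run with the target environment active):

```shell
# See every config file that contributes settings, and what each sets
conda config --show-sources

# Prepend conda-forge to the channel list for the *active environment
# only* (--add puts the channel at the top of the list)
conda config --env --add channels conda-forge

# Verify the resulting channel order
conda config --show channels
```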
Thank you, those insights are extremely useful. I’ve also suspected that mixing channels was causing problems before, and it’s nice to have some options to work with. I didn’t realize that conda-forge was complete enough to build a whole environment from. Good to know. I really appreciate your help.
Yeah, I’m pretty sure Anaconda now uses conda-forge as an upstream for a lot of its packages, though I think it pulls the recipes, not the built packages; I believe they diverge slightly in the compiler versions used (both coming from the same base CentOS toolchain, but different versions).
Sorry to bug you but I just ran into a related question. The first notebook (00_exports.ipynb) has a line in it that only works in Python 3.
Jeremy makes a system call from inside the Jupyter notebook to invoke the file:
!python notebook2script.py 00_exports_jp.ipynb
…but on my machine, the system call defaults to system python (2.7), even though the notebook kernel itself is running python 3.6, so the line throws an error. Do you know if there is a way to invoke a particular python environment with a system call like that (maybe it’s a matter of setting a path variable?)
I can run it from the command-line as a fallback, but I think it gets used a lot and it would be nice to be able to run it directly from the notebook.
You could try changing it to
%run notebook2script.py ...; unlike a !-prefixed shell command, %run executes under the notebook’s own kernel. But in general, without changing code, I don’t have a solution.
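Another workaround (my suggestion, not something from the course materials) is to interpolate the kernel’s own interpreter into the shell call. sys.executable is always the path of the Python running the kernel, and Jupyter expands {…} expressions in !-lines before handing them to the shell:

```python
import sys

# sys.executable is the absolute path of the interpreter running this
# code -- inside a notebook, that's the kernel's Python (e.g. the one
# in the fastai environment), not whatever python the shell finds first
print(sys.executable)

# In a notebook cell you could therefore write:
#   !{sys.executable} notebook2script.py 00_exports.ipynb
# which runs the script under the same Python as the kernel.
```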
This probably also depends on how you launch JupyterLab. I believe in my setup such a call would run in the jupyterlab environment, which still causes issues since dependencies may not be there, but at least it avoids the system Python, where I have nothing installed apart from what other packages have pulled in (though Arch Linux at least uses Python 3 as its system python).
If you are running Anaconda you can create an environment called e.g.
fastai and specify that it should use Python 3. Then, with that environment active before you run Jupyter,
%run notebook2script.py ... works brilliantly.
Thanks Joseph; that’s what I was initially trying to do but I have an as-yet unsolved installation problem that was preventing me from doing it (see above).
Does anyone happen to know if it’s possible to have two separate Jupyter lab sessions running in tabs in your browser? I have two cloud VMs and I can access either one separately with Jupyter lab, but not at the same time.
If one VM is running, I usually get some version of “Can’t access localhost:8888” when I try to boot up Jupyterlab on the second one. Also, a little message that says
channel_setup_fwd_listener_tcpip: cannot listen to port: 8888 pops up when I SSH into the second machine, but I’m not sure where that is coming from. I tried messing around with ports and IP addresses, but no success yet.
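That cannot-listen-to-port message means the second SSH session is trying to forward local port 8888, which the first session already holds. One way around it (hostnames and user are placeholders) is to give each VM its own local port:

```shell
# First VM: forward local 8888 to the remote Jupyter on 8888
ssh -L 8888:localhost:8888 user@vm1

# Second VM: pick a different *local* port, e.g. 8889; the remote
# side can still be 8888
ssh -L 8889:localhost:8888 user@vm2
```

Then open http://localhost:8888 and http://localhost:8889 in separate browser tabs.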
Oh my gosh, I finally figured this out. I posted the answer here. My dream has come true!