Hi @mdmanurung , I ended up doing something similar to what @balnazzar has recommended, except I used a container published by Paperspace. This container comes preinstalled with pytorch/fastai/fastbook, but the fastai version is 2.6.0, so I upgraded it to ‘latest’, which is currently 2.6.3.
You’ll need to install:
- Docker
- the NVIDIA Container Toolkit
- the Paperspace container image
And you’ll need to play around with the configuration a bit to map your fastbook folder inside the container. (This is what I’ve done: /devel/fastbook, which I git cloned from its repo, is mapped as /notebooks inside my container.)
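As a rough sketch, that folder mapping is just a `-v` (volume) flag on `docker run`. The image tag below is a placeholder, not the exact Paperspace image name — check their registry for the tag you actually pulled:

```shell
# Hypothetical invocation -- "paperspace/fastai:latest" is a placeholder
# image tag, and the port/command may differ for your image.
docker run --gpus all -it --rm \
    -p 8888:8888 \
    -v /devel/fastbook:/notebooks \
    paperspace/fastai:latest \
    jupyter notebook --ip=0.0.0.0 --allow-root
```

The `--gpus all` flag is what the NVIDIA Container Toolkit enables; without the toolkit installed, that flag fails.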
This can be a faster way if you’re OK with docker containers. I’m no expert, but I learned enough about it to get one going, because it saves me the trouble of trying to install everything directly on my Linux machine, which I always had issues with.
P.S. I downloaded their April 25 rc2 image, but they have newer images available (which I’m assuming have the latest version of fastai etc.)
Thank you for your suggestion. For future reference, what additional information would be helpful to include? I am a beginner so I am not sure what to provide other than what I wrote.
This is not a beginner question - setting up on your own workstation is an advanced topic. So I’ve moved this to the thread set up for this advanced discussion.
I was giving that piece of advice in order to stay in line with conda’s docs:
When you begin using conda, you already have a default environment named base. You don’t want to put programs into your base environment, though. Create separate environments to keep your programs isolated from each other.
I don’t know exactly why they recommend that, but one reason could be that base’s /bin is always in the search path (so that you can call conda and other basic tools from any env), and if one installs other programs into base, it becomes hard to distinguish which one is being called (tip for beginners: use which).
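For instance, from a terminal you can see which binary wins like this (`python3` is just an example; the same works for `jupyter`):

```shell
# List every python3 on the search path, in priority order;
# the first line is the one that actually runs when you type "python3".
type -a python3

# Or get just the winning entry:
which python3
```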
Another issue could be that different pieces of software bring with them different versions of the same package, etc.
I tried to do some experimentation with Shark, and it’s still rough. It requires torch-mlir, which in turn requires a nightly build of torch, currently at version 1.12. However, it also requires functorch, which requires PyTorch 1.11. The in-development version of functorch (0.2.0) will be aligned with version 1.12 of PyTorch, but using the main branch of functorch did not work for me.
I’m sure we could find a previous version of torch-mlir that works with PyTorch 1.11, and then Shark should work fine. However, model training requires some adaptation, and model inference does too. I have no idea if those adaptations could be automated for all the models in the fastai library, but this chart makes me doubt it. Also, note there are still no mentions of Metal compatibility in the shark-metal column.
My goal was to try to set up an environment in my M1 laptop for fastai learning and experimentation, which would be very convenient. But this still looks like something that would require a lot of effort and manual tinkering, completely defeating the purpose. Fortunately I have a Linux box that I can access from anywhere, so that’s what I’ll keep doing until things mature a bit.
I have a hunch that PyTorch support could be announced during WWDC in June. If so, it would be amazing if it supported both the GPU and the Neural Engine. In my experience, the Neural Engine is much faster than the GPU; at least for inference.
Just out of curiosity, what does your sys.path look like in jupyter? And since it seems you’ll probably be throwing this install away anyway, are you able to do a “!pip install torch” from inside the jupyter notebook which cannot find your installed modules?
I find it odd that your sys.path inside Jupyter doesn’t seem to know anything about your conda env.
If you look at the PATH variable inside your WSL2 install, you may have a situation where you’re reaching the Python 3.8 install before you ever get to the conda/Python 3.9 env?
If that’s the case, switching those two around might help (or just get rid of any jupyter install in your base WSL2). It seems you have Python 3.8 installed in WSL2, and then in the conda env you have 3.9 installed.
What I find weird is that conda is supposed to isolate you from your local env, so when you activate your conda env it should show you the Python 3.9 version, but it doesn’t.
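One quick way to check this from inside the notebook itself (or any Python session) is to print which interpreter is actually running and where it looks for imports:

```python
import sys

# The interpreter actually executing this code; inside an activated
# conda env this should point at .../envs/<env-name>/bin/python,
# not the system Python.
print(sys.executable)

# The directories searched by "import"; if the env's site-packages
# directory is missing here, packages installed into the env
# won't be found.
for entry in sys.path:
    print(entry)
```

If `sys.executable` points at the system Python even with the env activated, the notebook server was launched by the wrong `jupyter`.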
I think conda’s docs are not so appropriate for beginners. I recommend using a single base env for beginners, and for most experts too, unless you really need isolated environments for different projects.
I would strongly recommend starting again, without this step. Environments really are not a great option for beginners, and make debugging issues like this far harder.
Delete any miniconda/mambaforge/etc directories you have in your home directory. Make sure all mention of conda stuff is removed from your .bashrc. Close all your terminals. Then open a new terminal, make sure that jupyter doesn’t work (if it does, then you’ve still got an old version somewhere), and then try again.
The output of that shows that you’ve accidentally installed jupyter into your system python, and when you ran jupyter you did so when conda wasn’t activated. After you’ve removed all your conda stuff as mentioned above, and closed and reopened your terminals, you’ll need to delete jupyter (before you reinstall conda/mambaforge):
I was activating the environment before running jupyter, but whether I activated it or not, which jupyter returned the same (system Python) location.
So yes, jupyter was installed in the system python, and that’s the one that got opened - I think I did this accidental install some time ago and not when installing fastai.
So, even though by the time I saw your message I had already installed fastai again (in its own environment), running `python3 -m pip uninstall -y jupyter jupyter_core jupyter-client jupyter-console jupyterlab_pygments notebook qtconsole nbconvert nbformat jupyterlab-widgets nbclient` did the job!
It’s strange, nonetheless. Indeed, if you call jupyter from inside a conda env, its jupyter executable takes priority over other jupyters installed elsewhere.
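A minimal sketch of why that priority should hold, assuming a default miniconda layout (the env name `fastai` and the path are just examples): `conda activate` effectively prepends the env’s `bin` directory to `PATH`, and the first matching entry wins.

```shell
# Roughly what "conda activate fastai" does to PATH
# (the env path below is an example; yours may differ):
PATH="$HOME/miniconda3/envs/fastai/bin:$PATH"

# PATH entries are searched left to right, so this env's jupyter
# and python should shadow any system-wide copies:
echo "$PATH" | tr ':' '\n' | head -n 1
```

So if the system jupyter still ran, something (e.g. a shell alias, or jupyter simply not being installed in the env) must have bypassed this ordering.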
Anyway, do as Jeremy suggested. If you use the whole conda machinery for fastai only, you have no use for multiple envs.
And it’s new to me too that for such a use case one can use the base env without causing a mess; that’s useful to know (e.g. for containerized stuff, etc.).