For those who run their own AI box, or want to

Yes. Install Ubuntu 22.04 (latest/current)


For others who were stuck like me, this works… but you might get an error the first time you open Ubuntu, telling you to go to: Manual installation steps for older versions of WSL | Microsoft Learn

All you have to do is open PowerShell again as administrator and type `wsl.exe --update`.

This works then!

So annoying!

My only question is: can you deploy your model without installing the fastai setup on your local machine? I have been trying, in small steps.

  1. First, I had problems downloading Ubuntu for a long time. That is done now.
  2. Once this is done… apparently even a simple command like cd …/fastsetup/ isn't working.

I really don't want to spend so much time on this. Has anyone written up Jeremy's steps for installing fastsetup and Jupyter Notebook on a local machine? Thank you.

Hey everyone, I have a Dell laptop with an Nvidia GeForce 1050 GPU (only 4 GB of memory).
I read somewhere that 8G is the minimum and of course for the next phase I’ll probably use cloud providers, but for now I have a couple of questions:

  1. What can I expect if I run the fastai course on this GPU? Does it not work at all, or does it just take a lot of time?
  2. Does anyone use Arch Linux (or derivatives) and want to share their setup?
    Thanks a lot!

PS: I managed to set it up myself with hints from this thread. If someone’s interested, I’ll post a doc with all the steps later.

These are the steps I took:

  • install WSL
  • install mambaforge in WSL
  • install fastbook with mamba/conda
  • install jupyter and jupyterlab with mamba/conda

If you are using WSL2, you shouldn't access project files from the Windows partition (cross-OS file access is much slower). See Comparing WSL Versions | Microsoft Learn for more info.

Hello all,

I’ve just updated to Ubuntu 20 and have attempted to recreate the fastai environment from scratch. I have a GTX 1070 locally and use miniconda to install packages. All is well after installing PyTorch and fastai: Jupyter reports `torch.cuda.is_available()` as True, and conda lists PyTorch as using CUDA.

Next, I install opencv. After its installation, `conda list pytorch` shows:

pytorch                   2.0.1           cpu_py311h53e38e9_0  
pytorch-cuda              11.7                 h778d358_5    pytorch
pytorch-mutex             1.0                        cuda    pytorch

PyTorch has reverted to using the CPU. Uninstalling opencv does not switch it back to CUDA.

Can anyone explain what is going on?

opencv shows version 4.6.0. I have tried `conda install -c fastchan opencv`, `conda install -c conda-forge opencv`, and `conda install opencv-python`.

How can I get PyTorch back to using CUDA, and how can opencv and PyTorch (CUDA) exist together?

Thanks for your help!

Hello again. I seem to have solved the problem, though I do not understand how.

conda uninstall opencv
pip install opencv-python
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia --force-reinstall

It may have something to do with using pip instead of conda. After the reinstall of PyTorch etc., `conda list` immediately showed the CUDA PyTorch build. A restart of Ubuntu was needed before `torch.cuda.is_available()` returned True.

Mysteries.


Here are my local setup steps. I hope they help if anyone faces the same issues.

I’m running Windows 11 with an RTX 3070 laptop, using WSL2. Steps as below:

  1. Follow NVIDIA GPU Accelerated Computing on WSL 2 — wsl-user-guide 12.2 documentation to install the latest Nvidia driver on Windows.
  2. Run `nvidia-smi` on Windows to check that the Nvidia driver works.
  3. Run `wsl --install` to install WSL2.
  4. Restart Windows.
  5. Run `nvidia-smi` again on Ubuntu to check the driver is visible from WSL.
  6. Follow https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_local to install the CUDA toolkit.
  7. Run `mamba install -c fastchan fastai` to install fastai on Ubuntu. I got an error downloading <https://conda.anaconda.org/fastchan/noarch/platformdirs-3.10.0-pyhd8ed1ab_0.conda>; this link solved the issue: python - Fastai installation via Conda fails with a 404 error - Stack Overflow
  8. Run `pip install jupyter notebook` to install Jupyter.
  9. Run lesson 2 in the book; I faced some issues:

For this code, the `! [ -e /content ]` check does not seem to work well:

! [ -e /content ] && pip install -Uqq fastbook

It throws the error below:

ModuleNotFoundError                       Traceback (most recent call last)
Cell In[2], line 3
      1 #hide
      2 get_ipython().system(' [ -e /content ] && pip install -Uqq fastbook')
----> 3 import fastbook
      4 fastbook.setup_book()

ModuleNotFoundError: No module named 'fastbook'

Switching to the following code fixed the issue:

import os

if not os.path.exists("/content"):
    print("The directory '/content' does not exist.")
    !pip install -Uqq fastbook
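For context, the original check works because Colab always has a /content directory, while local installs normally don't. A minimal plain-Python sketch of that detection (the helper name is mine, not from fastbook):

```python
import os

def running_in_colab(marker="/content"):
    """Heuristic used by the book's setup cell: Colab mounts its
    workspace at /content, so its presence signals a Colab runtime."""
    return os.path.exists(marker)

# On a local WSL2 install, /content normally doesn't exist
print(running_in_colab())
```

The fix above simply inverts this check so the pip install runs locally, where fastbook isn't preinstalled.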

For this code in lesson 2, `results.attrgot('contentUrl')` returns an array containing None values:

if not path.exists():
    path.mkdir()
    for o in bear_types:
        dest = (path/o)
        dest.mkdir(exist_ok=True)
        results = search_images_ddg(f'{o} bear')
        ...
        download_images(dest, urls=results.attrgot('contentUrl'))

It throws the exception below:

File ~/mambaforge/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
     55     return
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File ~/mambaforge/lib/python3.10/site-packages/fastcore/parallel.py:46, in _call(lock, pause, n, g, item)
     44     finally:
     45         if l: lock.release()
---> 46 return g(item)

File ~/mambaforge/lib/python3.10/site-packages/fastai/vision/utils.py:29, in _download_image_inner(dest, inp, timeout, preserve_filename)
     27 def _download_image_inner(dest, inp, timeout=4, preserve_filename=False):
     28     i,url = inp
---> 29     url = url.split("?")[0]
     30     url_path = Path(url)
     31     suffix = url_path.suffix if url_path.suffix else '.jpg'

AttributeError: 'NoneType' object has no attribute 'split'

Passing `results` directly as the urls, without extracting the attribute, fixed the issue.
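If you'd rather keep the attrgot call, filtering out the None entries also avoids the AttributeError. A minimal plain-Python sketch (no fastai required; the helper name is my own):

```python
def drop_none(urls):
    """Keep only entries that are actual URL strings, dropping the
    None values that attrgot yields for results without contentUrl."""
    return [u for u in urls if u is not None]

sample = ["https://example.com/a.jpg", None, "https://example.com/b.jpg"]
print(drop_none(sample))  # → ['https://example.com/a.jpg', 'https://example.com/b.jpg']
```

You could then call download_images(dest, urls=drop_none(results.attrgot('contentUrl'))).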


yiqi:

download_images(dest, urls=results)

Thank you!

This may be way more than anyone would want, but I built a Docker image based on the Jupyter studio image that provides a really quick way to get up and running to learn the material.

So far this is working like a champ on a fresh Ubuntu 22.04 install of Linux. I was able to validate that it was correctly accessing my GPU.

Note that the image is beefy, at about 11 GB.


Hello,
I’m trying to run fastai on my computer, because Kaggle and Google Colab are quite slow (e.g., finding the first learning rate in the first tutorial of the fastai docs takes around half an hour; training the model right after takes almost 2 hours!).
I have macOS 13.3.1 (so, after 12.3) and an Intel Iris Plus Graphics (1536 MB).
I must say I’m quite confused about a number of things, but I guess just running things locally without any GPU optimization would be fine.
But currently, as soon as I try to use a fastai API, I get an error related to macOS. For instance, if I write

dls = ImageDataLoaders.from_name_func(path, files, label_func, item_tfms=Resize(224))

I get:

RuntimeError: The MPS backend is supported on MacOS 12.3+.

I’ve tried setting device = torch.device('cpu'), but this doesn’t remove the error.
I was wondering if someone could help me get things working. In particular, a conda environment file would be great!
Best,
Pierre

Did you select GPU on Kaggle? That slow speed sounds like it was running CPU.


No, I had completely missed that! Thanks a lot Allen!

Still interested in how to make things run on my computer, though.

@przem8k wrote a guide for Apple install. Mac setup for fastai and pytorch with GPU support in pure pip (no conda) | pnote.eu


Thanks Allen. I followed the steps and got the following error message in your script:
“MPS not available because the current MacOS version is not 12.3+ and/or you do not have an MPS-enabled device on this machine.”

Since my OS is 13.3.1, I guess it’s because the Intel Iris Plus Graphics doesn’t support MPS. Is there any good fix for that (except buying another GPU 🙂)?

Unfortunately, I think your hardware isn’t compatible. Luckily, you can achieve a lot with online services (free and paid); there’s no need to rush to new hardware until that is viable for you.


Folks, I’m selling my A6000. Contact me if you are interested.

I tried a few of the solutions suggested in the forums, but I was unsuccessful in getting the notebook to work. I tried to break the code into smaller pieces to nail down what the problem was. Eventually, I was able to get the images saved from the DDG search by splitting each of the bear types into separate cells, as below:

grizzly_dest = Path('bears/grizzly')
results = search_images_ddg('grizzly bear')
download_images(grizzly_dest, urls=results)
grizzly_fns = get_image_files(grizzly_dest)
grizzly_fns

black_dest = Path('bears/black')
results = search_images_ddg('black bear')
download_images(black_dest, urls=results)
black_fns = get_image_files(black_dest)
black_fns

teddy_dest = Path('bears/teddy')
results = search_images_ddg('teddy bear')
download_images(teddy_dest, urls=results)
teddy_fns = get_image_files(teddy_dest)
teddy_fns

This worked for me.
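The three near-identical cells above can also be folded into a single loop. A sketch with the network calls stubbed out so it runs anywhere (in real use you'd pass search_images_ddg and download_images; the function name and parameters here are my own):

```python
from pathlib import Path

def fetch_bear_images(bear_types, base="bears", search=None, download=None):
    """Create one destination folder per bear type and download its
    search results. `search` and `download` are injected so the loop
    can be exercised without hitting DuckDuckGo."""
    dirs = []
    for o in bear_types:
        dest = Path(base) / o
        # In real use: dest.mkdir(parents=True, exist_ok=True)
        urls = search(f"{o} bear")
        download(dest, urls)
        dirs.append(dest)
    return dirs

# Dummy stand-ins: nothing is downloaded or written to disk
dirs = fetch_bear_images(
    ["grizzly", "black", "teddy"],
    search=lambda q: [f"https://example.com/{q}.jpg"],
    download=lambda dest, urls: None,
)
print(dirs)
```

Splitting into separate cells, as above, has the advantage that one failing search doesn't stop the others; the loop trades that for less repetition.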

I’m looking for someone to help me understand the basics of AI. I am involved with Bart and I’m trying to put together a sci-fi novel, but I don’t want to become a full-fledged coder myself. If anyone is interested, please give me a call back. Thanks, DiRT

Hello,

I have a Legion with an Nvidia GeForce RTX 3060. The processor is an AMD Ryzen 7 5800 with Radeon graphics, and I have 16 GB of RAM. I think the Nvidia GPU is only used for specific applications and not for the notebook or this forum page (for which I guess it uses the AMD Radeon), so it looks like it fulfills the criteria in @balnazzar's post, though I'm not sure about the RAM vs VRAM thing.

I’ve got the 2nd lesson notebook working in Jupyter remotely now, but it’s going very slowly when running learner methods and, checking Task Manager, it’s using only the CPU and no GPU. I guess something like WSL2 is needed to make it use the GPU?

(sorry for the double post; I couldn’t edit my last post for some reason)