Unfortunately I can’t give you a complete answer, because I’m encountering the same error locally on a fresh install running either CUDA 11.6 or CUDA 11.7 on a new Windows machine. I found a similar-looking issue raised on GitHub, which is now closed. The fix is not entirely satisfying but might help you move forward.
The issue seems to be related to printing torch.Tensors, as in this Stack Overflow post.
For example, in the 00-is-it-a-bird notebook I needed to write print(f"Probability it's a bird: {probs[0].item():.4f}") to print the result correctly, instead of print(f"Probability it's a bird: {probs[0]:.4f}") as in the course repo.
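In other words, probs[0] is still a zero-dimensional tensor, and .item() pulls out a plain Python float before formatting. A minimal stand-alone sketch (probs here is just a stand-in for the tensor the notebook produces, not the real prediction):

```python
import torch

probs = torch.tensor([0.9998, 0.0002])  # stand-in for the tensor learn.predict returns

x = probs[0]           # still a zero-dimensional torch.Tensor
print(type(x))         # <class 'torch.Tensor'>
print(type(x.item()))  # <class 'float'>
print(f"Probability it's a bird: {x.item():.4f}")  # formats reliably as a plain float
```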
Also check your version of PyTorch. If it is 1.13, fastai isn’t compatible with that release yet, so you’ll need to downgrade to 1.12. That resolved the problem for me, anyway.
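A quick way to check which versions the notebook is actually picking up:

```python
import torch, fastai

print("torch: ", torch.__version__)
print("fastai:", fastai.__version__)
```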
I created a notebook that loads a fine-tuned model. It works completely fine in Colab (Colab link) but throws an error when I run it locally.
gradio app.py
AttributeError: Custom classes or functions exported with your `Learner` not available in namespace.\Re-declare/import before loading:
Can't get attribute 'Resampling' on <module 'PIL.Image' from '/usr/lib/python3/dist-packages/PIL/Image.py'>
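For what it’s worth, Image.Resampling only exists in newer Pillow releases (it was added in 9.1), so this looks like the system Pillow at /usr/lib/python3/dist-packages being older than the one Colab uses. A quick check, in case it helps:

```python
import PIL
from PIL import Image

print(PIL.__version__)               # Image.Resampling was added in Pillow 9.1
print(hasattr(Image, "Resampling"))  # False here would explain the AttributeError
```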
I’m trying to verify the bear images following the notebook line:
failed = verify_images(fns)
failed
And receive the following error:
C:\anaconda\envs\fastai\lib\site-packages\fastcore\parallel.py in parallel(f=<function verify_image>, items=[Path('bears/black/017e8ca5-161b-42db-b9ee-5f55a...teddy/ff82e6ea-518a-456a-9c08-c1da3e112270.jpg')], n_workers=64, total=None, progress=None, pause=0, method=None, threadpool=False, timeout=None, chunksize=1, *args=(), **kwargs={})
110 if method: kwpool['mp_context'] = get_context(method)
111 pool = ProcessPoolExecutor
--> 112 with pool(n_workers, pause=pause, **kwpool) as ex:
pool = <class 'fastcore.parallel.ProcessPoolExecutor'>
n_workers = 64
pause = 0
kwpool = {}
ex = undefined
113 r = ex.map(f,items, *args, timeout=timeout, chunksize=chunksize, **kwargs)
114 if progress and progress_bar:
C:\anaconda\envs\fastai\lib\site-packages\fastcore\parallel.py in __init__(self=<fastcore.parallel.ProcessPoolExecutor object>, max_workers=64, on_exc=<built-in function print>, pause=0, **kwargs={})
82 self.not_parallel = max_workers==0
83 if self.not_parallel: max_workers=1
---> 84 super().__init__(max_workers, **kwargs)
global super.__init__ = undefined
max_workers = 64
kwargs = {}
85
86 def map(self, f, items, *args, timeout=None, chunksize=1, **kwargs):
C:\anaconda\envs\fastai\lib\concurrent\futures\process.py in __init__(self=<fastcore.parallel.ProcessPoolExecutor object>, max_workers=64, mp_context=None, initializer=None, initargs=())
521 max_workers > _MAX_WINDOWS_WORKERS):
522 raise ValueError(
--> 523 f"max_workers must be <= {_MAX_WINDOWS_WORKERS}")
524
525 self._max_workers = max_workers
ValueError: max_workers must be <= 61
I understand it has to do with my CPU having 64 cores, and I’ve searched the forums and Stack Overflow for how to limit the number of workers, without luck.
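The closest thing to a workaround I can think of is skipping verify_images and calling parallel directly with an explicit worker count below the Windows limit of 61, which I believe is roughly what verify_images does internally, though I’m not sure it’s the intended fix:

```python
from fastai.vision.all import *  # brings in verify_image, parallel and L

# verify_images doesn't expose n_workers, so do the same filtering by hand with a
# worker count that stays under the Windows ProcessPoolExecutor limit of 61
failed = L(fns[i] for i, o in enumerate(parallel(verify_image, fns, n_workers=16)) if not o)
failed
```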
Any help would be greatly appreciated!
I have some questions from following along with the video for this lesson.
Jeremy shows installing Python and Mambaforge using a script from fastsetup, then shows installing fastai and nbdev in the same (base) environment. From my very small experience with conda environments, I thought it would be more usual to create a new environment and to install a Python version, fastai, jupyterlab, and nbdev in that environment. (Poking around the mamba documentation, I see that it says only mamba and conda should be installed in the base environment.) Will this work fine either way? Does there need to be a Python version in the base environment?
Also, Jeremy mentions installing fastai on a Mac (or Windows or Linux) for doing some things locally as a step in putting an app into production on Hugging Face Spaces. The fastai docs say (on the installation page) that it is not supported on Mac. Is it fine to use it on Mac to run an already trained model? Just not for training a model?
Could the steps shown in that part of the video (I think it’s the section that shows running locally and exporting a python script from a notebook to use in an app) also be done on Kaggle or Colab instead of locally?
Thanks in advance for any help in clearing up my confusion on these points.
After installing conda/mamba, it’s okay to install packages in the base environment. There’s really no need to create a separate environment just to install fastai, jupyterlab, and nbdev. And Python 3.10 will be installed in the base environment.
If you can run fastai on a Mac, you can deploy from it.
I haven’t done this part on Kaggle or Colab, but I think it is possible. You can give it a try.
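For what it’s worth, the export step itself is just nbdev, so I’d expect something like the following to work in a Kaggle or Colab notebook as well. This is only a rough sketch assuming nbdev 2.x; the exact arguments may differ by version, the cells you want in the script need the #|export marker, and the notebook needs a #|default_exp app cell:

```python
from nbdev.export import nb_export

# writes the module named by #|default_exp (e.g. app.py) into the current directory
nb_export('app.ipynb', '.')
```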
At various times Jeremy has specifically advised beginners to use only the base environment. My general sense is that multiple environments have potential complications that may distract and discourage you. Follow the YAGNI principle and delay using multiple environments until you can see it’s imperative for your particular situation. This course was my first exposure to conda, and I’ve had no issues so far using only the base environment.
Is it fine to use it on Mac to run an already trained model
Whether it’s pre-trained or not isn’t the consideration; it’s about access to GPU acceleration. I’ve seen some posts about M1 support “getting there”, but more posts from people having trouble. Hopefully someone else with a Mac can advise.
In any case, regardless of local platform, Jeremy recommends using cloud services, because a local system can have you focused more on sysadmin than on ML.
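If it helps, a quick way to check whether PyTorch can see the Apple-silicon GPU at all on a given Mac (this only tests PyTorch’s MPS backend, not fastai’s support for it):

```python
import torch

# MPS is PyTorch's backend for Apple-silicon GPUs (available from PyTorch 1.12 on)
print(torch.backends.mps.is_built())      # was this PyTorch build compiled with MPS support?
print(torch.backends.mps.is_available())  # can MPS actually be used on this machine?
```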
Could the steps shown in that part of the video (I think it’s the section that shows running locally and exporting a python script from a notebook to use in an app)
If you don’t know which part of the video, then I can’t guess!
Quick tip: on YouTube you can right-click the video to copy a link to a specific time, which makes it easier for readers to advise.
Hi,
I just tried to fork Jeremy’s repo and host it on my own GitHub Pages, but it didn’t work as expected.
Instead, I get a blank page and nothing happens after I hit the upload button. I tried swapping his path for my own path. Any help appreciated.