Hi muellerzr, hope all is well!
I built an image classifier model on Colab about five minutes ago, using the starter code for render.com as my baseline.
When I try to deploy the app locally I get the following error:
```
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app/server.py", line 52, in <module>
    learn = loop.run_until_complete(asyncio.gather(*tasks))
  File "/opt/anaconda3/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
  File "app/server.py", line 45, in setup_learner

This model was trained with an old version of fastai and will not work in a CPU environment.
Please update the fastai library in your training environment and export your model again.
```
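For reference, here is a minimal sketch of the workaround the first error message itself suggests: passing `map_location=torch.device('cpu')` to `torch.load` so CUDA storages are remapped onto the CPU. The file name `model.pth` is just a placeholder (the starter repo may use a different path), and the tiny saved model only simulates a checkpoint exported on a GPU machine:

```python
import torch

# Simulate a checkpoint that was saved elsewhere (e.g. on a GPU Colab instance).
# 'model.pth' is a placeholder path, not the starter repo's actual file name.
torch.save(torch.nn.Linear(4, 2).state_dict(), 'model.pth')

# map_location=torch.device('cpu') remaps any CUDA storages onto the CPU,
# which is what a CPU-only deployment machine needs.
state = torch.load('model.pth', map_location=torch.device('cpu'))

# Every loaded tensor now lives on the CPU.
print(all(t.device.type == 'cpu' for t in state.values()))
```

I'm not sure whether this is enough on its own, or whether the model really has to be re-exported with a newer fastai as the second message says.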
I thought I saw a link where you had created a starter repo for fastai2, but I haven't been able to find it again.
My local machine does not have a GPU, but I have at least 70 different classifiers that run with no problems using fastai1-v3 on the same machine.
What do I need to change or configure to resolve this error?
Many thanks for your help mrfabulous1