How do you run a fastai model on CPU for predictions when it has been trained on GPU?
Posting this in case others want to try it; feedback is welcome if I am doing something wrong. It worked for me with a UNet model.
Training:
- Train the model on GPU using the `fastai` environment.
Prediction preparation on CPU:
- Activate the `fastai-cpu` environment.
- Find the `torch.cuda.is_available()` usages and disable them (see details below).
- Load the model you trained on GPU with `learn.load()`. No retraining is required.
- Run predictions on a few test images to verify it works. It should.
- Save the model; it is now ready for CPU predictions.
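As a hedged sketch of the loading step: the key detail when bringing GPU-trained weights onto a CPU-only machine is PyTorch's `map_location` argument to `torch.load`, which remaps any CUDA tensors in the checkpoint to CPU. The `nn.Linear` stand-in below is hypothetical; any model, including a UNet, loads the same way.

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in for the trained model; any nn.Module works.
model = nn.Linear(4, 2)

# Simulate a checkpoint that was written on the GPU training machine.
path = os.path.join(tempfile.mkdtemp(), "model.pth")
torch.save(model.state_dict(), path)

# map_location='cpu' remaps CUDA tensors in the checkpoint to CPU,
# so the file loads even on a machine without a GPU.
state = torch.load(path, map_location="cpu")
model.load_state_dict(state)

print(next(model.parameters()).device)  # -> cpu
```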
Disabling GPU usage

The problem is that there is no single flag you can set to force fastai to use the CPU; at least I could not find one, because in some places the code uses a `USE_GPU` flag and in others it calls `torch.cuda.is_available()` directly. I replaced all instances of `torch.cuda.is_available()` with a single `USE_GPU` flag in all relevant places (mainly `core`), then set `USE_GPU = False` to enforce CPU use.
To summarize:
- Find all `torch.cuda.is_available()` instances and replace them with `USE_GPU`.
- Set `USE_GPU` to `torch.cuda.is_available()` if you want to use the GPU, or to `False` if you want to use the CPU.
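A minimal, hypothetical sketch of this single-flag pattern (the `pick_device` helper is mine, not fastai's; in the real edit you would change fastai's core modules):

```python
import torch

# Single switch replacing scattered torch.cuda.is_available() calls.
# Set it to torch.cuda.is_available() to allow GPU use, or False to
# force CPU regardless of what hardware is present.
USE_GPU = False

def pick_device():
    """Return the torch.device the rest of the code should use."""
    return torch.device("cuda" if USE_GPU else "cpu")

print(pick_device())  # -> cpu
```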
To monitor GPU usage, you can run a real-time (once per second) check with:

```shell
watch -n 1 nvidia-smi
```
I just use `defaults.device = 'cpu'` to run on CPU.
Your tip about using the `nvidia-smi` tool to verify is a good one. I run it in a "following" format like:

```shell
nvidia-smi --query-gpu=timestamp,pstate,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 1
```
What about the performance? How many fps do you get?