Train model on GPU and run it on CPU for predictions

How can I run a fastai model on CPU for predictions when it has been trained on a GPU?


Just in case others want to try it (and feedback is welcome if I am doing something wrong): this worked for me with a U-Net model.

Training:

  1. Train the model on the GPU using the fastai environment.

Prediction preparation on CPU:

  1. For CPU, activate the fastai-cpu environment.
  2. Find the torch.cuda.is_available() usages and disable them (details below).
  3. Load the model that you trained on the GPU (learn.load()). No retraining is required (a plain-PyTorch sketch of this step follows the list).
  4. Run predictions on a few test images to check that it works. It should :slight_smile:
  5. Save the model, and it is ready for CPU predictions.
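To make the loading step concrete: weights saved on a GPU machine contain CUDA tensors, so they have to be remapped to CPU storage at load time. learn.load() generally handles this for you, but here is a minimal plain-PyTorch sketch of what that remapping looks like (the file name and the stand-in model are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

# Stand-in model: replace with the same U-Net architecture you trained.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

# map_location moves the CUDA tensors in the checkpoint onto the CPU,
# which is what lets a GPU-trained checkpoint load on a CPU-only machine.
state_dict = torch.load('models/unet_gpu.pth', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
model.eval()  # switch to inference mode before predicting
```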

Disabling GPU usage.
The problem is that there is no single flag you can set to force fastai to use the CPU, at least I couldn’t find one: in some places the code uses the USE_GPU flag, and in others it calls torch.cuda.is_available(). I replaced all instances of torch.cuda.is_available() with the single USE_GPU flag in all relevant places (mainly core), then set USE_GPU = False to enforce using the CPU. (An alternative that avoids editing the library is sketched after the summary below.)

To summarize:

  1. Find all torch.cuda.is_available() instances and replace them with USE_GPU.
  2. Set USE_GPU to torch.cuda.is_available() if you want to use the GPU, or to False if you want to force the CPU.
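If you would rather not edit the library source, one alternative (not from the post above, just a common trick) is to hide the GPU from the CUDA runtime before torch and fastai are imported; torch.cuda.is_available() then reports False and the GPU checks fall back to CPU on their own:

```python
import os

# Must be set before torch is imported for it to take effect.
os.environ['CUDA_VISIBLE_DEVICES'] = ''

import torch

print(torch.cuda.is_available())  # False -> code guarded by this check uses the CPU
```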

To monitor GPU usage, you can run a real-time (once-per-second) check with:
watch -n 1 nvidia-smi


I just use defaults.device = 'cpu' to run on CPU.
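For completeness, a small sketch of that approach with fastai v1; the paths are illustrative, and it assumes learn.export() was run on the GPU machine to produce export.pkl:

```python
import torch
from fastai.vision import load_learner, open_image, defaults

# Tell fastai v1 to put models and batches on the CPU.
defaults.device = torch.device('cpu')

# Reads the export.pkl created by learn.export() after GPU training.
learn = load_learner('path/to/export_dir')

img = open_image('path/to/test_image.png')
prediction = learn.predict(img)
```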

Your tip about using the nvidia-smi tool to verify is a good one. I run it in a “following” (looping) format, like:

nvidia-smi --query-gpu=timestamp,pstate,temperature.gpu,utilization.gpu,utilization.memory,memory.total,memory.free,memory.used --format=csv -l 1

What about the performance? How many fps do you get?