GPU Optimizations Central

Awesome. Thanks @stas (also for ipyexperiments)!

What would be a good workaround (current version of fastai) to avoid issues like kernel death or buffer truncation while running a language model learner training (n epochs)? The model uses 28m tokens and 30k vocab. I am using @piotr.czapla 's ULMFiT work but the bottleneck is in the fit_one_cycle. Since the first few epochs are completed successfully, should gpu memory be reclaimed between epochs? I am not that expert in coding.

I’m still working through the vision class and have been getting seriously sidetracked by the need to develop new tools to support this kind of investigation (see the new ipygpulogger ).

But @Kaspar is working on the text classes already, join his efforts here: Optimising class LanguageModelLoader()

Since the first few epochs are completed successfully, should gpu memory be reclaimed between epochs? I am not that expert in coding.

It should be the case. Do you observe the same “leak” if you run several 1-epoch fit_one_cycle calls? I know it’d impact the results, but at the moment we are just talking about memory usage. Use ipygpulogger to make it easy to trace the memory.

It’d be handy to instrument fit_one_cycle to report memory usage after each epoch.
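For instance, here is a minimal sketch (assuming a fastai learn object is already built) that runs several 1-epoch fits and prints GPU memory in between, so you can see whether usage keeps growing from run to run:

import gc, torch
from fastai.utils.mem import gpu_mem_get

for i in range(3):
    learn.fit_one_cycle(1)
    gc.collect()
    torch.cuda.empty_cache()  # drop cached blocks so the reading reflects real usage
    print(f"after run {i}: {gpu_mem_get().used} MB GPU RAM used")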

Have there been any significant changes in the core code running the model recently? When I ran this language model code about 20 days ago, a single epoch took a little over 2 hours and GPU utilization was at 96% most of the time. When I run the same code now, I can barely get 70% GPU utilization and a single epoch takes over 12 hours.

You’re making a good point that we should probably have some timing tests to catch any potential regressions in speed. Otherwise there is no telling.

It sounds like your transforms may be struggling to feed the GPU fast enough. Have you checked your CPU/RAM utilization and availability?
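One quick way to test that hypothesis (a rough sketch, assuming a fastai DataBunch named data) is to time a pass over the training dataloader alone, without running the model:

import time

t0 = time.time()
for xb, yb in data.train_dl:  # pull batches through the transforms only
    pass
print(f"one pass over train_dl: {time.time() - t0:.1f}s")

If that alone takes a large fraction of your epoch time, the CPU-side pipeline is the bottleneck.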

You guys should put your efforts together over at this thread: Optimising class LanguageModelLoader()

Here’s an experiment I ran on Colab (not sure if ipyexperiments takes into account how Colab allocates GPU RAM):

*** Experiment started with the Pytorch backend
Device: ID 0, Tesla K80 (11.2 GB RAM)

*** Current state:
RAM:   Used      Free     Total    Util
CPU:   1.7 GB   10.9 GB  12.7 GB  15.76% 
GPU: 327.0 MB   10.9 GB  11.2 GB   2.94% 
import pretrain_lm
expm = pretrain_lm.LMHyperParams(dataset_path='/content/data/ar28/', 
                                base_lm_path=None, bidir=True, 
                                qrnn=False, tokenizer='v', max_vocab=32000, 
                                emb_sz=400, nh=1150, nl=3, clip=0.20, 
                                bptt=64, lang='ar', name='Arabic')
learn = expm.train_lm(num_epochs=1, bs=64, drop_mult=0.3, lr=5e-3)
[crashes with cuda OOM error, ran successfully with bs = 32]
*** Experiment finished in 00:01:17 (elapsed wallclock time)

*** Local variables:
Deleted: expm, pretrain_lm

*** Experiment memory:
RAM:  Consumed     Reclaimed
CPU:   1.6 GB    0.0 B (  0.00%)
GPU:  10.1 GB   1.4 GB ( 14.33%)

*** Current state:
RAM:   Used      Free     Total    Util
CPU:   3.3 GB   10.4 GB  12.7 GB  32.02% 
GPU:   9.0 GB    2.2 GB  11.2 GB 410.81% 

The corpus size is around 28m tokens. Is it plausible that 10 GB were consumed and the cell still could not run? Or maybe the experiment is not reading Colab’s allocation policies correctly? What’s an approximate GPU memory cost for this process? I think 10 GB is too much.
Edit: I ran the same test on Kaggle and here are the results (CUDA OOM with the same parameters as above).

*** Experiment started with the Pytorch backend
Device: ID 0, Tesla K80 (11.2 GB RAM)

*** Current state:
RAM:   Used      Free     Total    Util
CPU:   1.8 GB   13.2 GB  15.7 GB  13.30% 
GPU: 327.0 MB   10.9 GB  11.2 GB   2.94% 
[OOM process]
*** Experiment finished in 00:02:10 (elapsed wallclock time)

*** Local variables:
Deleted: expm, pretrain_lm

*** Experiment memory:
RAM:  Consumed     Reclaimed
CPU:   3.2 GB    0.0 B (  0.00%)
GPU:  10.1 GB   1.4 GB ( 14.35%)

*** Current state:
RAM:   Used      Free     Total    Util
CPU:   4.9 GB   10.0 GB  15.7 GB  49.07% 
GPU:   9.0 GB    2.2 GB  11.2 GB 408.40%

I will only comment on ipyexperiments, and let others comment on the actual problem, since I haven’t delved into text yet.

So as you can see, comparing the reports on different systems, the reported numbers are correct. ipyexperiments doesn’t do anything special, it just measures the memory reported by the system before and after. I’m going to switch the general RAM calculation to use tracemalloc, since it overcomes the issue of python’s internal caching.
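For reference, a minimal sketch of what tracemalloc measures (it traces allocations made by python code itself, so the numbers aren't skewed by the interpreter caching previously freed memory):

import tracemalloc

tracemalloc.start()
x = [0] * 10**6  # ~8MB of list storage on a 64-bit build
current, peak = tracemalloc.get_traced_memory()
print(f"current: {current/2**20:.1f} MB, peak: {peak/2**20:.1f} MB")
tracemalloc.stop()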

And I see there is a bug in the Util calculation (hence the >400% GPU Util in the reports above); I will fix it shortly.

Also, you need to be aware of the peak memory; at the moment use ipygpulogger for that purpose. If the peak memory is higher than the final consumed memory, you may or may not have enough RAM to support the run even when the final numbers fit. I wonder whether ipyexperiments needs to report that too. Have a look at ipygpulogger and see its numbers.
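For a quick complementary check (a sketch, not a replacement for ipygpulogger), pytorch itself tracks the peak of its own allocations, which excludes the CUDA context and the cached-but-unused blocks:

import torch

# ... run the training cell first ...
print(int(torch.cuda.max_memory_allocated()/2**20), "MB peak allocated by pytorch tensors")
print(int(torch.cuda.max_memory_cached()/2**20), "MB peak held by the caching allocator")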

OK, looking closer: ipyexperiments is not deleting learn, because you must have had it defined before the experiment started. Unfortunately, ipyexperiments can only detect new variables, see: https://github.com/stas00/ipyexperiments#caveats
That’s why it’s not reclaiming the memory. If someone has ideas on how to overcome this problem, I’m all ears.

So, please try again, using unique variables for the experiment, or del learn before you start the experiment.

Actually, learn here is the return value of the function train_lm (https://github.com/n-waves/ulmfit-multilingual/blob/master/ulmfit/pretrain_lm.py#L149).

Actually, learn here is the return value of the function train_lm

And as I said earlier, you need to get it deleted, since it holds most of the occupied memory. So perhaps simply rename it to:

learn1 = expm.train_lm(num_epochs=1, bs=64, drop_mult=0.3, lr=5e-3)

so that ipyexperiments can delete it automatically, or delete it manually before the experiment is over.
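A manual cleanup sketch (using the learn1 name from the line above) would go in the last cell of the experiment:

import gc, torch

del learn1                # drop the reference to the learner
gc.collect()              # make sure python actually frees the objects
torch.cuda.empty_cache()  # return the cached GPU blocks to CUDA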

In the case reported above, the process dies (cuda OOM), so there may not be a learn object in this case.

I see, yes, then it probably never gets assigned, in which case the temporary object would get destroyed and gc.collect()ed via ipyexperiments. As I suggested, start using ipygpulogger, split each call into its own cell, and then you can easily trace the memory consumption of each invocation separately.

Usually, what works well is first creating the learn object and then doing the training in a separate step, so that if you hit OOM, deleting the learner does reclaim a lot of memory.
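A rough sketch of that pattern with the plain fastai API (assuming data and model objects, as in the MNIST example further down):

import gc, torch
from fastai.vision import *

# cell 1: build the learner
learn = create_cnn(data, model)

# cell 2: train; if this OOMs, the learner still exists and can be deleted
try:
    learn.fit_one_cycle(1)
except RuntimeError:          # a CUDA OOM surfaces as a RuntimeError
    del learn
    gc.collect()
    torch.cuda.empty_cache()  # give the freed blocks back to CUDA
    raise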

Here is a memory profiler that taps into each epoch, and can be fine-tuned to each separate stage.

import tracemalloc, threading, torch, time, pynvml
from fastai.utils.mem import *
from fastai.vision import *

if not torch.cuda.is_available(): raise Exception("a CUDA-capable GPU is required")

def preload_pytorch():
    # trigger CUDA context creation up front so its fixed overhead doesn't skew the measurements
    torch.ones((1, 1)).cuda()
    
def gpu_mem_get_used_no_cache():
    # empty the cache first so the reading reflects memory actually in use
    torch.cuda.empty_cache()
    return gpu_mem_get().used

def gpu_mem_used_get_fast(gpu_handle):
    # direct nvml query, in MBs; cheap enough to call from the sampling thread
    info = pynvml.nvmlDeviceGetMemoryInfo(gpu_handle)
    return int(info.used/2**20)

preload_pytorch()
pynvml.nvmlInit()

class PeakMemMetric(LearnerCallback):
    _order=-20 # Needs to run before the recorder

    def peak_monitor_start(self):
        self.peak_monitoring = True

        # start RAM tracing
        tracemalloc.start()

        # this thread samples GPU RAM usage for as long as the current epoch of the fit loop is running
        peak_monitor_thread = threading.Thread(target=self.peak_monitor_func)
        peak_monitor_thread.daemon = True
        peak_monitor_thread.start()
        
    def peak_monitor_stop(self):
        tracemalloc.stop()
        self.peak_monitoring = False
        
    def peak_monitor_func(self):
        self.gpu_mem_used_peak = -1

        gpu_id = torch.cuda.current_device()
        gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_id)

        while True:
            gpu_mem_used = gpu_mem_used_get_fast(gpu_handle)
            self.gpu_mem_used_peak = max(gpu_mem_used, self.gpu_mem_used_peak)
            if not self.peak_monitoring: break
            time.sleep(0.001) # 1msec

    def on_train_begin(self, **kwargs):
        self.learn.recorder.add_metric_names(['cpu used',  'peak', 'gpu used',  'peak'])
                    
    def on_epoch_begin(self, **kwargs):
        self.peak_monitor_start()
        self.gpu_before = gpu_mem_get_used_no_cache()

    def on_epoch_end(self, **kwargs):
        cpu_current, cpu_peak =  list(map(lambda x: int(x/2**20), tracemalloc.get_traced_memory()))
        gpu_current = gpu_mem_get_used_no_cache() - self.gpu_before
        gpu_peak    = self.gpu_mem_used_peak      - self.gpu_before
        self.peak_monitor_stop()
        # The numbers are deltas in MBs (beginning of the epoch and the end)
        self.learn.recorder.add_metrics([cpu_current, cpu_peak, gpu_current, gpu_peak])
# against MNIST dataset
# assuming you already have data and model objects
learn = create_cnn(data, model, metrics=[accuracy], callback_fns=PeakMemMetric)
learn.fit_one_cycle(3, max_lr=1e-2)

gives:

Total time: 00:59
epoch	train_loss valid_loss accuracy cpu used peak gpu used peak
    1	0.325806   0.070334   0.978800	      0   2       80  6220
    2	0.093147   0.038905   0.987700	      0   2        2   914
    3	0.047818   0.027617   0.990600	      0   2        0   912

The numbers are deltas in MBs between the beginning and the end of each epoch.

Note the huge surge of GPU RAM required during the first epoch.

The measurements may require more thinking, but it’s a good start.

@AbuFadl, perhaps this will be helpful for your OOM debugging.

Thanks to @sgugger for helping me figure out the custom metrics.

Really good initiative.
I am already dreaming about extra columns with a timing metric for each phase of an epoch :slight_smile:

That should be trivial, see:
https://docs.fast.ai/metrics.html#Creating-your-own-metric
Let me know if you need help with creating it.
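To illustrate, a rough sketch (not fastai's own implementation) of an extra per-epoch timing column, following the same callback pattern as PeakMemMetric above:

import time
from fastai.vision import *

class EpochTimeMetric(LearnerCallback):
    _order = -20  # needs to run before the recorder

    def on_train_begin(self, **kwargs):
        self.learn.recorder.add_metric_names(['epoch time'])

    def on_epoch_begin(self, **kwargs):
        self.start = time.time()

    def on_epoch_end(self, **kwargs):
        # elapsed wallclock seconds for this epoch, shown as an extra column
        self.learn.recorder.add_metrics([int(time.time() - self.start)])

# learn = create_cnn(data, model, metrics=[accuracy], callback_fns=EpochTimeMetric)

Timing the separate phases would need additional callback events (on_batch_begin, on_backward_begin, etc.), but the column-adding mechanics are the same.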

I may have found one source of the GPU RAM fragmentation problem, which affects many people because the fastai MOOC recommends making lots of checkpoints with learn.save()/load(). Each of them creates a hole in memory which is unlikely to be reused if the size of the saved image grows from checkpoint to checkpoint (if it stays the same, the same fragment should be reusable on subsequent loads).

At the moment I can’t see an easy way to remedy this on the fastai side, other than creating a checkpoint function that completely tears down the model from the learner object, removes it from CUDA and then reloads it.
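A hypothetical sketch of such a helper (not an existing fastai API; model_builder is an assumed callable that recreates the architecture):

import gc, torch

def checkpoint_and_reload(learn, name, model_builder):
    learn.save(name)                  # write the checkpoint to disk
    del learn.model                   # tear the model out of the learner
    gc.collect()
    torch.cuda.empty_cache()          # hand the freed blocks back to CUDA
    learn.model = model_builder().to(learn.data.device)  # rebuild the architecture
    learn.load(name)                  # reload the saved weights into the fresh model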

Hopefully a proper solution can be implemented on the pytorch side. I started a thread here:

I also looked at the new shiny load_learner that @sgugger recently created, which is super-handy! Perhaps a more elaborate version of load_learner could be created to perform checkpoints without creating fragmentation? But first let’s see what the pytorch devs have to suggest.

P.S. To understand how I found this issue, you can use ipyexperiments:

Here is the 126 MB GPU RAM overhead reported on resnet34/MNIST; it should be close to 0.

[screenshot]

This discussion helped me to understand that CUDA relocates free pages larger than 2MB and then re-uses them, so what I presumed to be a fragmentation scenario (the old model not being unloaded before the new one is loaded) is actually not the case.

It’s still a problem if you have only enough memory left to unload and then load the model, but in that case, once you load the model, you still have no memory left to do anything else, other than perhaps some extremely light inference.

So this basically was a false alarm: it’s not a problem that the save/load model cycle is inefficient memory-allocation-wise (a peak memory spike), since it all gets balanced out on subsequent calls to CUDA.
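A small sketch that demonstrates the reuse on an otherwise idle GPU: allocate a large tensor, free it, and allocate again; the second allocation is served from the block cached by pytorch's allocator rather than growing the footprint:

import torch

def show(tag):
    print(f"{tag}: {int(torch.cuda.memory_allocated()/2**20)} MB allocated, "
          f"{int(torch.cuda.memory_cached()/2**20)} MB held by the allocator")

x = torch.empty(512, 1024, 1024, device='cuda')  # ~2GB of float32
show("after alloc")
del x
show("after del")      # allocated drops, but the 2GB block stays cached
y = torch.empty(512, 1024, 1024, device='cuda')
show("after realloc")  # the cached block is reused, so the footprint doesn't grow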

I added a new section to the gpu tutorial, so please feel free to contribute:
https://docs.fast.ai/tutorial.resources.html#todohelp-wanted

In particular this one would be very interesting to explore:

  • torch.utils.checkpoint can be used to reduce GPU RAM usage by re-computing intermediate activations during the backward pass instead of storing them. Here is a good in-depth article explaining this feature in tensorflow. We need pytorch/fastai examples. Contributions are welcome.

This one would be of particular interest to someone who is struggling to fit their model into their GPU RAM, but doesn’t mind waiting a bit longer while the activations are recomputed during the backward pass.
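As a starting point, a minimal pytorch-only sketch (a toy stack of linear layers, not a fastai model) using checkpoint_sequential, which keeps activations only at segment boundaries and recomputes the rest during the backward pass:

import torch
from torch.utils.checkpoint import checkpoint_sequential

# a deep stack of layers whose intermediate activations would normally all be
# kept in GPU RAM for the backward pass
model = torch.nn.Sequential(
    *[torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU())
      for _ in range(20)]).cuda()

x = torch.randn(64, 1024, device='cuda', requires_grad=True)

# split the model into 4 segments: only the segment-boundary activations are
# stored, everything in between is recomputed during backward()
out = checkpoint_sequential(model, 4, x)
out.sum().backward()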

Thanks.

I am looking for ways of speeding up training a language model on my computer, and I came across a web site publishing benchmark results for different GPUs. It shows that a 1080 Ti card is as fast at training WordRNN as a Titan X Pascal and twice as fast as the RTX 2070 that I currently use. Can anybody confirm that the 1080 Ti is really as good for RNNs as this chart says?

Source of the image is http://timdettmers.com/2019/04/03/which-gpu-for-deep-learning/

Hi @stas
I am having the opposite issue: my GPUs are not fully utilized, and training is instead consuming a lot of CPU.

We have recently purchased a Lambda Quad, and when I run 4 different DL trainings, one on each GPU, GPU usage frequently bounces between 0 and about 15%, while all 24 CPUs on the system are constantly at 100%, and training in general is very slow (see the output below), indicating that the lack of CPUs is the bottleneck. Do you think this is normal?
Ideally we would expect to utilize more of our GPUs and not so many CPUs.

=== Software ===
python        : 3.7.1
fastai        : 1.0.51
fastprogress  : 0.1.21
torch         : 1.0.0
nvidia driver : 410.78
torch cuda    : 10.0.130 / is available
torch cudnn   : 7401 / is enabled

=== Hardware ===
nvidia gpus   : 4
torch devices : 4
  - gpu0      : 10989MB | GeForce RTX 2080 Ti
  - gpu1      : 10989MB | GeForce RTX 2080 Ti
  - gpu2      : 10989MB | GeForce RTX 2080 Ti
  - gpu3      : 10986MB | GeForce RTX 2080 Ti

=== Environment ===
platform      : Linux-4.15.0-47-generic-x86_64-with-debian-buster-sid
distro        : #50-Ubuntu SMP Wed Mar 13 10:44:52 UTC 2019
conda env     : base
python        : /home/.../anaconda3/bin/python
sys.path      : /home/.../anaconda3/bin
/home/.../anaconda3/lib/python37.zip
/home/.../anaconda3/lib/python3.7
/home/.../anaconda3/lib/python3.7/lib-dynload

/home/.../anaconda3/lib/python3.7/site-packages
/home/.../anaconda3/lib/python3.7/site-packages/IPython/extensions
/home/.../.ipython

Tue Apr 16 12:51:46 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  On   | 00000000:19:00.0 Off |                  N/A |
| 54%   57C    P2    99W / 250W |   6052MiB / 10989MiB |     16%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 208...  On   | 00000000:1A:00.0 Off |                  N/A |
| 29%   43C    P8     4W / 250W |     11MiB / 10989MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce RTX 208...  On   | 00000000:67:00.0 Off |                  N/A |
| 53%   58C    P2    79W / 250W |   3554MiB / 10989MiB |      9%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce RTX 208...  On   | 00000000:68:00.0  On |                  N/A |
| 33%   46C    P8    26W / 250W |    318MiB / 10986MiB |      7%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     10046      C   python                                      6041MiB |
|    2      9931      C   python                                      3543MiB |
|    3      1409      G   /usr/lib/xorg/Xorg                            18MiB |
|    3      1437      G   /usr/bin/gnome-shell                          51MiB |
|    3      4183      G   /usr/lib/xorg/Xorg                           118MiB |
|    3      4313      G   /usr/bin/gnome-shell                         117MiB |
|    3      7498      G   /usr/bin/nvidia-settings                       6MiB |
+-----------------------------------------------------------------------------+

However, when I run the same code and data on a Windows machine (specified below), GPU memory usage stays roughly constant at about 60% and training is faster.

=== Software === 
python        : 3.7.1
fastai        : 1.0.51.dev0
fastprogress  : 0.1.20
torch         : 1.0.1
torch cuda    : 10.0 / is available
torch cudnn   : 7401 / is enabled

=== Hardware === 
torch devices : 1
  - gpu0      : GeForce GTX 1080 with Max-Q Design

=== Environment === 
platform      : Windows-10-10.0.16299-SP0
conda env     : base
python        : C:\ProgramData\Anaconda3\python.exe
sys.path      : C:\Users\sshahinf\Desktop\Python_code
C:\ProgramData\Anaconda3\python37.zip
C:\ProgramData\Anaconda3\DLLs
C:\ProgramData\Anaconda3\lib
C:\ProgramData\Anaconda3

C:\ProgramData\Anaconda3\lib\site-packages
C:\ProgramData\Anaconda3\lib\site-packages\win32
C:\ProgramData\Anaconda3\lib\site-packages\win32\lib
C:\ProgramData\Anaconda3\lib\site-packages\Pythonwin
C:\ProgramData\Anaconda3\lib\site-packages\IPython\extensions
C:\Users\...\.ipython
no nvidia-smi is found

Any hints on what this might be would be appreciated!