Platform: Local Server - Ubuntu

I am having issues that I can't pinpoint with my GTX 1080 Ti when running 10_nlp.ipynb. First, all of its memory was grabbed and the next call to it failed because no memory was available. Then Sylvain mentioned .to_fp16(), which I removed, and now none of the notebooks seem to activate the GPU; it's as if it isn't there, even after many redeployments of the latest core (two each day with the pip dev install). While writing this I thought I should go back to basics, so I shall reboot and see if that changes anything.
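(For anyone who hits the same "GPU seems to have vanished" state, a quick sanity check from inside Python can confirm whether CUDA still sees the card before resorting to a reboot. These are plain PyTorch calls, nothing fastai-specific:)

```python
import torch

# False here means PyTorch can't reach the CUDA driver at all
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Name and total memory of the first visible GPU
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_properties(0).total_memory / 1024**2, "MiB total")
```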

OK, after the reboot the GTX 1080 Ti is being activated, but now I am back to CUDA out of memory at the learn.fit_one_cycle(1, 2e-2) cell in the "Fine-tuning the language model" section of 10_nlp.ipynb again.
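(In case it helps anyone else reading this: the .to_fp16() Sylvain mentioned attaches mixed precision to the learner, which roughly halves the memory used by activations and gradients and is often enough to get past this OOM. A sketch of how it appears in the notebook, assuming dls_lm has already been built earlier in 10_nlp.ipynb:)

```python
from fastai.text.all import *

# Mixed precision (.to_fp16) roughly halves activation/gradient memory
learn = language_model_learner(
    dls_lm, AWD_LSTM, drop_mult=0.3,
    metrics=[accuracy, Perplexity()]
).to_fp16()

learn.fit_one_cycle(1, 2e-2)
```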

I'll try reducing the batch size to bs=6 and run it again after stopping the running process to clear the GPU memory; see the sketch below.
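(For reference, the batch size is set when the language-model DataLoaders are built, and within a live kernel you can release a dead learner's cached memory without a full restart. A minimal sketch assuming the IMDB setup from the notebook, i.e. path and get_imdb as defined there:)

```python
import gc
import torch
from fastai.text.all import *

# Drop the old learner and hand PyTorch's cached blocks back to the driver,
# so nvidia-smi shows the memory as freed
del learn
gc.collect()
torch.cuda.empty_cache()

# Rebuild the DataLoaders with a smaller batch size
dls_lm = DataBlock(
    blocks=TextBlock.from_folder(path, is_lm=True),
    get_items=get_imdb, splitter=RandomSplitter(0.1)
).dataloaders(path, path=path, bs=6, seq_len=80)
```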

OK, that works so far, using 1691MiB as I watch nvidia-smi every second.
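(watch -n 1 nvidia-smi is handy for this; the same numbers can also be pulled from inside the notebook if you'd rather log them per batch. Again these are standard PyTorch calls, not fastai-specific:)

```python
import torch

# Memory actually held by live tensors vs. memory reserved by
# PyTorch's caching allocator (the figure nvidia-smi reports)
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.0f} MiB")
```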

Changed the batch size to bs=32 (around 3400MiB), then to bs=64 (7141MiB on the 1080 Ti, about 30 mins per cycle), whereas bs=6 took 1hr 20m per cycle.
