CuDNN Error

(Andrea de Luca) #1

I encountered the error shown below (CUDNN_STATUS_EXECUTION_FAILED) a couple of times, on two different systems.

Both times, that error produced the following consequences:

  • Occupied CUDA memory was not released following a kernel restart.

  • The GPU remained under some load, with no other process using it (in both systems, it is not even connected to a monitor).

  • The notebook server refused to exit on Ctrl-C.

  • Once I manually killed the notebook server, the system hung completely.

Any idea what could have caused that error? Note that the epoch had almost completed, and memory occupation rose to 80% but remained stable until the error.

Something similar was already asked (Cudnn_status_execution_failed), but no one answered. Let me try again.
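
For the record, here is a minimal sketch of how one can check whether the memory is being held by PyTorch's caching allocator or by an orphaned process (the torch.cuda introspection calls below are from recent PyTorch releases and may be missing from old builds such as 0.3.1):

```python
import subprocess
import torch

# Sketch: inspect GPU memory state after the failure.
# NOTE: memory_allocated()/memory_reserved() exist in recent PyTorch releases;
# very old builds (e.g. 0.3.1) may not provide them.
print("allocated by live tensors:", torch.cuda.memory_allocated())
print("reserved by the allocator:", torch.cuda.memory_reserved())

# Release cached blocks back to the driver. If nvidia-smi still shows the
# memory as occupied after this, it is held by a (possibly zombie) process,
# not by PyTorch's allocator, and that process has to be killed manually.
torch.cuda.empty_cache()

# List the processes that currently hold GPU memory (needs nvidia-smi on PATH).
print(subprocess.check_output(
    ["nvidia-smi", "--query-compute-apps=pid,used_memory", "--format=csv"]
).decode())
```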


#2

What version of PyTorch?
You can always ask on the PyTorch forum; I think they would be more responsive.
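
In the meantime, a quick way to report the versions that matter here (the torch.version.cuda attribute may not exist on very old builds):

```python
import torch

# Report the PyTorch / CUDA / cuDNN versions actually in use.
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)              # may be absent on very old builds
print("cuDNN  :", torch.backends.cudnn.version())
print("cuDNN enabled:", torch.backends.cudnn.enabled)
```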


(Andrea de Luca) #3

Thank you for your reply. It's the version that comes with fastai, i.e. 0.3.1.

I'll ask on the PyTorch forums too, but I was interested in knowing whether some other fastai user has encountered it.


(Junlin) #4

Hi balnazzar. Did you solve the problem? I've encountered the same problem again and again, in different situations.


(Andrea de Luca) #5

Frankly, no. It seems to have been solved by newer versions of cuDNN and CUDA.
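
If upgrading is not an option, one workaround worth trying (not something I verified on these systems) is to disable cuDNN and see whether the error disappears; it is slower, but it tells you whether the cuDNN kernels are at fault:

```python
import torch

# Workaround sketch: run the failing training loop without cuDNN.
# Slower, but if the CUDNN_STATUS_EXECUTION_FAILED error goes away it points
# at the cuDNN / CUDA installation rather than at the model code.
torch.backends.cudnn.enabled = False
```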


(Junlin) #6

Thank you for your reply, balnazzar.

I've been using fast.ai v0.7 following this guide, and the v0.7 library seems to use older versions of CUDA and cuDNN. So you just upgraded CUDA and cuDNN and the problem went away?


(Andrea de Luca) #7

Yes, BUT I changed hardware too. Same GPUs, but newer host.


(Junlin) #8

I see. Thank you!


#9

On 2020/1/1, I just encountered this issue; screenshot below.

[screenshot of the error]

CUDA 9.0
