TPU vs GPU: my observation, and special code for TPU (?)

Hi,
I am using Colab Pro for creating models. After working through Lesson_02, I tried the same code with my own dataset, which contains 6 classes of objects and around 4000 images in total. I ran it on GPU and TPU separately and got the observations attached below.

Somebody in an older thread here said that we need special code to run our training on a TPU. If that is the case, can somebody please help me with that?

1. With TPU
[screenshot of the training run on TPU]

2. With GPU :innocent:
[screenshot of the training run on GPU]

As far as I understand, out-of-the-box PyTorch doesn’t like TPUs. But you can make it work quite easily, as this notebook shows.
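Roughly what that boils down to, as a minimal single-core sketch (it assumes torch_xla is already installed on the Colab TPU runtime; the install cell itself changes over time, so copy the current one from the pytorch/xla Colab notebooks, and the tiny model and fake data here are just placeholders):

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()             # one TPU core, used like torch.device('cuda')

model = nn.Linear(10, 2).to(device)  # toy model standing in for a real learner
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(5):
    xb = torch.randn(64, 10, device=device)         # fake batch
    yb = torch.randint(0, 2, (64,), device=device)  # fake labels
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    xm.optimizer_step(opt, barrier=True)  # instead of opt.step(); flushes the XLA graph
```

The only real differences from a CUDA training loop are `xm.xla_device()` as the device and `xm.optimizer_step()`, which makes XLA actually compile and run the accumulated graph each step.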


Hi @Marceline, thanks for the link to the notebook.
However, in my runs the TPU was actually the one that showed the better performance with respect to time. (?)

Kinda like @Marceline said, fastai 1 and (more specifically) the version of pytorch it used did not “do” TPUs. I’m not aware of that being different now.

  1. It would have made sense to me that you were seeing that behaviour because TPU instances on Colab were coming with pytorch/xla pre-installed. But looking at the GitHub history of pytorch/xla, the last changes to the Colab notebooks were 2 months ago, to make sure everything worked with their new installation method, so it would appear not.

  2. I’d expect a TPU to be a fair bit better than 2x faster (although iirc you can actually end up with worse performance on the TPU if your batch size is too small; there’s a rough multi-core sketch at the end of this post).

Not sure though. You still can’t get Colab Pro outside of the USA so I can’t play with it. #sulking
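To make the batch size point concrete, here’s a rough sketch of the multi-core setup the pytorch/xla Colab examples of that era used (all names, sizes and the toy dataset are illustrative, and fastai 1 won’t do any of this for you). Each of the 8 cores gets its own per-core batch, so the effective batch is 8x that, and if the per-core batch is tiny the per-step overhead swamps the matrix units:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp

PER_CORE_BS = 64  # effective batch = 64 * 8 cores = 512

def _run(index):  # index = per-core process id, unused here
    device = xm.xla_device()
    model = nn.Linear(10, 2).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # toy in-memory dataset standing in for a real DataLoader
    ds = TensorDataset(torch.randn(4096, 10), torch.randint(0, 2, (4096,)))
    sampler = DistributedSampler(ds, num_replicas=xm.xrt_world_size(),
                                 rank=xm.get_ordinal())
    loader = DataLoader(ds, batch_size=PER_CORE_BS, sampler=sampler)

    # MpDeviceLoader moves each batch onto this core's device for us
    for xb, yb in pl.MpDeviceLoader(loader, device):
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        xm.optimizer_step(opt)  # also all-reduces gradients across the 8 cores

if __name__ == '__main__':
    xmp.spawn(_run, nprocs=8, start_method='fork')
```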

Hi @joedockrill, sorry this reply is so late. Got busy.

I got your point. Of course, I am working with a dataset that is at least not small :wink:

Anyhow, FYI, Colab Pro is working even outside the USA. :blush:

Hi,

I did a search on TPU v3 vs the NVIDIA 32 GB V100 GPU. There are papers and blogs about it, but I did not find a simple answer to this question:

How many times faster is a TPU v3 than an NVIDIA 32 GB V100 GPU for training transformer models like BERT?

What do you think? Thanks.