Comparative processing speed

I have been working on Lesson 1 and noticed that when I ran the learning model it took quite some time in Colab. I posted on the forum, and it turned out I had the runtime set to None rather than GPU. But I wanted to run it on my personal laptops. I have two laptops, a ROG and a Razor, both with 8 cores. The ROG has a Radeon graphics card and the Razor has an Nvidia GeForce GTX 1060 Mobile. Running the Jupyter notebook on my ROG took about 10 minutes per epoch; on my Razor it took about 8.5 minutes. That seemed long for a GPU, since the course numbers were much faster.

So I tried to check the Nvidia card with nvidia-smi, and it said the driver was not the latest. I tried loading a new driver, and it really messed up my Ubuntu 19.10 display settings. After several reloads I found that I needed to take the bootloader out of Secure Boot (in the BIOS). Once I did that, nvidia-smi connected, and now the epochs take about 35 seconds each.

One caution: don’t try to update to Ubuntu 19.10 without re-enabling Secure Boot, because the install will lock up. I haven’t tried upgrading to 19.10 with Secure Boot enabled and then disabling it afterwards, because I think 19.10 and my Nvidia card may have a compatibility problem. I may try some other time; right now I am running Ubuntu 19.04.
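For anyone hitting the same problem: before training, it can help to confirm that both the driver and PyTorch actually see the GPU, since with Secure Boot enabled an unsigned Nvidia driver can silently fail to load and everything falls back to CPU. Here is a minimal sketch of such a check; the function name `gpu_status` is just illustrative, and the `torch` import is optional so it also runs on machines without PyTorch.

```python
import shutil
import subprocess

def gpu_status():
    """Report whether the NVIDIA driver and (optionally) PyTorch see the GPU."""
    status = {"nvidia_smi": False, "torch_cuda": None}
    # nvidia-smi only succeeds once the kernel driver is properly loaded;
    # a driver blocked by Secure Boot makes this call fail or go missing.
    if shutil.which("nvidia-smi"):
        try:
            subprocess.run(["nvidia-smi"], check=True, capture_output=True)
            status["nvidia_smi"] = True
        except subprocess.CalledProcessError:
            pass
    try:
        import torch  # optional: only if PyTorch is installed
        status["torch_cuda"] = torch.cuda.is_available()
    except ImportError:
        pass
    return status

print(gpu_status())
```

If `nvidia_smi` comes back False, fix the driver (or Secure Boot) before worrying about notebook settings; if it is True but `torch_cuda` is False, the problem is on the CUDA/PyTorch side instead.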

I’m not sure what your question is, but if you want to run locally on your own hardware, you can use Jupyter Notebook. If you want to use Google Colab with your own hardware, follow this guide: Link.

I didn’t really have a question; I was just sharing what I found about the speed difference between CPU and GPU and how I resolved the Nvidia GPU access issue on my computer. It did appear to be a known issue when I researched it, and a resolution was a bit difficult to find. Most of the recommendations involved reloading the Nvidia card driver (which caused me several problems); in my case it was the Secure Boot setting in the BIOS that prevented the Nvidia driver from being properly loaded.

I also found that changing the batch size doesn’t significantly change epoch time for the ResNet-50 model: bs=4 took 2:31, bs=8 took 2:22, bs=16 took 2:09, and bs=32 took 2:05. My GPU has 6 GB of video memory and it runs out of memory above bs=32. I also noted that you do need to make sure your GPU memory is cleared from the last model before running the next, different model. Again, this is not a question, just some information other readers might find interesting.
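On the memory-clearing point, the usual PyTorch pattern is to drop all references to the old model and then release the cached GPU blocks. A minimal sketch (the function name is my own; the `torch` import is wrapped so the snippet also runs on CPU-only machines):

```python
import gc

def release_gpu_memory():
    """Release Python garbage and PyTorch's cached GPU memory.

    Call this after deleting whatever holds the previous model
    (e.g. `del learn` in fastai) and before building the next one.
    """
    gc.collect()  # free unreferenced tensors first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached blocks back to the driver
    except ImportError:
        pass  # no PyTorch installed; nothing to release

# usage sketch:
# del learn              # drop the previous model/learner reference
# release_gpu_memory()   # nvidia-smi should now show the memory freed
```

Without the `del` first, the tensors are still referenced and `empty_cache()` cannot reclaim them, which is the usual cause of out-of-memory errors when switching models.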

Oh, my bad. The post actually is interesting; I didn’t know batch size affects epoch times so little. Cool.