First code example runs for a long time

I am running the first example (Chapter 1) from the new deep learning book. "fine_tune" should take a few minutes, but it has been running for more than 30 minutes on my MacBook Pro with the following configuration: 2.3GHz, 8-core, 16GB 2400 MHz DDR4 RAM, Radeon Pro 560X 4GB, Intel UHD Graphics 630 1536 MB.

from fastai.vision.all import *
path = untar_data(URLs.PETS)/'images'

def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

Hey Subrata!

As mentioned in the book, the lesson, and the documentation, models should be trained on a GPU if you want them to finish in a reasonable amount of time. Running on the CPU, as you are doing, is roughly 10-100x slower for most workloads and isn't really feasible for most models. Note that the Radeon and Intel graphics in your MacBook won't help here, since fastai (via PyTorch) uses CUDA, which requires an NVIDIA GPU. You should train on a machine with an NVIDIA GPU; there are many options such as Google Colab. Please look at the documentation or search the forums if you would like to understand the different options better.
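A quick way to confirm whether training will actually use a GPU is to check from Python before calling `fine_tune`. This is a minimal sketch using plain PyTorch (which fastai is built on); on a machine without an NVIDIA GPU it will report `False` and fastai will silently fall back to the CPU:

```python
import torch

# fastai trains on the default PyTorch device; without an NVIDIA GPU
# this returns False and everything runs on the (much slower) CPU
print(torch.cuda.is_available())

# if a GPU is present, this shows which one PyTorch will use
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```

On Colab, make sure a GPU runtime is selected (Runtime > Change runtime type), otherwise this check will also return `False` there.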
