CNN training significantly slower with Fastai V1.0 than V0.7

#1

I’ve just started experimenting with the V1 library, and I’m finding it much slower to train an image classifier than it was with the V0.7 codebase.

I have a dataset of 2000 images that I’m using for binary classification. I previously trained a model on it with the old V0.7 fast.ai code (a ConvLearner pretrained with ResNet34) and it’s very fast: one epoch takes a couple of seconds.

Now I’m using the same dataset in V1.0, this time with the cnn_learner function and ResNet34, and one training epoch is taking ~1 minute 30 seconds.

Example V0.7 Code
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=BATCH_SIZE,
                                      trn_name='train', val_name='valid', num_workers=0)

learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(LR, NUM_EPOCHS)

Example V1.0 Code
data = ImageDataBunch.from_folder(IMG_PATH, size=IMG_SIZE,
                                  bs=BATCH_SIZE).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit(NUM_EPOCHS, lr=LR)

Some additional information:

  1. Both are definitely running on my GPU.
  2. Maybe this has something to do with the transformations/precompute? I haven’t added any transforms to my V1.0 trial yet (a sketch of what that would look like is below this list).
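For reference, this is roughly what I’d expect adding the standard V1.0 transforms to look like (a sketch based on the docs, reusing the IMG_PATH / IMG_SIZE / BATCH_SIZE placeholders from above):

from fastai.vision import *

# get_transforms() returns the default (train, valid) augmentation lists:
# random flips, small rotations, zoom, lighting and warp changes.
tfms = get_transforms()

data = ImageDataBunch.from_folder(IMG_PATH, ds_tfms=tfms, size=IMG_SIZE,
                                  bs=BATCH_SIZE).normalize(imagenet_stats)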

Thank you for any help.


#2

There is no precompute in V1: it’s incompatible with data augmentation and it confused a lot of beginners (the difference between precompute and pretrained was often misunderstood). That’s why you’re seeing the difference in training times.
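If the precomputed activations were what made V0.7 so fast for you, the same idea can still be done by hand: freeze the pretrained body, run it once over your (un-augmented) images to cache the activations, then train only a small head on those cached features. Here’s a minimal sketch in plain PyTorch/torchvision rather than the fastai API; the helper names (precompute_features, fit_head) are just for illustration:

import torch
import torch.nn as nn
from torchvision import models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Pretrained body with the classification head removed -> 512-d features per image.
body = models.resnet34(pretrained=True)
body.fc = nn.Identity()
body.to(device).eval()

@torch.no_grad()
def precompute_features(dl):
    # Run the frozen body once over a DataLoader and cache its activations.
    feats, labels = [], []
    for xb, yb in dl:
        feats.append(body(xb.to(device)).cpu())
        labels.append(yb)
    return torch.cat(feats), torch.cat(labels)

def fit_head(head, feats, labels, epochs, lr, bs):
    # Train only the small head on the cached activations; each "epoch" is now
    # just a pass over tensors already in memory, so it takes seconds.
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for i in range(0, len(feats), bs):
            xb = feats[i:i + bs].to(device)
            yb = labels[i:i + bs].to(device)
            loss = loss_fn(head(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Usage (train_dl is an ordinary, un-augmented DataLoader over your images):
# feats, labels = precompute_features(train_dl)
# head = nn.Linear(512, 2).to(device)   # 2 outputs for binary classification
# fit_head(head, feats, labels, epochs=NUM_EPOCHS, lr=LR, bs=BATCH_SIZE)

The trade-off is the same one that got precompute removed: the cached activations never see augmented images, so this is only for quick iteration, not for your final model.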


#3

Good to know, thank you. I’m looking at a 20x slowdown with my application. Are there any workarounds you’d recommend so I can iterate more quickly?
