I’ve just started experimenting with the V1 library, and I’m finding it much slower to train an image classifier than it was with the V0.7 codebase.
I have a dataset of 2,000 images for binary classification. I previously trained a model using the old V0.7 fast.ai code (a ConvLearner pretrained with ResNet34) and it's very fast: one epoch takes a couple of seconds.
Now I'm using the same dataset in V1.0, again with ResNet34 via the cnn_learner function, and one training epoch is taking about 1 minute 30 seconds.
Example V0.7 Code
```python
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz),
                                      bs=BATCH_SIZE, trn_name='train',
                                      val_name='valid', num_workers=0)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(LR, NUM_EPOCHS)
```
Example V1.0 Code
```python
data = ImageDataBunch.from_folder(IMG_PATH, size=IMG_SIZE,
                                  bs=BATCH_SIZE).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit(NUM_EPOCHS, lr=LR)
```
Some additional information:
- Both are definitely running on my GPU
- Maybe this has something to do with the transformations or with precompute? I haven't added either to my V1.0 code yet.
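For what it's worth, my understanding of what `precompute=True` did in V0.7, sketched in plain Python below (this is not the fastai API; `expensive_backbone`, `precompute_features`, and `train_head` are made-up names for illustration): the frozen backbone's activations are computed once for the whole dataset, so each subsequent epoch only runs the small classifier head over the cache instead of the full network.

```python
def expensive_backbone(image):
    # Stand-in for a frozen ResNet34 forward pass (the costly part).
    return [px * 0.5 for px in image]

def precompute_features(dataset):
    # One-time pass over the data; after this, per-epoch cost is the head only.
    return [expensive_backbone(img) for img in dataset]

def train_head(features, labels, epochs):
    # Cheap per-epoch loop: only the classifier head sees the cached features.
    calls = 0
    for _ in range(epochs):
        for feat, y in zip(features, labels):
            calls += 1  # one cheap head update per cached example
    return calls

dataset = [[1.0, 2.0], [3.0, 4.0]]
labels = [0, 1]
feats = precompute_features(dataset)            # backbone runs once per image
head_calls = train_head(feats, labels, epochs=5)  # epochs never touch the backbone
```

If V1.0 is re-running the full ResNet34 forward (and the image transforms) every epoch, that alone could explain the seconds-versus-minutes gap I'm seeing.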
Thank you for any help.