with size = 224
learn = ConvLearner(data, models.resnet50, metrics=accuracy)
learn.fit_one_cycle(1)
Took 12 minutes. Not blazing fast, but a huge improvement over a CPU, and free.
epoch  train_loss  valid_loss  accuracy
1      0.045373    0.029649    0.989500
CPU times: user 10min 21s, sys: 1min 45s, total: 12min 7s
Wall time: 12min 8s
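For context, the learner above assumes a data object was built beforehand. A minimal sketch of that setup, using the standard fastai v1 ImageDataBunch API with a hypothetical folder layout:

from fastai.vision import *

path = 'data/images'   # hypothetical folder with train/ and valid/ subdirectories

# Build the DataBunch: default augmentations, 224x224 images, ImageNet normalization
data = (ImageDataBunch
        .from_folder(path, ds_tfms=get_transforms(), size=224, bs=64)
        .normalize(imagenet_stats))

With the data in place, ConvLearner attaches a fresh classification head to the pretrained resnet50 body, and the first fit_one_cycle(1) trains only that head.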
Then it took 14 more minutes to run:
learn.unfreeze()
learn.fit_one_cycle(1, slice(1e-5,3e-4), pct_start=0.05)
Not so fast.
epoch  train_loss  valid_loss  accuracy
1      0.026436    0.016208    0.993500
CPU times: user 12min 18s, sys: 1min 46s, total: 14min 5s
Wall time: 14min 9s
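The slice(1e-5, 3e-4) applies discriminative learning rates across the unfrozen network: the earliest layer group trains at 1e-5, the last group at 3e-4, and the groups in between are spread across that range, while pct_start=0.05 spends only 5% of the cycle ramping the learning rate up. A range like that is usually read off the learning-rate finder; the standard fastai v1 calls are roughly:

# Run the LR finder, then eyeball the loss-vs-learning-rate plot
# to choose the slice bounds used above.
learn.lr_find()
learn.recorder.plot()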
accuracy(*learn.TTA())
CPU times: user 4min 44s, sys: 8.48 s, total: 4min 53s
Wall time: 4min 53s
tensor(0.9965)
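TTA (test-time augmentation) averages predictions over augmented versions of each validation image, and learn.TTA() returns those averaged predictions together with the targets, so the call above unpacks to roughly:

# Equivalent to accuracy(*learn.TTA()): get averaged predictions
# and targets for the validation set, then score them.
preds, targs = learn.TTA()
print(accuracy(preds, targs))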
So roughly 30 minutes altogether for just two epochs plus TTA.
The same test on Paperspace’s basic P4000 GPU setup took a little over 8 minutes.