NOTE: I am using resnext50 as my architecture and my batch size is 58.
learn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)
learn.fit(1e-2, 2) - I get an accuracy of 89%
learn.precompute = False
learn.fit(1e-2, 5, cycle_len=1) - I get an accuracy of just 90%
I then tried unfreezing the layers and training with differential learning rates to improve the accuracy:
learn.unfreeze()
lr=np.array([1e-4,1e-3,1e-2])
learn.fit(lr, 3, cycle_len=1)
When I run the above I get an out of memory error.
NOTE: Even if I replicate and run the exact code Jeremy provided, I get an out of memory error.
This has happened to me. I used a GTX 1060 6GB for the duration of the class, and it always hit the memory limit when training with differential learning rates. Consistently. The way to fix this is to use a smaller batch size: I usually divide the batch size by two until it stops running out of memory. Dogbreeds stabilized at about bs=32.
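For concreteness, here is a minimal sketch of what that looks like with the fastai 0.7 API, assuming the data object was originally built with ImageClassifierData.from_paths and that PATH, sz, and arch are defined as in the lesson notebooks (adjust to your own pipeline):

from fastai.conv_learner import *

bs = 32  # halved from 58 until training fits in 6GB; ~32 was stable for Dogbreeds
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=bs)
learn = ConvLearner.pretrained(arch, data, precompute=False, ps=0.5)
learn.unfreeze()
lr = np.array([1e-4, 1e-3, 1e-2])  # differential learning rates per layer group
learn.fit(lr, 3, cycle_len=1)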
I think the main cause of my out of memory error was unfreezing the layers and applying differential learning rates: once the whole body of the network is being trained, activations and gradients have to be stored for every layer, not just the small head. I have since stopped unfreezing the layers and just train with 1e-2, and I don't get the out of memory error any more.
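If you want to verify this on your own card, PyTorch's memory counters make the jump visible. A hedged sketch (torch.cuda.memory_allocated and torch.cuda.memory_cached exist in PyTorch 0.4; the names may differ in other versions):

import torch

# Tensor memory currently allocated on the GPU, in MB -- run this once
# before learn.unfreeze() and once after a training step to see the jump.
print(f"{torch.cuda.memory_allocated() / 1024**2:.0f} MB allocated")
# Memory held by PyTorch's caching allocator (not yet returned to the driver).
print(f"{torch.cuda.memory_cached() / 1024**2:.0f} MB cached")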