Not sure what I'm doing wrong. I'm currently training my model for 5 epochs, with each epoch taking about 30 minutes, which is far longer than what I get with the 100k dataset.
Each epoch processes 312501 elements even though my DataLoader's batch size is set to 64. One odd thing I've noticed: although the batch size is set to 64, when I call dls.show_batch()
I only get 10 entries. I believe the reason is that my model is performing plain (non-stochastic) gradient descent. Any idea how I can change that?
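In case it helps, here's a minimal sketch of the arithmetic I'm assuming (plain Python, no fastai; the num_batches helper is hypothetical, just for illustration). If the 312501 shown per epoch is actually the number of batches rather than individual elements, then at batch size 64 an epoch would cover roughly 20M samples:

```python
import math

def num_batches(n_samples, bs, drop_last=False):
    """Iterations a DataLoader performs per epoch (hypothetical helper)."""
    return n_samples // bs if drop_last else math.ceil(n_samples / bs)

# If the 312501 shown per epoch is a batch count (as in a training
# progress bar), batch size 64 implies roughly 20M samples per epoch:
print(num_batches(20_000_064, 64))  # 312501
print(312501 * 64)                  # 20000064 samples covered
```

If instead 312501 really is the element count, that would suggest the loader is yielding one item at a time. Also, if I understand fastai's API correctly, show_batch caps how many items it displays via its max_n argument, so the 10 entries I see may not reflect the true batch size.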
Thanks in advance.