I was experimenting with different learning rates for a tabular model and I noticed that my training runs were not repeatable.
For example, I would initialize the learner:
learn = tabular_learner(data, layers=[60], metrics=accuracy)
Then use fit_one_cycle for a given number of epochs and a certain learning rate.
If I repeat this process (initialize the learner and run fit_one_cycle) with the exact same inputs, I get different results. Sometimes the accuracy differs by as much as 10% from the previous run.
I assume there would be some variability because the weights are randomly initialized, although I haven't figured out where to set the seed for this. But should the deviation from one training run to the next be so high?
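For what it's worth, here is roughly what I understand seeding would have to look like, since fastai sits on top of PyTorch and NumPy there are several RNGs to pin down, not just one. This is only a sketch of the general approach (the helper name `seed_everything` and the default value are my own, not from any library):

```python
import random
import numpy as np

def seed_everything(seed=42):
    # Pin down Python's built-in RNG (used for things like shuffling)
    random.seed(seed)
    # Pin down NumPy's global RNG (used for data splits, augmentation, etc.)
    np.random.seed(seed)
    # With PyTorch installed (which fastai uses), I believe you would also need:
    # import torch
    # torch.manual_seed(seed)            # CPU weight initialization
    # torch.cuda.manual_seed_all(seed)   # GPU RNGs
    # torch.backends.cudnn.deterministic = True  # deterministic cuDNN kernels
```

Calling this before creating the learner should at least make the NumPy/Python side repeatable; whether the torch-side flags are enough to make GPU training fully deterministic is exactly what I'm unsure about.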