I use the fit_one_cycle method with a tabular learner.
I want to make the result of it reproducible. I already did:
import random
import numpy as np
import torch

def random_seed(seed_value, use_cuda):  # gleaned from multiple forum posts
    np.random.seed(seed_value)       # numpy RNG
    torch.manual_seed(seed_value)    # PyTorch CPU RNG
    random.seed(seed_value)          # Python stdlib RNG
    if use_cuda:
        torch.cuda.manual_seed_all(seed_value)  # GPU RNGs
However, I still get different results if I define the learner again and run fit_one_cycle again.
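One thing the snippet above does not cover: on the GPU, cuDNN autotunes and can pick non-deterministic kernels, so identical seeds can still give different results. A minimal sketch of the extra flags (standard PyTorch settings, not from the original post) looks like this; note also that you need to call your seeding function again immediately before re-creating the learner, since every fit advances the RNG state:

```python
import torch

# Force cuDNN to use deterministic kernels and disable autotuning,
# which otherwise selects algorithms non-deterministically per run.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```

These flags trade some speed for determinism, which is usually acceptable while debugging reproducibility.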
I know what you are trying to do, though I do not know how to do it either. My approach has been to generate the weights and save them off, along with the specific inputs I want to make reproducible. It is definitely not as simple as we would like it to be, though. This would help me as well, if someone has found a way to do this…
Personally, though, I do per-run reproducibility. Inputs and weights are random between runs, but within a single run I make sure the same weights are loaded into different models (same architecture) and the same inputs are used across models. This has introduced enough randomness to surface some interesting failure cases that would otherwise have slipped past me (e.g., being off by one in my vocab length), while within each run I can still compare everything.
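The per-run setup above can be sketched with plain PyTorch. This is my reading of it, with a stand-in nn.Sequential in place of the tabular learner's model and made-up filenames: save one random init and one random batch at the start of the run, then load the same weights and inputs into every model instance you want to compare.

```python
import torch
import torch.nn as nn

def make_model():
    # Stand-in architecture; any constructor that builds the
    # same architecture each time works here.
    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# At the start of the run: save one random init and one random batch.
ref = make_model()
torch.save(ref.state_dict(), "weights.pt")
torch.save(torch.randn(16, 4), "inputs.pt")

# Later in the same run: load identical weights and inputs into
# two fresh models, so their outputs can be compared directly.
m1, m2 = make_model(), make_model()
m1.load_state_dict(torch.load("weights.pt"))
m2.load_state_dict(torch.load("weights.pt"))
x = torch.load("inputs.pt")

# Identical weights + identical inputs -> identical outputs on CPU.
assert torch.equal(m1(x), m2(x))
```

Within a run everything is comparable; across runs the init and batch are fresh, which is what shakes out the failure cases.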
My understanding is that since any new layer is initially filled with random weights, you can never achieve exactly the same results every time; but if we've gotten our hyperparameters right, it shouldn't matter, as we'll get almost the same results every time.
If it's wildly different each time, then there's an element of luck involving the initial weights, which points back to the hyperparameters not being optimal.
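The "random init unless you seed" point is easy to see in isolation. A small sketch (plain PyTorch, nothing fastai-specific): two freshly constructed layers get different weights, but seeding the RNG identically before each construction makes the inits match exactly.

```python
import torch
import torch.nn as nn

# Two fresh layers draw different random weights...
a = nn.Linear(10, 10)
b = nn.Linear(10, 10)
print(torch.equal(a.weight, b.weight))  # False (with overwhelming probability)

# ...unless the RNG is reset to the same seed before each init.
torch.manual_seed(0)
c = nn.Linear(10, 10)
torch.manual_seed(0)
d = nn.Linear(10, 10)
print(torch.equal(c.weight, d.weight))  # True
```

This is also why re-seeding has to happen right before the learner is re-created, not just once at the top of the notebook.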