I have tried to run the following steps several times:

1. Create a unet_learner
2. Run lr_find()

Each time I have gotten a different plot. I am setting the seed with the following function:
```python
import random

import numpy as np
import torch

def random_seed(seed_value, use_cuda):
    np.random.seed(seed_value)                     # numpy RNG
    torch.manual_seed(seed_value)                  # torch CPU RNG
    random.seed(seed_value)                        # Python RNG
    if use_cuda:
        torch.cuda.manual_seed(seed_value)
        torch.cuda.manual_seed_all(seed_value)     # all GPUs
        torch.backends.cudnn.deterministic = True  # needed for determinism
        torch.backends.cudnn.benchmark = False
```
In addition, I am also setting the seed in `RandomSplitter(valid_pct=0.1, seed=2020)`.
So I don't know why I am not getting reproducibility.
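One thing worth checking: seeding once at the start is not enough if you then draw random numbers twice (e.g. two lr_find runs), because the RNG state advances after the first run. Here is a minimal sketch using Python's stdlib `random` as a stand-in for the torch/numpy generators (the `mock_training_run` function is purely illustrative, not a fastai API):

```python
import random

def mock_training_run():
    # Stand-in for lr_find's mock training: it consumes random numbers.
    return [random.random() for _ in range(3)]

random.seed(2020)
run_a = mock_training_run()
run_b = mock_training_run()  # RNG state has advanced: different numbers
assert run_a != run_b

random.seed(2020)            # re-seed before the second run
run_c = mock_training_run()
assert run_a == run_c        # now the two runs match exactly
```

The same logic applies to the torch/numpy/CUDA generators: call your `random_seed` function immediately before each run you want to compare.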
Why would two runs give the same plot? The LR Finder runs a mock training that has some randomness, and the head of the model is randomly initialized. Unless you go out of your way to set the seeds before both runs, you won't get exactly the same graphs/suggestions.
Along with this, you should probably re-seed between each call to lr_find as well, to be safe. Also, please do not create duplicate topics. We're not ignoring it; we are trying to figure it out ourselves.
You are just posting two lines of code without explaining what you are doing, so we are replying as best we can. You said you are creating a unet_learner (that is where the randomness is introduced) and then running lr_find (which is random anyway).
The base algorithm used to train models is called SGD, and the S stands for stochastic. Because of that, you should never expect to always get exactly the same results.
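To make the stochastic part concrete, here is a toy sketch (plain Python, not fastai) where each SGD step fits y = 2x using a randomly sampled minibatch of size 1. The trajectory depends entirely on the RNG state, so a fixed seed reproduces it exactly:

```python
import random

data = [(x, 2.0 * x) for x in range(10)]  # fit y = w*x, true w = 2

def sgd(seed, steps=50, lr=0.01):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)         # stochastic minibatch of size 1
        grad = 2 * (w * x - y) * x      # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

assert sgd(0) == sgd(0)  # same seed -> identical trajectory
```

Different seeds sample different minibatch sequences, so in general they end at (slightly) different weights, which is exactly why two unseeded lr_find runs draw different curves.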
To get two identical runs, follow the lines of code I gave in the linked topic. They have been confirmed to work.
The exact method he mentions is in the GCP instructions; however, a pip install from the git repository will do the same thing, as I mentioned just a minute ago.