Meet Ranger - RAdam + Lookahead optimizer

I set a seed and can get deterministic results between runs.

I’ve used fast.ai’s load: https://github.com/fastai/fastai/blob/master/fastai/basic_train.py#L262

which in turn uses PyTorch’s torch.load,

and PyTorch uses Python’s pickle: https://docs.python.org/3/library/pickle.html
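
A minimal sketch of that setup, for context (the seed_everything helper name and the checkpoint path are hypothetical, not from this thread):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 42):
    # Seed every RNG involved so runs are repeatable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # trade a little speed for determinism
    torch.backends.cudnn.benchmark = False

seed_everything(42)

# The load chain above bottoms out here: fastai's load -> torch.load -> pickle.
state = torch.load("model.pth", map_location="cpu")
```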

Oh, I thought you were talking about an alternative loading method. Since you pointed out the param duplication, I looked more closely at the PyTorch optimizer implementation and made a few changes to my implementation.

It should solve the duplication issue as well as improve support for the base optimizer’s methods (so that it’s a proper wrapper :sweat_smile:).
Let me know if the commit helps!
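
For readers following along, here is a rough sketch of the wrapper idea (not the actual commit; alpha and k use the usual Lookahead defaults): share the base optimizer’s param_groups instead of registering the parameters a second time, and delegate the other calls to it.

```python
class LookaheadSketch:
    """Illustration only: no parameter duplication, the base optimizer does the fast updates."""

    def __init__(self, base_optimizer, alpha=0.5, k=6):
        self.optimizer = base_optimizer
        self.alpha, self.k = alpha, k
        self.param_groups = base_optimizer.param_groups  # shared, not copied
        self.slow_weights = {}                           # one slow copy per parameter
        self.step_counter = 0

    def step(self, closure=None):
        loss = self.optimizer.step(closure)              # fast-weight update
        self.step_counter += 1
        if self.step_counter % self.k == 0:
            for group in self.param_groups:
                for p in group["params"]:
                    slow = self.slow_weights.setdefault(p, p.data.clone())
                    slow.add_(self.alpha * (p.data - slow))  # slow += alpha * (fast - slow)
                    p.data.copy_(slow)                       # sync fast weights to slow
        return loss

    def zero_grad(self):
        self.optimizer.zero_grad()
```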

Most of the Lookahead / Ranger implementations have issues with state dict save/load, and adding parameters after optimizer creation via add_param_group makes them crash.

I’ve gone through a few iterations, starting from https://github.com/alphadl/lookahead.pytorch (which is closer to correct than the lonePatient implementation).

The current state is here; I’ve tested resumes with different optimizers. There could still be issues, but I think it’s close :slight_smile:

EDIT: I also added support in this one for resuming a checkpoint with Lookahead(OptA) that was created with just OptA.
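
A usage sketch of the save/resume path being described (the model, file name, and import layout are placeholders; it assumes a Lookahead whose state_dict/load_state_dict round-trip the slow weights):

```python
import torch
from torch.optim import Adam
from lookahead import Lookahead   # file layout as in the linked alphadl repo

model = torch.nn.Linear(10, 2)
opt = Lookahead(Adam(model.parameters(), lr=1e-3))

# Save mid-training:
torch.save({"model": model.state_dict(), "opt": opt.state_dict()}, "ckpt.pth")

# Resume later. Per the EDIT above, optimizer state saved from plain Adam
# should also load into Lookahead(Adam):
ckpt = torch.load("ckpt.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
opt.load_state_dict(ckpt["opt"])
```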

Wow, that’s a terrific job, it works like a charm! I did a save/load and got the exact same validation loss.

The slow_weights are now stored in param_groups; I like that trick!

Yes, I figured it would give more coherence to the implementation, since we inherit from Optimizer, and it allows us to use inherited methods to reduce the code base :wink: The only method that isn’t an override is a private one that ensures smooth param_group addition for the slow weights.

Also, you might want to check out the discussion in this thread. I added a param synchronization method for external calls so that users can choose which weights (fast or slow) they want their model evaluated with. My current version is available here!

Cheers
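
A hedged sketch of what such a synchronization hook can look like (the function names are hypothetical; it assumes the slow weights live in the optimizer state under a "slow_buffer" key):

```python
def sync_with_slow_weights(opt):
    """Copy the slow weights into the model parameters before evaluation,
    returning a backup of the fast weights so training can resume."""
    backup = {}
    for group in opt.param_groups:
        for p in group["params"]:
            state = opt.state.get(p, {})
            if "slow_buffer" in state:
                backup[p] = p.data.clone()           # keep the fast weights
                p.data.copy_(state["slow_buffer"])   # evaluate with the slow ones
    return backup

def restore_fast_weights(backup):
    """Undo the swap after evaluation."""
    for p, data in backup.items():
        p.data.copy_(data)
```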

Your procedure for using Ranger is not working for me. I am getting TypeError: 'module' object is not callable when I run this line: optar = partial(Ranger).

Thanks for your work!
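
That error usually means the module was imported rather than the class inside it, so partial() receives a module object. A hedged sketch of the fix, assuming the common ranger.py file containing a Ranger class:

```python
from functools import partial

# import ranger              # passing the module to partial() gives
# optar = partial(ranger)    # "TypeError: 'module' object is not callable"

from ranger import Ranger    # import the optimizer class itself

optar = partial(Ranger)      # callable; pass it as opt_func, e.g. Learner(data, model, opt_func=optar)
```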

Using Ranger on my model, I did a save/load and started training again, and the training loss behaves in a completely different way after this save/load step (training speed decreases).

Does anyone have the same problem? I think the optimizer state is not being saved correctly. I get better performance if I specify with_opt=False when I load the model.
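
For anyone hitting the same thing, the workaround mentioned above looks roughly like this in fastai v1 (the checkpoint name is arbitrary):

```python
learn.save('stage-1')                  # saves model weights and optimizer state
learn.load('stage-1', with_opt=False)  # restores only the model weights; the saved optimizer state is ignored
```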

Thanks for the feedback! It sounds like we are dragging around duplicate slow weights… I will take a look and try to fix it!

I didn’t get a chance to test it, but I believe the fix is simply to leverage what @rwightman did and move the slow weights into a state param group (which, as usual, is brilliant coding by him).
That way they are reloaded properly, which should correct this issue.
I will try to do that tomorrow, but at least I believe I know the issue, and copying @rwightman’s excellent idea should fix it.
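
A small self-contained sketch of why that helps (not the actual Ranger code): anything stored in the optimizer’s per-parameter state is serialized and restored by the stock state_dict()/load_state_dict(), so the slow weights survive a save/load with no extra bookkeeping.

```python
import torch
from torch.optim import SGD

p = torch.nn.Parameter(torch.randn(3))
opt = SGD([p], lr=0.1)

opt.state[p]["slow_buffer"] = p.data.clone()  # keep the slow weights in per-param state

saved = opt.state_dict()      # slow_buffer is packed along with the built-in state
opt.load_state_dict(saved)    # ...and comes back on resume
print(opt.state[p]["slow_buffer"])
```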

I think the optimizer does not work with pretrained models when the model has different layer groups. For some reason, it stops after one epoch. Could you please look into this?

I’m testing an update now that should handle layer groups. What model are you using? I’ll test a run with the fix.
Thanks!

I am using an EfficientNetB3 from the Luke Melas repository with the following split:
learn.split( lambda m: (m._conv_head,) )

I’ve posted a new version of Ranger: it has improved support for layer groups and is a much tighter codebase all around (one-pass handling at the param level, no repeated loops, slow weights moved into the state dict, etc.).

Can you please see if that resolves your issue?

New version 9.3.19

Also, thanks to @rwightman, as I leveraged some of his code ideas around putting the slow weights into the state dictionary versus how lonePatient originally did it.
I’m working to integrate @fgfm’s idea regarding partial sync next.

Thanks! When I get the chance, I will try it out with layer groups and let you know!

Is there any kind of early consensus around how to handle .fit_one_cycle() in combination with RAdam or Ranger yet?

We have a new fit function, fit_fc. Grab the most recent version via a dev install of the library to use it :slight_smile: Otherwise, I believe there was a hack to allow one-cycle to run in a way similar to fit_fc.
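
Rough usage sketch (argument names from memory, so check them against your fastai version): fit_fc holds a flat learning rate for most of training and then cosine-anneals, which tends to pair better with RAdam/Ranger than one-cycle’s schedule.

```python
from functools import partial
from ranger import Ranger                                # assumed import, as earlier in the thread

learn = Learner(data, model, opt_func=partial(Ranger))   # Learner/data/model as in your notebook
learn.fit_fc(10, lr=1e-3, start_pct=0.72)                # flat LR for ~72% of training, then cosine decay
```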

That sounds super interesting, @muellerzr! Found it and will try it out for sure - thanks!

I am trying the new version right now with a pretrained ResNet50, but I think there may be a few bugs.

I first got a KeyError: 'k', so I changed group['k'] to self.k. I am unsure if that is the right fix. It then ran for k steps and raised KeyError: 'slow_buffer'. I am not sure what’s going on here.

Please let me know if you need more information and if you have a fix.

It seems the error actually occurs when the model is frozen. If the model is unfrozen, it works perfectly fine.

It would be amazing if it worked for frozen models too.
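
For what it’s worth, a hedged sketch of one way to avoid both KeyErrors (the class name and defaults are illustrative, not the actual Ranger source): put k and alpha into the Optimizer defaults so every param group carries them, including the groups fastai builds when freezing/unfreezing, create the slow buffer lazily, and skip parameters that have no gradient (i.e. frozen ones).

```python
import torch

class RangerSketch(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3, alpha=0.5, k=6, betas=(0.95, 0.999),
                 eps=1e-5, weight_decay=0):
        defaults = dict(lr=lr, alpha=alpha, k=k, betas=betas, eps=eps,
                        weight_decay=weight_decay)
        super().__init__(params, defaults)   # every param group gets k/alpha -> no KeyError: 'k'

    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:            # frozen layers have no grad: skip them
                    continue
                state = self.state[p]
                if "slow_buffer" not in state:               # lazy init -> no KeyError: 'slow_buffer'
                    state["slow_buffer"] = p.data.clone()
                    state["step"] = 0
                state["step"] += 1
                # ... RAdam update of p omitted in this sketch ...
                if state["step"] % group["k"] == 0:          # lookahead sync every k steps
                    slow = state["slow_buffer"]
                    slow.add_(group["alpha"] * (p.data - slow))
                    p.data.copy_(slow)
        return loss
```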
