Fixed learning rate or annealing LR for publications

My team is doing research in medical image analysis for paper publication. I'm adopting some fast.ai techniques such as the LR finder and LR annealing. However, when it comes to comparing models (2D CNN, 3D CNN, etc.), I wonder whether it's OK to use a different LR for each model to get its best result?
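For context, this is roughly the setup I mean. A minimal sketch assuming a recent fastai v2; the dataset path, `resnet34` backbone, and epoch count are placeholders, not our actual pipeline:

```python
from fastai.vision.all import *

# Hypothetical dataset layout -- adapt the path and labels to your scans.
dls = ImageDataLoaders.from_folder(Path("data/scans"), valid_pct=0.2,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=accuracy)

# LR finder: a mock training run over exponentially increasing LRs,
# plotting loss vs. LR and returning a suggested value.
suggested = learn.lr_find()

# 1-cycle annealing: LR warms up toward lr_max, then anneals back down.
learn.fit_one_cycle(10, lr_max=suggested.valley)
```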

Some people argue that we have to use a fixed learning rate and WD for a scientific comparison. Does anyone have experience with this or any ideas?

To the best of my knowledge, choosing the best hyperparameters for each model is the correct way to compare them, so yes, you can use a different LR per model. However, it is good to include an ablation study to check whether the difference in performance comes from the LR schedule or from the model itself; see the sketch below.
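Something like this 2×2 grid, sketched in plain PyTorch. The data, models, and training loop here are toy stand-ins, not a real medical-imaging pipeline:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import OneCycleLR

# Toy stand-in data; swap in your real 2D/3D volumes and loaders.
X, y = torch.randn(256, 32), torch.randint(0, 2, (256,))

def make_model():
    # Stand-in for each architecture under comparison (2D CNN, 3D CNN, ...).
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

def train(model, schedule, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = (OneCycleLR(opt, max_lr=lr, total_steps=epochs)
             if schedule == "one_cycle" else None)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
        if sched is not None:
            sched.step()
    return loss.item()

# Run every (architecture, schedule) cell so the effect of the LR
# schedule can be separated from the effect of the model.
for arch in ("2d_cnn", "3d_cnn"):
    for schedule in ("fixed", "one_cycle"):
        final_loss = train(make_model(), schedule)
        print(f"{arch} / {schedule}: final loss {final_loss:.4f}")
```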

I think it depends on whether you're proposing an entirely new architecture or trying to demonstrate the effectiveness of 1-cycle in this domain.

The main reason to stick with a fixed learning rate is to show that your gains come from the architecture. But a hyperparameter search over LR and WD is a massive search space even before you start thinking about scheduling. I think as long as you're clear that you used this method and that it gives you a significant improvement, you can get away with the LR finder and 1-cycle and just cite the relevant papers. But I'm currently debating the same thing.
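To put a number on the search-space point (the grid values here are made up, just to show how quickly it grows):

```python
from itertools import product

# A coarse grid over LR and WD already multiplies out fast,
# before any schedule choices are on the table.
lrs = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2]
wds = [0.0, 1e-5, 1e-4, 1e-3, 1e-2]
configs = list(product(lrs, wds))
print(len(configs))  # 25 full training runs per architecture
```

The LR finder effectively collapses the LR axis of that grid into a single cheap run, which is most of its appeal here.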