Hello, I was wondering if an LR finder for fine-tuning is in the works? To the best of my understanding, the way to use a pretrained network is:
- Freeze pretrained layers, and train added layers (LR finder works fine here)
- Unfreeze the pretrained layers and train the whole network with differential learning rates. Since the LR finder only produces a single LR, we can't use it to find the optimal learning rates for both the pretrained layers (which usually require a smaller LR) and the layers we have added.
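In the meantime, one common workaround is to run the finder once for the whole network and then spread that LR geometrically across layer groups, so earlier (pretrained) groups get proportionally smaller rates. A minimal pure-Python sketch of that spacing (the function name, `div` default, and geometric spacing are my assumptions, loosely modeled on passing `slice(max_lr/div, max_lr)` to a fit call):

```python
def discriminative_lrs(max_lr, n_groups, div=100):
    """Spread a single LR (e.g. the one from an LR finder) across layer groups.

    Earlier (pretrained) groups get smaller LRs; the last (newly added)
    group trains at max_lr. Spacing is geometric between max_lr/div and max_lr.

    Note: discriminative_lrs and div=100 are illustrative choices,
    not an existing library API.
    """
    if n_groups == 1:
        return [max_lr]
    min_lr = max_lr / div
    # Ratio between consecutive groups so that group 0 gets min_lr
    # and group n_groups-1 gets max_lr.
    ratio = (max_lr / min_lr) ** (1 / (n_groups - 1))
    return [min_lr * ratio ** i for i in range(n_groups)]


# Example: base LR of 1e-2 from the finder, spread over 3 layer groups.
print(discriminative_lrs(1e-2, 3))  # → [1e-4, 1e-3, 1e-2]
```

This still leaves the ratio between pretrained and new layers as a heuristic rather than something the finder discovers, which is exactly the gap a fine-tuning-aware LR finder would close.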
I'd be happy to make a PR, as the project I'm currently working on needs this.