I think the defaults are just empirical best guesses. If you run learn.fine_tune?? in a Jupyter notebook, it will show you the source code, including the defaults and the logic.
If in doubt, run learn.lr_find first, set your lr accordingly, and experiment with different values.
I think “it will automatically change” does not mean the learning rate is automatically selected (as you noticed, it always defaults to 2e-3). Instead, it refers to how the learning rate changes inside fine_tune: it gets divided by two after unfreezing, and discriminative learning rates are then applied so the earliest layers train more slowly (the base lr divided by lr_mult).
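To make that concrete, here is a small sketch of the learning-rate arithmetic, assuming the fastai v2 defaults (base_lr=2e-3, lr_mult=100). This is plain Python mirroring the logic, not fastai code itself:

```python
# Sketch of the lr arithmetic inside fine_tune (assumed fastai v2 defaults:
# base_lr=2e-3, lr_mult=100). Plain Python for illustration only.

def fine_tune_lrs(base_lr=2e-3, lr_mult=100):
    """Return the lr for the frozen phase and the
    (earliest-layer lr, last-layer lr) range after unfreezing."""
    frozen_lr = base_lr                      # frozen phase uses base_lr as-is
    base_lr /= 2                             # halved before unfreezing
    unfrozen = (base_lr / lr_mult, base_lr)  # discriminative range, slice-style
    return frozen_lr, unfrozen

frozen, (first_layers, last_layers) = fine_tune_lrs()
print(frozen)        # 0.002
print(first_layers)  # 1e-05
print(last_layers)   # 0.001
```

So with the defaults, the last layers end up training at 1e-3 after unfreezing while the first layers train 100x slower, at 1e-5.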