Is an unchanged loss function really guiding training effectively when F1, not accuracy, is the metric?

Let's say my organization needs a model with a good F1 score (or F-beta more generally) rather than good accuracy, and the output is binary categorical, so we use binary cross-entropy as the loss. Company policy says a balanced false-positive/false-negative rate is very important, while plain old accuracy matters much less.

Will the exact same loss function really work well for both metrics when the loss computation is completely unchanged? I am skeptical that the learner can find the best F1 score, since it only sees the loss computation, which knows nothing about the F1 metric I have chosen. F1 and accuracy will generally diverge, depending on my dataset's TP, TN, FP, and FN counts.
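
To make this concrete, here is a toy calculation with made-up confusion counts, showing how far accuracy and F1 can diverge on an imbalanced dataset:

```python
# Hypothetical confusion counts on an imbalanced test set (made-up numbers):
TP, FP, FN, TN = 10, 5, 40, 945

accuracy  = (TP + TN) / (TP + TN + FP + FN)         # 0.955 -- looks great
precision = TP / (TP + FP)                          # ~0.67
recall    = TP / (TP + FN)                          # 0.20
f1 = 2 * precision * recall / (precision + recall)  # ~0.31 -- a very different story
```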

**And what if my company later decides, for profit or for ethical reasons, that the right FP/FN weighting is no longer a simple 1:1 F1 balance but 1:3, i.e. a general F-beta?**

Does the loss I used when F1 was the target still let the learner find the optimum for my organization's custom F-beta, with no change to how the weights are learned? I don't see how it can.
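
For reference, F-beta is (1 + beta^2) * P * R / (beta^2 * P + R), so beta controls how heavily recall (missed positives) is weighted relative to precision. A quick check with scikit-learn on made-up predictions shows the same fixed outputs scoring very differently as beta changes, so the optimum the learner should aim for moves too:

```python
from sklearn.metrics import fbeta_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 0, 0, 0, 0]   # cautious model: no FPs, many FNs

print(fbeta_score(y_true, y_pred, beta=1))  # F1 = 0.40
print(fbeta_score(y_true, y_pred, beta=3))  # F3 ~= 0.27, recall-heavy scoring
```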

This question is not fast.ai-specific; it is about deep learning in general. Thank you.

You are damn right.

Metric (what is displayed) and loss (what is optimized) are two different things.
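
Concretely, a minimal sketch (assuming the fastai v2 API, an existing `dls` DataLoaders, and `F1Score`, which wraps the scikit-learn metric):

```python
from fastai.vision.all import *

# loss_func is what gradient descent minimizes; metrics are only reported.
learn = vision_learner(
    dls, resnet18,
    loss_func=CrossEntropyLossFlat(),  # optimized
    metrics=[accuracy, F1Score()],     # displayed each epoch, never differentiated
)
```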

But you can change them as you wish. After creating the learner …

```python
# ------ Choose wisely the LOSS FUNCTION --------------
# Metric/Loss -> https://docs.fast.ai/metrics.html
# (These metric functions come from fastai v1's fastai.metrics; being
# differentiable, they can double as losses. They are regression examples;
# for binary classification you would pick a BCE-style loss instead.)
from fastai.metrics import mean_squared_logarithmic_error

# learn.loss_func = root_mean_squared_error
learn.loss_func = mean_squared_logarithmic_error
```
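
And to the core of the question: if you want the optimizer to chase F-beta directly, unchanged BCE will not do it for you. One common workaround (not fastai-specific; shown here in plain PyTorch as a sketch) is a differentiable "soft" F-beta surrogate that replaces hard 0/1 decisions with predicted probabilities, with beta as the knob for your FP/FN policy:

```python
import torch

def soft_fbeta_loss(probs, targets, beta=1.0, eps=1e-8):
    """1 minus a 'soft' F-beta: probabilities stand in for hard predictions,
    so the TP/FP/FN counts become differentiable sums.
    probs and targets are tensors of the same shape with values in [0, 1]."""
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    b2 = beta ** 2
    fbeta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1 - fbeta

probs   = torch.tensor([0.9, 0.2, 0.7, 0.1])
targets = torch.tensor([1., 0., 1., 0.])
loss = soft_fbeta_loss(probs, targets, beta=3.0)  # beta>1 penalizes FNs more
```

In practice people often train on BCE (possibly class-weighted) and simply tune the decision threshold on a validation set to maximize the F-beta they care about; a surrogate loss like the one above is worth trying when threshold tuning is not enough.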