to obtain the probabilities from our predictions. I always assumed that the output of TTA() or predict() is on a logarithmic scale. However, now I am wondering where this logarithm is applied in the code. I looked at the predict function in model.py, but I didn't see where the logarithm was applied.

Is the logarithm somehow inherent because we use log loss as the loss function?

What would happen if one overrode the default loss function, e.g. by setting:

I looked at the predict function in model.py, but I didn’t see where the logarithm was applied.

Since you are calling learn.TTA(), you should look at the predict_with_targs function in model.py. This function in turn calls the model to do the predictions.

The model in this context is the arch argument passed to the ConvLearner object. It can be resnet34, resnet50, etc. For example, a resnet34 model gets initialized and created through the ConvnetBuilder class in conv_learner.py.

Back to the predict_with_targs function, this is the line in the function that calls the model and runs the predictions:

for *x,y in iter(dl): res.append([get_prediction(m(*VV(x))),y])

m is the model function, such as resnet34. The input x is passed to the model to run the prediction.
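To make the shape of that loop concrete, here is a minimal, hedged sketch in plain numpy of what it does: iterate over batches from a dataloader, run the model on each input batch, and collect the (prediction, target) pairs. The model and dataloader here are stand-ins I made up for illustration, not the actual fastai objects (VV, get_prediction, and the real dl are not reproduced).

```python
import numpy as np

def log_softmax(z):
    # numerically stable log-softmax along the last axis
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def model(x):
    # stand-in for m(*VV(x)): a linear layer followed by LogSoftmax
    w = np.ones((x.shape[1], 3))  # hypothetical weights for 3 classes
    return log_softmax(x @ w)

# stand-in dataloader: two batches of (inputs, targets)
dl = [(np.random.rand(4, 5), np.array([0, 1, 2, 1])),
      (np.random.rand(4, 5), np.array([2, 0, 1, 0]))]

res = []
for x, y in iter(dl):
    # mirrors: res.append([get_prediction(m(*VV(x))), y])
    res.append([model(x), y])

preds = np.concatenate([p for p, _ in res])
print(preds.shape)  # log-probabilities for every sample across all batches
```

Note that the collected predictions are log-probabilities, which is why the original topic applies np.exp to them afterwards.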

However now I am wondering where this logarithm is in the code.

When we run prediction, the final activation function after the fully connected layer in the model outputs the predictions. That activation function was created by the get_fc_layers and create_fc_layer functions in conv_learner.py.

The activation function depends on the type of classification problem:

LogSoftmax function for multi-class classification. This is where the logarithm is applied.

Sigmoid function for binary classification.
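A small numpy sketch (my own illustration, assuming a generic log-softmax, not the fastai code itself) shows what LogSoftmax produces and why exponentiating recovers probabilities:

```python
import numpy as np

def log_softmax(z):
    # stable log-softmax: subtract the max, then normalize in log space
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

logits = np.array([2.0, 1.0, 0.1])  # raw outputs of the final linear layer
log_probs = log_softmax(logits)     # what a LogSoftmax layer returns
probs = np.exp(log_probs)           # np.exp recovers the actual probabilities

print(log_probs)    # all values <= 0: logarithms of probabilities
print(probs.sum())  # the exponentiated values sum to 1
```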

Is the logarithm somehow inherent because we use log loss as the loss function?

In a sense, yes. For multi-class classification the criterion is a negative log likelihood loss (PyTorch's NLLLoss), which expects log-probabilities as its input. That is why the model's final activation is LogSoftmax rather than plain Softmax; the combination of LogSoftmax and NLLLoss is mathematically equivalent to cross-entropy loss. It is also why you need to apply np.exp to the predictions to recover the actual probabilities.
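The equivalence between LogSoftmax + NLLLoss and cross-entropy can be checked with a small numpy sketch (again my own illustration, not the library code):

```python
import numpy as np

def log_softmax(z):
    # stable log-softmax along the last axis
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def nll_loss(log_probs, targets):
    # negative log likelihood: pick the log-probability of the true class
    return -log_probs[np.arange(len(targets)), targets].mean()

def cross_entropy(logits, targets):
    # cross-entropy computed directly from the raw logits
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return -np.log(probs[np.arange(len(targets)), targets]).mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
targets = np.array([0, 1])

a = nll_loss(log_softmax(logits), targets)  # LogSoftmax output fed to NLLLoss
b = cross_entropy(logits, targets)          # cross-entropy on the raw logits
print(np.isclose(a, b))  # True
```

So overriding the default loss with something that does not expect log-probabilities would break this pairing: the LogSoftmax head would still produce log-scale outputs, but the loss would no longer interpret them correctly.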