Nice article. Bayesian uncertainty seems like it could pair well with pseudo-labeling additional unlabeled training data. You could use it to throw out unconfident pseudo-labels, weight the pseudo-labels by confidence, or adjust how soft the pseudo-labels are based on confidence. I’ll have to run some experiments on my current dataset and see how well Bayesian pseudo-labeling works.
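
To make the first two ideas concrete, here is a minimal sketch of filtering and weighting pseudo-labels by predictive entropy. Everything here is hypothetical (the function names, the `(n_examples, n_passes, n_classes)` tensor shape, and the `1 - entropy / max_entropy` weighting scheme are my own choices, not from the article):

```python
import torch

def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Entropy of the mean class probabilities over MC-dropout passes.

    probs: (n_passes, n_classes) stacked softmax outputs for one input;
    higher entropy = less confident.
    """
    mean_probs = probs.mean(dim=0)
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum()

def filter_pseudo_labels(mc_probs: torch.Tensor, threshold: float = 0.5):
    """Keep pseudo-labels whose predictive entropy is below a threshold.

    mc_probs: (n_examples, n_passes, n_classes) MC-dropout probabilities
    for a batch of unlabeled examples (hypothetical layout).
    Returns (keep mask, hard pseudo-labels, per-example weights).
    """
    entropies = torch.stack([predictive_entropy(p) for p in mc_probs])
    keep = entropies < threshold
    labels = mc_probs.mean(dim=1).argmax(dim=1)
    # One possible weighting: most confident example gets weight near 1,
    # least confident gets weight near 0.
    weights = (1.0 - entropies / entropies.max().clamp_min(1e-12)).clamp(0, 1)
    return keep, labels, weights
```

The soft-label variant would skip the `argmax` and instead train against the mean probabilities directly, sharpened by a temperature tied to the entropy.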

I was reading through your code and saw that you were using `learn.predict_with_mc_dropout` for the Bayesian uncertainty predictions. For example, in `predict_entropy`:

```
def predict_entropy(img, n_times=10):
    # n_times stochastic forward passes with dropout enabled
    pred = learn.predict_with_mc_dropout(img, n_times=n_times)
    # prob[2] holds the class probabilities; add leading dims so the
    # passes can be concatenated along a sample dimension
    probs = [prob[2].view((1, 1) + prob[2].shape) for prob in pred]
    probs = torch.cat(probs)
    e = entropy(probs)
    return e
```

However, I couldn’t find the definition for `predict_with_mc_dropout` in your Colab notebook or GitHub repository.
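
In case it helps other readers in the meantime, here is my guess at what such a helper does, written as a plain PyTorch sketch rather than the actual fastai method: put the model in eval mode (so batch norm uses running statistics) but flip the dropout layers back into train mode, then run several stochastic forward passes. The function name and signature here are my own:

```python
import torch
import torch.nn as nn

def predict_with_mc_dropout(model: nn.Module, x: torch.Tensor, n_times: int = 10):
    """Run n_times stochastic forward passes with dropout kept active.

    This is only a sketch of the missing helper, not the author's code.
    Returns a list of per-pass softmax probability tensors.
    """
    model.eval()  # freeze batch norm running stats...
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()  # ...but keep dropout sampling at inference
    with torch.no_grad():
        return [torch.softmax(model(x), dim=-1) for _ in range(n_times)]
```

Stacking the returned tensors then gives the samples that `entropy` (or mutual information) is computed over.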