[Article] Bayesian Neural Networks experiments using Fastai

#1

Hello there! :slight_smile:

I just wrote an article introducing Bayesian Neural Networks: how they work and how they can be leveraged to get uncertainty estimates, almost for free, using MC Dropout!
I then ran a few experiments in fastai on image, tabular and text data.

This stuff is fun and I recommend having a read, because it opens the way to many things, such as active learning, ethics, parsimony, etc …

Tell me what you think :smiley:

5 Likes

#2

Nice article. Bayesian uncertainty seems like it could pair well with pseudo-labeling additional unlabeled training data. You could use it to throw out unconfident pseudo-labels, weight the pseudo-labels by confidence, or adjust how soft the pseudo-labels are based on confidence. I’ll have to run some experiments on my current dataset and see how well Bayesian pseudo-labels work.
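To make the confidence-weighting idea concrete, here is a minimal sketch in plain PyTorch. All names (`predictive_entropy`, `pseudo_label_weights`) are my own, not from the article; it just maps MC Dropout entropy to a per-item weight in [0, 1]:

```python
import torch

def predictive_entropy(mc_probs):
    """Entropy of the mean prediction over MC Dropout samples.

    mc_probs: tensor of shape (n_samples, n_items, n_classes).
    """
    mean_probs = mc_probs.mean(dim=0)  # average the stochastic passes
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)

def pseudo_label_weights(mc_probs):
    """Map uncertainty to weights in [0, 1]: confident items get ~1."""
    ent = predictive_entropy(mc_probs)
    # the uniform distribution has maximal entropy log(n_classes)
    max_ent = torch.log(torch.tensor(float(mc_probs.shape[-1])))
    return (1.0 - ent / max_ent).clamp(0.0, 1.0)
```

These weights could then multiply the per-item loss on the pseudo-labeled batch, so uncertain pseudo-labels contribute little to the gradient.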

I was reading through your code and saw that you were using learn.predict_with_mc_dropout for the Bayesian uncertainty predictions. For example, in predict_entropy:

def predict_entropy(img, n_times=10):
    # run n_times stochastic forward passes with dropout active
    pred = learn.predict_with_mc_dropout(img, n_times=n_times)
    # prob[2] holds the class probabilities; reshape to (1, 1, n_classes)
    probs = [prob[2].view((1, 1) + prob[2].shape) for prob in pred]
    probs = torch.cat(probs)  # stack into (n_times, 1, n_classes)
    return entropy(probs)  # `entropy` is a helper defined in the notebook

However, I couldn’t find the definition of predict_with_mc_dropout in your Colab notebook or GitHub repository.
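For reference, MC Dropout prediction usually amounts to keeping the dropout layers in train mode at inference time and collecting several stochastic forward passes. A minimal sketch in plain PyTorch (independent of the fastai method, whose exact signature I can’t see) could look like:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_times=10):
    """Run n_times stochastic forward passes with dropout active."""
    model.eval()  # keep batchnorm etc. in eval mode
    # switch only the dropout layers back to train mode
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x).softmax(dim=-1) for _ in range(n_times)])
    return preds  # shape: (n_times, batch_size, n_classes)
```

The spread of the `n_times` samples (e.g. their entropy or variance) is then the uncertainty estimate.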

0 Likes

#3

I totally agree with you! Actually, the article that got me motivated was this one on histopathological data from Nature: https://www.nature.com/articles/s41598-019-50587-1

In this article, they do what you said and study how confidence is an excellent indicator of mislabelled data: misclassified images with low uncertainty were most likely mislabelled.
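That heuristic (confident but wrong → label worth re-checking) is easy to sketch. A hypothetical helper, assuming MC Dropout probabilities of shape `(n_samples, n_items, n_classes)`:

```python
import torch

def likely_mislabelled(mc_probs, labels):
    """Rank misclassified items from most to least confident.

    Confident misclassifications are the best mislabelling candidates,
    per the observation in the Nature paper.
    """
    mean_probs = mc_probs.mean(dim=0)
    preds = mean_probs.argmax(dim=-1)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    wrong = preds != labels
    # sort the misclassified items by ascending entropy (most confident first)
    order = entropy[wrong].argsort()
    return torch.nonzero(wrong).squeeze(-1)[order]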

Yeah, about learn.predict_with_mc_dropout: while working on it I noticed that someone had submitted a PR, which you can find here: PR: Ability to use dropout at prediction time (Monte Carlo Dropout)

I will implement a parallelized version and post the code as well, so don’t worry about that little piece of code ^^

0 Likes