Uncertainty in Deep Learning (Bayesian networks, Gaussian processes)

I’m looking into building some uncertainty estimation into a natural language classification problem I’m working on.

Has anyone had experience doing this?

I’m currently digging into the fastai library to find out where it would be best to implement something like Monte Carlo Dropout at test time (similar to the above comments). A rough sketch of what I have in mind is below.
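
This is just a minimal PyTorch sketch of the Monte Carlo Dropout idea (Gal & Ghahramani, 2016), not fastai-specific API: since a fastai `Learner`'s model is an ordinary `nn.Module` underneath, the plan would be to keep the dropout layers stochastic at inference and average several forward passes. The helper names and the number of samples here are my own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def enable_mc_dropout(model: nn.Module) -> None:
    # Put only the dropout layers back into train mode so they keep
    # sampling masks at test time, while everything else stays in eval.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    # Run several stochastic forward passes and return the mean class
    # probabilities plus their standard deviation as an uncertainty estimate.
    model.eval()              # freeze batch norm, etc.
    enable_mc_dropout(model)  # ...but keep dropout active
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.std(dim=0)
```

The mean would serve as the prediction and the per-class standard deviation as a rough confidence signal; where exactly to hook this into fastai's prediction loop is the part I'm still working out.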
