SHAP interpretability

I have recently been reading about SHAP as another way of providing model interpretability. I wonder if it would be something worth adding to the library?

There is a separate package over here, but I bet the relevant code could be copied and pasted into the fastai codebase. Worst case, we would need to make SHAP an optional dependency.
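For reference, here is a minimal sketch of the optional-dependency pattern I have in mind (the `HAS_SHAP` flag and `check_shap` helper are just hypothetical names for illustration):

```python
# Guarded import so fastai itself would not require shap to be installed;
# HAS_SHAP and check_shap are hypothetical names, not existing fastai API.
try:
    import shap
    HAS_SHAP = True
except ImportError:
    HAS_SHAP = False

def check_shap():
    # Fail with a helpful message only when the SHAP feature is actually used
    if not HAS_SHAP:
        raise ImportError("Please `pip install shap` to use SHAP interpretability.")
```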

I will try playing around with it for PyTorch and fastai…
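In case anyone else wants to experiment, here is a rough sketch of what SHAP's `DeepExplainer` looks like with a plain PyTorch model; the toy model and random tensors are placeholders, not a real fastai workflow:

```python
import torch
import torch.nn as nn
import shap

# Toy PyTorch model standing in for a trained fastai model
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Background samples SHAP uses to estimate the expected model output
background = torch.randn(100, 10)
explainer = shap.DeepExplainer(model, background)

# SHAP values: one attribution per input feature, per output class
test_batch = torch.randn(5, 10)
shap_values = explainer.shap_values(test_batch)
```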


SHAP is very promising, but I'm not sure how it would be integrated into the library. You can use it as a standalone package.