Is it possible to borrow ideas from knowledge distillation and post-hoc explainability techniques to train a shallow student model (XGBoost) from a complex teacher model (a GNN)?
Performance aside, I'm asking because I want to deploy on platforms that are more comfortable with shallow models than with deep ones.
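For concreteness, here is a minimal sketch of the kind of distillation I have in mind, in Python. Everything in it is a placeholder I made up for illustration (`X`, `teacher_logits`, the hyperparameters): the student simply regresses on the teacher's logits, in the spirit of the "matching logits" view of distillation (Hinton et al., 2015).

```python
# Minimal soft-label distillation sketch. Assumptions: `teacher_logits`
# comes from an already-trained GNN, and the per-node feature matrix `X`
# is the student's tabular input (XGBoost can't see the graph structure).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)

# Stand-ins for the real data (hypothetical shapes, binary task).
X = rng.normal(size=(1000, 16))          # per-node tabular features
teacher_logits = rng.normal(size=1000)   # hypothetical GNN output logits

# Student regresses on the teacher's logits rather than the hard labels.
student = xgb.XGBRegressor(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    objective="reg:squarederror",
)
student.fit(X, teacher_logits)

# At inference, map the student's predicted logits back to probabilities.
probs = 1.0 / (1.0 + np.exp(-student.predict(X)))
```

One open question with this setup is how to feed the graph structure to the student at all; I assume it would have to be flattened into tabular features (e.g., aggregated neighbour statistics), which is part of what I'm hoping existing work addresses.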
If anyone has tried this, can you please point me to some relevant work? Thank you.