Gaussian Process Regression With Interpretable Sample-wise Feature Weights

Yuya Yoshikawa and Tomoharu Iwata. “Gaussian Process Regression With Interpretable Sample-wise Feature Weights.” IEEE Transactions on Neural Networks and Learning Systems (2021).

Gaussian process regression (GPR) is a fundamental model used in machine learning (ML). Owing to its accurate predictions with uncertainty estimates and its versatility in handling various data structures via kernels, GPR has been successfully applied in many domains. However, standard GPR offers no way to interpret how the features of an input contribute to its prediction. Here, the authors propose GPR with local explanation, which reveals the feature contributions to the prediction for each sample while maintaining the predictive performance of GPR. In the proposed model, both the prediction and the explanation for each sample are produced by an easy-to-interpret locally linear model, whose weight vector is assumed to be generated from multivariate Gaussian process priors. The hyperparameters of the proposed model are estimated by maximizing the marginal likelihood. For a new test sample, the model predicts both the target value and the weight vector, along with their uncertainties, in closed form. Experimental results on various benchmark datasets verify that the proposed model achieves predictive performance comparable to that of GPR and superior to that of existing interpretable models, while attaining higher interpretability than those models, both quantitatively and qualitatively.
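The generative assumption described above (a locally linear predictor whose per-sample weight vector is drawn from Gaussian process priors) can be illustrated with a small sketch. This is not the authors' implementation; it is a toy NumPy example, assuming an RBF kernel and independent GP priors over each weight dimension, showing how sample-wise weights yield predictions of the form y_n = w(x_n)^T x_n.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between rows of X1 and X2.
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def sample_weight_vectors(X, rng, jitter=1e-8):
    # Draw one GP prior sample per feature dimension, so each sample n
    # gets its own weight vector w(x_n) of length D (hypothetical setup).
    N, D = X.shape
    K = rbf_kernel(X, X) + jitter * np.eye(N)  # jitter for stability
    L = np.linalg.cholesky(K)
    # Each column is an independent GP sample evaluated at the inputs.
    return L @ rng.standard_normal((N, D))     # shape (N, D)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))   # 50 samples, 3 features (toy data)
W = sample_weight_vectors(X, rng)  # sample-wise, interpretable weights
y = (W * X).sum(axis=1)            # y_n = w(x_n)^T x_n for each sample
```

Because nearby inputs share similar kernel values, their sampled weight vectors are similar as well, which is what makes the per-sample weights smooth and interpretable; the paper additionally derives closed-form posterior predictions and uncertainties for both y and w at a new test input.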