Neural generators of sparse local linear models for achieving both accuracy and interpretability

Yuya Yoshikawa and Tomoharu Iwata. “Neural generators of sparse local linear models for achieving both accuracy and interpretability.” Information Fusion (2021). https://doi.org/10.1016/j.inffus.2021.11.009

For reliability, it is important that the predictions made by machine learning methods be interpretable by humans. Deep neural networks (DNNs) generally provide accurate predictions, but it is difficult to interpret why they arrive at them. Linear models, on the other hand, are easy to interpret, but their predictive performance is low because real-world data are often intrinsically non-linear. To combine the high predictive performance of DNNs with the high interpretability of linear models in a single model, we propose neural generators of sparse local linear models (NGSLL). Sparse local linear models are highly flexible because they can approximate non-linear functions. NGSLL generates sparse linear weights for each sample using DNNs that take both the original representation of the sample (e.g., a word sequence) and its simplified representation (e.g., bag-of-words) as input. By extracting features from the original representations, the weights can encode rich information and achieve high predictive performance. In addition, the prediction is interpretable because it is obtained as the inner product between the simplified representation and the sparse weights, where only a small number of weights are selected by the gate module in NGSLL. In experiments on image, text, and tabular datasets, we demonstrate the effectiveness of NGSLL quantitatively and qualitatively by evaluating its predictive performance and visualizing the generated weights.
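The following is a minimal PyTorch sketch of the idea described above, not the authors' implementation. The MLP encoder, the hard top-k gating by weight magnitude, and all dimensions are illustrative assumptions: the paper uses task-specific DNNs (e.g., CNNs for images, RNNs for word sequences) and a dedicated gate module, and here the encoder sees only the original representation for simplicity. The sketch shows the core mechanism: a DNN generates per-sample linear weights, a gate keeps only k of them, and the prediction is the inner product of the gated weights with the simplified representation.

```python
import torch
import torch.nn as nn

class NGSLLSketch(nn.Module):
    """Illustrative sketch of the NGSLL mechanism (not the paper's exact model)."""

    def __init__(self, orig_dim, simp_dim, hidden_dim=128, k=5):
        super().__init__()
        self.k = k
        # Hypothetical encoder; the paper uses modality-specific DNNs.
        self.encoder = nn.Sequential(
            nn.Linear(orig_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Heads that generate a local linear model per sample.
        self.weight_head = nn.Linear(hidden_dim, simp_dim)
        self.bias_head = nn.Linear(hidden_dim, 1)

    def forward(self, x_orig, x_simp):
        h = self.encoder(x_orig)
        w = self.weight_head(h)  # dense sample-wise weights over simplified features
        # Simple stand-in for the gate module: keep the k largest-magnitude
        # weights per sample and zero out the rest.
        topk = torch.topk(w.abs(), self.k, dim=-1).indices
        mask = torch.zeros_like(w).scatter(-1, topk, 1.0)
        w_sparse = w * mask
        # Interpretable prediction: inner product with the simplified representation.
        y = (w_sparse * x_simp).sum(dim=-1, keepdim=True) + self.bias_head(h)
        return y, w_sparse

# Usage with hypothetical dimensions: the returned sparse weights indicate
# which simplified features drove each prediction.
model = NGSLLSketch(orig_dim=784, simp_dim=64, k=5)
x_orig = torch.randn(8, 784)  # e.g., flattened images
x_simp = torch.randn(8, 64)   # e.g., super-pixel or bag-of-words features
y_hat, local_weights = model(x_orig, x_simp)
```

Because the gate masks the weights multiplicatively, gradients still flow to the k selected weights during training, while the zeroed entries keep each per-sample explanation short enough to inspect.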
