Bridging the Faithfulness Gap in Prototypical Models
Link:
https://aclanthology.org/2025.insights-1.9/
Title:
Bridging the Faithfulness Gap in Prototypical Models
Abstract:
Prototypical Network-based Language Models (PNLMs) have been introduced as a novel approach for enhancing interpretability in deep learning models for NLP. In this work, we show that, despite the transparency afforded by their case-based reasoning architecture, current PNLMs are, in fact, not faithful, i.e., their explanations do not accurately reflect the underlying model’s reasoning process. By adopting an axiomatic approach grounded in the seminal works’ definition of faithfulness, we identify two specific points in the architecture of PNLMs where unfaithfulness may occur. To address this, we introduce Faithful Alignment (FA), a two-part framework that ensures the faithfulness of PNLMs’ explanations. We then demonstrate that FA achieves this goal without compromising model performance across a variety of downstream tasks and ablation studies.
Citation:
Koulogeorge, A., Xie, S., Hassanpour, S., and Vosoughi, S., 2025, May. Bridging the Faithfulness Gap in Prototypical Models. In The Sixth Workshop on Insights from Negative Results in NLP (pp. 86–99).