Bridging the Faithfulness Gap in Prototypical Models

Link:

https://aclanthology.org/2025.insights-1.9/

Title:

Bridging the Faithfulness Gap in Prototypical Models

Abstract:

Prototypical Network-based Language Models (PNLMs) have been introduced as a novel approach for enhancing interpretability in deep learning models for NLP. In this work, we show that, despite the transparency afforded by their case-based reasoning architecture, current PNLMs are, in fact, not faithful, i.e., their explanations do not accurately reflect the underlying model’s reasoning process. By adopting an axiomatic approach grounded in the seminal works’ definition of faithfulness, we identify two specific points in the architecture of PNLMs where unfaithfulness may occur. To address this, we introduce Faithful Alignment (FA), a two-part framework that ensures the faithfulness of PNLMs’ explanations. We then demonstrate that FA achieves this goal without compromising model performance across a variety of downstream tasks and ablation studies.
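For context, the sketch below is a minimal PyTorch illustration of the prototype head that this line of work builds on; it is not the paper's implementation, and the class name, dimensions, and negative-squared-distance similarity are illustrative assumptions. It shows why case-based reasoning looks transparent: predictions are scored against learned prototype vectors, and the nearest prototypes double as the explanation.

import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Illustrative prototype-based classifier head (not the authors' code).

    An encoder maps text to an embedding z; this head scores z by its
    similarity to a set of learned prototype vectors, and a linear layer
    turns those similarities into class logits. The nearest prototypes
    serve as the model's case-based explanation.
    """

    def __init__(self, embed_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, embed_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, z: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Similarity as negative squared Euclidean distance to each prototype.
        sims = -torch.cdist(z, self.prototypes).pow(2)  # (batch, num_prototypes)
        logits = self.classifier(sims)                  # (batch, num_classes)
        return logits, sims  # sims double as the "explanation"

# Usage (hypothetical encoder): z = encoder(text_batch), shape (batch, 768)
# logits, sims = PrototypeLayer(768, num_prototypes=20, num_classes=2)(z)

Note that even in this simple head, the explanation (nearest prototypes) and the prediction (logits) come from distinct computations, so the architecture alone does not guarantee they agree; that gap is what the paper's notion of faithfulness targets.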

Citation:

Koulogeorge, A., Xie, S., Hassanpour, S., and Vosoughi, S., 2025, May. Bridging the Faithfulness Gap in Prototypical Models. In Proceedings of the Sixth Workshop on Insights from Negative Results in NLP (pp. 86–99).
