Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models

Link:

https://aclanthology.org/2023.findings-emnlp.261/

Title:

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models

Abstract:

Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability has been a major concern. Current methods for interpreting LLMs are post hoc, applied after inference time, and have limitations such as their focus on low-level features and lack of explainability at higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that allows LLMs to learn immediately interpretable embeddings during the fine-tuning stage while maintaining competitive performance. Our method's applicability and interpretability are demonstrated through experiments on a wide range of NLP tasks, and our results indicate that this approach can pave the way for more interpretable models without the need to sacrifice performance. We release our code at https://github.com/yx131/proto-lm.
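For intuition, the sketch below shows a generic prototypical classification head of the kind the abstract describes: an encoder embedding is compared against learned prototype vectors, and class logits are computed from those similarities so predictions can be traced back to their most similar prototypes. This is a minimal illustration under assumed names (PrototypeHead, num_prototypes), not the authors' released implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    """Illustrative prototype layer: similarities to learned prototypes -> class logits."""

    def __init__(self, hidden_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # Learned prototype vectors living in the encoder's embedding space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        # Linear layer mapping prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes)

    def forward(self, embedding: torch.Tensor):
        # Cosine similarity between each input embedding and each prototype.
        sims = F.cosine_similarity(
            embedding.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1
        )  # shape: (batch, num_prototypes)
        logits = self.classifier(sims)
        # Returning the similarities lets a caller inspect which prototypes
        # contributed most to a given prediction.
        return logits, sims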

Citation:

Xie S, Vosoughi S, Hassanpour S. Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models. In: Findings of the Association for Computational Linguistics: EMNLP 2023; 2023.
