Communication Makes Perfect: Persuasion Dataset Construction via Multi-LLM Communication

Link:

https://aclanthology.org/2025.naacl-long.203/

Title:

Communication Makes Perfect: Persuasion Dataset Construction via Multi-LLM Communication

Abstract:

Large Language Models (LLMs) have shown proficiency in generating persuasive dialogue, yet concerns about the fluency and sophistication of their outputs persist. This paper presents a multi-LLM communication framework designed to enhance the generation of persuasive data automatically. This framework facilitates the efficient production of high-quality, diverse linguistic content with minimal human oversight. Through extensive evaluations, we demonstrate that the generated data excels in naturalness, linguistic diversity, and the strategic use of persuasion, even in complex scenarios involving social taboos. The framework also proves adept at generalizing across novel contexts. Our results highlight the framework’s potential to significantly advance research in both computational and social science domains concerning persuasive communication.
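
The abstract describes the framework only at a high level. As a rough illustration (not the authors' actual pipeline), the sketch below shows how two LLM agents, a persuader and a persuadee, could alternate turns to produce dialogue records with minimal human oversight. The system prompts, the generate_reply stub, and the turn-alternation scheme are illustrative assumptions, not details taken from the paper.

import json
from typing import Dict, List

# Illustrative role instructions; the paper's actual prompts and scenarios
# (including taboo-topic settings) are not reproduced here.
PERSUADER_SYSTEM = (
    "You are trying to persuade the other speaker toward a given position. "
    "Use varied, natural persuasion strategies."
)
PERSUADEE_SYSTEM = (
    "You are a skeptical listener. Respond naturally and push back before "
    "possibly being convinced."
)

def generate_reply(system_prompt: str, history: List[Dict[str, str]]) -> str:
    # Hypothetical stand-in for an actual LLM call (API or local model).
    # A real implementation would format `history` and `system_prompt` into
    # a prompt and decode a response; here we return a placeholder so the
    # loop runs end to end.
    return f"[reply conditioned on {len(history)} prior turns]"

def generate_dialogue(num_turns: int = 6) -> List[Dict[str, str]]:
    # Alternate persuader / persuadee turns and return the full transcript.
    transcript: List[Dict[str, str]] = []
    for turn in range(num_turns):
        speaker = "persuader" if turn % 2 == 0 else "persuadee"
        system = PERSUADER_SYSTEM if speaker == "persuader" else PERSUADEE_SYSTEM
        transcript.append({"speaker": speaker, "text": generate_reply(system, transcript)})
    return transcript

if __name__ == "__main__":
    # Each generated transcript would become one record in a persuasion
    # dialogue dataset.
    print(json.dumps(generate_dialogue(), indent=2))

In practice, the quality controls the paper emphasizes (naturalness, linguistic diversity, strategic persuasion use) would sit on top of a loop like this as additional filtering or evaluation steps.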

Citation:

Ma, W., Zhang, H., Yang, I., Ji, S., Chen, J., Hashemi, F., Mohole, S., Gearey, E., Macy, M., Hassanpour, S. and Vosoughi, S., 2025. Communication is all you need: Persuasion dataset construction via multi-LLM communication. arXiv preprint arXiv:2502.08896.
