Modeling polarization in public opinion through LLM-synthesized arguments and stance trees
Abstract
This article presents a methodology that leverages large language models (LLMs) to construct structured representations in the form of stance trees that support inclusive e-deliberation by organizing collective opinions according to topic and stance. In our approach, LLMs play a central role by synthesizing arguments that capture the reasoning underlying cohesive clusters of opinions, transforming informal and fragmented online discourse into structured and interpretable argumentative forms. Unlike previous work in argument mining, which primarily focuses on identifying and classifying existing argumentative components such as claims and premises, our framework emphasizes argument synthesis as a generative process. We introduce a dataset that links clusters of related opinions with their corresponding LLM-synthesized arguments, annotated by human experts for coherence, relevance, and argumentative quality. The experimental study evaluates the quality of these LLM-synthesized arguments using both human experts and LLMs as judges, and examines the degree of consensus between human and automated assessments. We compare three open-source LLMs under both evaluation approaches. This resource and methodology provide a foundation for advancing research in generative argumentation and for developing deliberative tools that help policymakers and citizens better understand public reasoning and contrasting viewpoints.
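To make the stance-tree representation concrete, the following sketch illustrates one possible data structure and argument-synthesis step, assuming opinions have already been clustered by topic and stance. The class names, prompt wording, and the call_llm stub are hypothetical placeholders for illustration, not the implementation described in this article.

```python
# Illustrative sketch only: a minimal stance-tree node and an argument-synthesis step.
# The LLM call is a stub; all names and prompts are hypothetical, not the paper's code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class StanceNode:
    """A node grouping opinions that share a topic and a stance (e.g., 'pro' or 'con')."""
    topic: str
    stance: str
    opinions: List[str] = field(default_factory=list)
    synthesized_argument: str = ""  # filled in by the LLM
    children: List["StanceNode"] = field(default_factory=list)


def call_llm(prompt: str) -> str:
    """Placeholder for a call to an open-source LLM (e.g., via a local inference server)."""
    return "<synthesized argument>"


def synthesize_argument(node: StanceNode) -> str:
    """Ask the LLM for one coherent argument capturing the cluster's shared reasoning."""
    prompt = (
        f"Topic: {node.topic}\nStance: {node.stance}\n"
        "Opinions:\n" + "\n".join(f"- {o}" for o in node.opinions) +
        "\n\nWrite a single concise argument summarizing the shared reasoning."
    )
    return call_llm(prompt)


# Usage: build a node from a cluster of opinions, then attach the synthesized argument.
node = StanceNode(
    topic="urban bike lanes",
    stance="pro",
    opinions=["Lanes reduce accidents.", "Cycling cuts traffic congestion."],
)
node.synthesized_argument = synthesize_argument(node)
```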