TY - GEN
T1 - Navigating the Political Compass
T2 - 63rd Annual Meeting of the Association for Computational Linguistics, ACL 2025
AU - Helwe, Chadi
AU - Balalau, Oana
AU - Ceolin, Davide
N1 - Publisher Copyright:
© 2025 Association for Computational Linguistics.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - Large Language Models (LLMs) have become ubiquitous in today's technological landscape, boasting a plethora of applications and even endangering human jobs in complex and creative fields. One such field is journalism: LLMs are being used for summarization, generation, and even fact-checking. However, in today's political landscape, LLMs could accentuate tensions if they exhibit political bias. In this work, we evaluate the political bias of the 15 most-used multilingual LLMs via the Political Compass Test. We test different scenarios, where we vary the language of the prompt while also assigning a nationality to the model. We evaluate models on the 50 most populous countries and their official languages. Our results indicate that language has a strong influence on the political ideology displayed by a model. In addition, smaller models tend to display a more stable political ideology, i.e. ideology that is less affected by variations in the prompt.
AB - Large Language Models (LLMs) have become ubiquitous in today's technological landscape, boasting a plethora of applications and even endangering human jobs in complex and creative fields. One such field is journalism: LLMs are being used for summarization, generation, and even fact-checking. However, in today's political landscape, LLMs could accentuate tensions if they exhibit political bias. In this work, we evaluate the political bias of the 15 most-used multilingual LLMs via the Political Compass Test. We test different scenarios, where we vary the language of the prompt while also assigning a nationality to the model. We evaluate models on the 50 most populous countries and their official languages. Our results indicate that language has a strong influence on the political ideology displayed by a model. In addition, smaller models tend to display a more stable political ideology, i.e. ideology that is less affected by variations in the prompt.
UR - https://www.scopus.com/pages/publications/105028637621
U2 - 10.18653/v1/2025.findings-acl.883
DO - 10.18653/v1/2025.findings-acl.883
M3 - Conference contribution
AN - SCOPUS:105028637621
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 17179
EP - 17204
BT - Findings of the Association for Computational Linguistics: ACL 2025
A2 - Che, Wanxiang
A2 - Nabende, Joyce
A2 - Shutova, Ekaterina
A2 - Pilehvar, Mohammad Taher
PB - Association for Computational Linguistics (ACL)
Y2 - 27 July 2025 through 1 August 2025
ER -