As Large Language Models (LLMs) become deeply integrated into human life and increasingly influence decision-making, it is crucial to evaluate whether, and to what extent, they exhibit subjective preferences, opinions, and beliefs.
These tendencies may stem from biases within the models, which can shape their behavior, influence the advice and recommendations they offer to users, and potentially reinforce particular viewpoints.
This paper presents the Preference, Opinion, and Belief survey (POBs), a benchmark developed to assess LLMs' subjective inclinations across societal, cultural, ethical, and personal domains.
We applied our benchmark to evaluate leading open- and closed-source LLMs, measuring desired properties such as reliability, neutrality, and consistency.
In addition, we investigated the effect of increasing test-time compute, through reasoning and self-reflection mechanisms, on these metrics.
Although these mechanisms are effective in other tasks, our results show that they offer only limited gains in our domain.
Furthermore, we reveal that newer model versions are becoming less consistent and more biased toward specific viewpoints, highlighting a blind spot and a concerning trend.
POBs: https://ibm.github.io/POBS
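To make the consistency property concrete, here is a minimal sketch of how one might probe a model with paraphrased survey items and score agreement across wordings. This is not the paper's implementation: the `ask` helper, the sample item, and the answer options are all hypothetical stand-ins for a real LLM client and the actual POBs items.

```python
# Minimal consistency sketch (hypothetical; not the POBs reference code).
from collections import Counter

def ask(model: str, prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; returns one answer option."""
    raise NotImplementedError("plug in a real client here")

def consistency(model: str, item: str, paraphrases: list[str], options: list[str]) -> float:
    """Fraction of paraphrased askings that agree with the modal answer.

    A model whose stated opinion is stable across surface wordings scores
    close to 1.0; one whose answer shifts with phrasing scores lower.
    """
    prompts = [f"{p}\nAnswer with exactly one of: {', '.join(options)}."
               for p in [item, *paraphrases]]
    answers = [ask(model, p) for p in prompts]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)

# Example usage with a real `ask` implementation (item text is invented):
# score = consistency(
#     "my-model",
#     "Should governments regulate social media content?",
#     ["Do you think social media content should be regulated by governments?"],
#     ["Yes", "No", "Neutral"],
# )
```

A high score here captures only one of the abstract's properties (consistency); neutrality and reliability would need separate measures, such as the spread of answers across opposing viewpoints and agreement across repeated samplings.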