Workshop paper

When Data is the Algorithm: A Systematic Study and Curation of Preference Optimization Datasets

Abstract

Aligning large language models (LLMs) is a central objective of post-training, often achieved through reward modeling and reinforcement learning. Among these methods, direct preference optimization (DPO) has emerged as a widely adopted technique that fine-tunes LLMs to favor preferred completions over less favorable ones. While most frontier LLMs do not disclose their curated preference pairs, the broader LLM community has released several open-source DPO datasets, including TuluDPO, UltraFeedback, ORPO, HelpSteer, and Code-Preference-Pairs. However, the construction of these datasets is often poorly documented, lacking valuable metadata, design rationales, and quality annotations. This missing context makes it difficult to understand how preferences were selected, what task types they span, and how well they reflect human judgement at the per-sample level. In this work, we present the first comprehensive, data-centric analysis of open-source DPO corpora. We leverage the Magpie framework to annotate each sample with its task category, input quality, and preference reward, a reward-model-based signal that validates the recorded preference order without relying on human annotations. This enables scalable, fine-grained inspection of preference quality across datasets, revealing structural and qualitative discrepancies in reward margins. Building on these insights, we systematically curate a new DPO mixture, UltraMix, which draws selectively from all five corpora while removing noisy or redundant samples. UltraMix is 30% smaller than the best-performing individual dataset yet exceeds its performance across key benchmarks. We publicly release all annotations, metadata, and our curated mixture to facilitate future research in data-centric preference optimization.
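
The reward-model-based preference check mentioned in the abstract can be illustrated with a minimal sketch, which is not the paper's Magpie-based annotation pipeline: an off-the-shelf scalar reward model scores the chosen and rejected completions for the same prompt, and a non-positive reward margin flags a pair whose recorded preference order the reward model disagrees with. The model name and helper functions below are illustrative assumptions only.

# Illustrative sketch (assumed setup, not the paper's pipeline): use a scalar
# reward model to check whether the "chosen" completion outscores the "rejected" one.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Example reward model; any sequence-classification model with a scalar head works.
MODEL_NAME = "OpenAssistant/reward-model-deberta-v3-large-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def reward(prompt: str, completion: str) -> float:
    # Score one (prompt, completion) pair with the reward model.
    inputs = tokenizer(prompt, completion, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0].item()

def reward_margin(prompt: str, chosen: str, rejected: str) -> float:
    # Positive margin: the reward model agrees with the recorded preference order.
    return reward(prompt, chosen) - reward(prompt, rejected)

if __name__ == "__main__":
    margin = reward_margin(
        "How do I reverse a list in Python?",
        "Use my_list[::-1] or list(reversed(my_list)).",
        "You cannot reverse a list in Python.",
    )
    print(f"reward margin: {margin:+.3f}")  # margin <= 0 flags a potentially noisy pair

Applied across a corpus, such margins give a per-sample quality signal of the kind the paper uses to compare datasets and to filter noisy pairs when building a mixture.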