Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Jie Ren, Zhenwei Dai, et al.
NeurIPS 2025
Large language models (LLMs) are powerful tools capable of handling diverse tasks. However, their evaluation remains challenging due to the vast and often confusing range of available benchmarks. This complexity not only increases the risk of benchmark misuse and misinterpretation but also demands substantial effort from LLM users (researchers, practitioners, and non-AI companies) seeking the most suitable benchmarks for their specific needs. To address these issues, we introduce BenchmarkCards, an intuitive and validated documentation framework that systematically captures critical benchmark attributes such as objectives, methodologies, data sources, and limitations. Through user studies with benchmark creators and users, we show that BenchmarkCards can simplify benchmark selection and enhance transparency, facilitating more informed decision-making when evaluating LLMs.
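The abstract names the attributes a card records (objectives, methodologies, data sources, limitations) but does not give a concrete schema. As a minimal sketch only, assuming each card can be treated as a flat record over those four attribute groups (the field names and example values below are illustrative, not the framework's actual format):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a benchmark documentation record in the spirit of
# BenchmarkCards. Field names are illustrative assumptions, not the paper's schema.
@dataclass
class BenchmarkCard:
    name: str
    objectives: str              # what capability or risk the benchmark targets
    methodology: str             # how model outputs are scored
    data_sources: List[str] = field(default_factory=list)  # provenance of test items
    limitations: List[str] = field(default_factory=list)   # known gaps, biases, misuse risks

# Example card for a hypothetical QA benchmark.
card = BenchmarkCard(
    name="ExampleQA",
    objectives="Measure factual question answering in English.",
    methodology="Exact-match accuracy against reference answers.",
    data_sources=["Wikipedia snapshots", "crowd-sourced questions"],
    limitations=["English only", "no adversarial or multi-hop questions"],
)
print(card.name, card.limitations)
```

A structured record along these lines is what would let a user filter and compare candidate benchmarks by objective or known limitation before committing to an evaluation.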