Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages

Hyangsuk Min*, Yuho Lee*, Minjeong Ban, Jiaqi Deng, Nicole Hee Yeon Kim, Taewon Yun, Jason Cai, Hang Su, Hwanjun Song

*Equal contribution
Overview of MSumBench, featuring multi-domain documents in both English and Chinese, with domain-specific key-facts. Model summaries are evaluated via a multi-agent debate framework, aiding annotators’ assessments. Each summary then receives percentage scores for faithfulness, completeness, and conciseness.
Abstract
Evaluation frameworks for text summarization have evolved in terms of both domain coverage and metrics. However, existing benchmarks still lack domain-specific assessment criteria, remain predominantly English-centric, and face challenges with human annotation due to the complexity of reasoning. To address these limitations, we introduce MSumBench, which provides a multi-dimensional, multi-domain evaluation of summarization in English and Chinese. It also incorporates specialized assessment criteria for each domain and leverages a multi-agent debate system to enhance annotation quality. By evaluating eight modern summarization models, we discover distinct performance patterns across domains and languages. We further examine large language models as summary evaluators, analyzing the correlation between their evaluation and summarization capabilities, and uncovering systematic bias in their assessment of self-generated summaries. Our benchmark dataset is publicly available at https://github.com/DISL-Lab/MSumBench.
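To make the percentage scores above concrete, here is a minimal sketch (not the paper's released implementation) of how such scores can be computed once binary judgments are available, e.g., from the multi-agent debate stage. The function names and binary-label inputs are assumptions for illustration only.

```python
def faithfulness(sentence_is_faithful: list[bool]) -> float:
    """Assumed definition: percentage of summary sentences judged
    factually consistent with the source document."""
    return 100.0 * sum(sentence_is_faithful) / len(sentence_is_faithful)

def completeness(keyfact_is_covered: list[bool]) -> float:
    """Assumed definition: percentage of domain-specific key-facts
    that are covered by the summary."""
    return 100.0 * sum(keyfact_is_covered) / len(keyfact_is_covered)

def conciseness(sentence_carries_keyfact: list[bool]) -> float:
    """Assumed definition: percentage of summary sentences that
    convey at least one key-fact (i.e., are not filler)."""
    return 100.0 * sum(sentence_carries_keyfact) / len(sentence_carries_keyfact)

# Hypothetical example: a 4-sentence summary checked against 5 key-facts.
print(faithfulness([True, True, True, False]))         # 75.0
print(completeness([True, True, False, True, False]))  # 60.0
print(conciseness([True, True, False, True]))          # 75.0
```

Under these assumptions, each dimension reduces to the fraction of per-sentence or per-key-fact judgments that pass; the paper's actual scoring may differ in how those judgments are elicited and aggregated.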
Publication
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Authors
Hyangsuk Min (she/her)
PhD Student
Hyangsuk Min is a PhD Student at KAIST. She is passionate about building human-aligned and trustworthy long-context summarization and memory systems for large language models.