XATU: A Fine-grained Instruction-based Benchmark for Explainable Text Updates

Have you ever asked a large language model (LLM) to edit a piece of text and been surprised by the result? Perhaps you wanted to improve the clarity of a passage, only to find that its meaning had changed. Text editing involves fine-tuning various aspects of writing, such as grammar, tone, and clarity, to better convey the intended message. However, the benchmarks traditionally used to evaluate text editing approaches often provide only broad instructions, resulting in outputs that, while technically correct, may not fully match the desired changes. This challenge is rooted in the inherent ambiguity of natural language instructions, which leaves LLMs guessing at the user's intention. To address these issues and push the boundaries of text editing with LLMs, we introduce XATU, a new text editing benchmark that incorporates fine-grained instructions and gold-standard edit explanations for explainable text updates.

Introducing XATU

Text editing is an intricate task involving adjustments that span lexical, syntactic, semantic, and knowledge dimensions to align text more closely with specific user intents. Traditional benchmarks often fall short by offering only broad, coarse-grained instructions, leading to outputs that, while technically correct, might miss the mark concerning the intended changes. This is where XATU comes into play.

XATU, short for eXplAinable Text Updates, is the first benchmark of its kind, designed from the ground up to address the nuances of text editing. It provides fine-grained instructions for each editing task along with explanations for the edits, making the whole process more transparent and interpretable.


Figure 1: From coarse-grained to fine-grained edits with XATU

Building XATU: LLM-in-the-loop annotation

The creation of XATU involved a meticulous process that combined the strengths of LLM-based annotation and human-in-the-loop approaches. Starting with selecting high-quality data sources across various NLP tasks, we enriched these data points with fine-grained editing instructions and explanations. The annotation process involved a novel approach, using LLMs to generate candidate annotations, which human annotators then reviewed and refined. This hybrid method ensured the high quality and relevance of the instructions and explanations included in XATU.
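This draft-then-review loop can be sketched roughly as follows. Note that `llm_draft_annotation` is a placeholder standing in for a real LLM call, and `human_review` stands in for the human annotation step; both are illustrative assumptions, not the actual annotation tooling:

```python
def llm_draft_annotation(source: str, target: str) -> dict:
    """Placeholder for an LLM call that drafts a fine-grained
    instruction and explanation from a (source, target) edit pair."""
    return {
        "instruction": f"Revise the text: '{source}' -> '{target}'.",
        "explanation": "Draft explanation, pending review.",
    }

def human_review(draft: dict) -> dict:
    """Placeholder for the human-in-the-loop step: annotators accept,
    refine, or rewrite the LLM's draft before it enters the benchmark."""
    reviewed = dict(draft)
    reviewed["reviewed"] = True  # mark as human-verified
    return reviewed

def annotate(pairs):
    """LLM-in-the-loop pipeline: draft with an LLM, then verify by hand."""
    return [human_review(llm_draft_annotation(s, t)) for s, t in pairs]
```

The point of the structure is that the LLM only proposes; nothing reaches the benchmark without passing through the human review step.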

What Makes XATU Different

  • Fine-Grained Instructions: With XATU, we move beyond the one-size-fits-all approach of coarse instructions. The fine-grained instructions in XATU offer detailed guidance on achieving specific editing goals, significantly improving the alignment between user intents and model outputs.
  • Explainable Edits: XATU’s gold-standard edit explanations shed light on the rationale behind each edit. This enhances the interpretability of the edits and provides valuable insights that can help further refine text editing models.
  • A Comprehensive Benchmark: Covering a wide range of editing tasks, XATU serves as a comprehensive benchmark that challenges and evaluates the capabilities of LLMs across different aspects of text editing, from simple grammar checks to complex, knowledge-intensive updates.
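To make the distinction concrete, a XATU-style example might pair a source text with both a coarse and a fine-grained instruction, a gold edit, and an explanation. The field names below are illustrative assumptions for this sketch, not the benchmark's actual schema:

```python
# A hypothetical XATU-style record; field names are illustrative only.
example = {
    "source": "The results was significant across all experiments.",
    "coarse_instruction": "Fix the grammar.",
    "fine_grained_instruction": (
        "Change the verb 'was' to 'were' so it agrees with the "
        "plural subject 'results'."
    ),
    "target": "The results were significant across all experiments.",
    "explanation": (
        "'Results' is plural, so the verb must take the plural form "
        "'were' for subject-verb agreement."
    ),
}

def build_prompt(record: dict, fine_grained: bool = True) -> str:
    """Assemble an editing prompt, optionally using the fine-grained instruction."""
    key = "fine_grained_instruction" if fine_grained else "coarse_instruction"
    return f"Instruction: {record[key]}\nText: {record['source']}\nEdited:"
```

Swapping `fine_grained` between `True` and `False` is essentially the comparison the benchmark enables: the same source text, with and without detailed guidance.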

Experiments and Insights

To assess the effectiveness of XATU, we conducted extensive experiments with various state-of-the-art LLMs, evaluating their performance both in zero-shot settings and after instruction tuning. The results were illuminating:

  • Superior Performance with Fine-Grained Instructions: LLMs guided by XATU’s fine-grained instructions consistently outperformed those using generic instructions, underlining the importance of detailed guidance in achieving precise editing outcomes.
  • The Power of Explanations: Incorporating explanations from XATU during model fine-tuning led to significant improvements, highlighting the value of understanding the “why” behind edits.
  • More Architectural Insights: Our experiments also revealed how different model architectures respond to various types of text editing tasks, offering valuable pointers for future research and model development.
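Analyses like these require localizing the individual edits between an input and a model's output. One simple way to do that (a generic technique, not necessarily what the paper uses) is word-level opcode extraction with Python's standard `difflib`:

```python
import difflib

def extract_edits(source: str, target: str):
    """Return word-level (operation, before, after) tuples describing
    how `source` was changed into `target`."""
    src, tgt = source.split(), target.split()
    matcher = difflib.SequenceMatcher(a=src, b=tgt)
    edits = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # keep only replace / insert / delete spans
            edits.append((op, " ".join(src[i1:i2]), " ".join(tgt[j1:j2])))
    return edits

edits = extract_edits(
    "The results was significant across all experiments .",
    "The results were significant across all experiments .",
)
# edits == [("replace", "was", "were")]
```

Each extracted span can then be paired with a gold edit or explanation, which is what makes fine-grained, edit-level evaluation possible in the first place.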

Towards More Intuitive Text Editing

XATU represents a significant leap forward in making text editing with LLMs more intuitive, precise, and explainable. By providing fine-grained instructions and shedding light on the reasons behind edits, XATU paves the way for a new generation of text editing tools that are better aligned with user intents and more interpretable.

The release of the XATU benchmark is just the beginning. We invite researchers and practitioners to explore XATU, experiment with its datasets, and contribute to the ongoing evolution of text editing technologies. With the community’s collective efforts, we envision a future where text editing with LLMs becomes even more powerful, intuitive, and transparent.

Are you ready to explore XATU’s potential further? Check out our GitHub repo and join us in shaping the future of text editing! 

This research was accepted at LREC-COLING 2024 and will be presented by co-author Haopeng Zhang. Read the paper here.

Written by Hayate Iso and Megagon Labs

