The rapid growth of textual information demands more efficient methods to
sift through, organize, and understand it. While retrieval-augmented generation
(RAG) models excel at retrieving information from large document collections,
they struggle with complex tasks that require aggregating and reasoning over information spanning multiple documents, which we call holistic reasoning.
Long-context language models (LCLMs) have great potential for managing large-scale documents, but their holistic reasoning capabilities remain unclear. In this
work, we introduce HoloBench, a novel framework that brings database reasoning operations into text-based contexts, making it easier to systematically evaluate
how LCLMs handle holistic reasoning across large documents. Our approach
varies key factors such as context length, information density, the distribution of information, and query complexity, enabling a comprehensive evaluation of LCLMs.
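For illustration only, the toy sketch below shows how these axes might parameterize a benchmark instance. Every name and the interface are hypothetical, not HoloBench's actual API, and only the distribution axis is wired up here.

```python
# A minimal sketch of the kind of controls described above; all names are
# hypothetical and do not reflect the framework's real interface.
from dataclasses import dataclass
import random


@dataclass
class HolisticEvalConfig:
    context_length: int   # total size of the verbalized context (illustrative only)
    info_density: float   # fraction of the context relevant to the query (illustrative only)
    distribution: str     # placement of relevant facts: "uniform", "front", "middle", "back"
    query_type: str       # e.g., "max_min", "aggregate", "join", "filter"


def build_context(relevant: list[str], distractors: list[str],
                  cfg: HolisticEvalConfig) -> str:
    """Interleave relevant facts among distractors according to cfg.distribution."""
    if cfg.distribution == "uniform":
        facts = relevant + distractors
        random.shuffle(facts)
    elif cfg.distribution == "front":
        facts = relevant + distractors
    elif cfg.distribution == "back":
        facts = distractors + relevant
    else:  # "middle": sandwich relevant facts between two distractor halves
        half = len(distractors) // 2
        facts = distractors[:half] + relevant + distractors[half:]
    return "\n".join(facts)


# Example: an aggregation-style query over verbalized table rows.
relevant = [f"Order {i} totals {i * 10} dollars." for i in range(1, 6)]
distractors = [f"Employee {i} works in sales." for i in range(1, 16)]
cfg = HolisticEvalConfig(context_length=2048, info_density=0.25,
                         distribution="middle", query_type="aggregate")
print(build_context(relevant, distractors, cfg))
# The model would then be asked, e.g., "What is the total value of all orders?"
```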
Our experiments show that the amount of information in the context has a greater influence on LCLM performance than the actual context length. Furthermore,
query complexity affects performance more than the amount of information does, and its effect differs across query types. Interestingly, queries that involve finding maximum or minimum values are easier for LCLMs and are less affected by context length, even though they pose challenges for RAG systems. However, tasks requiring the aggregation of multiple pieces of information show a noticeable drop in accuracy as context length increases. Additionally, we find that while grouping relevant information together generally improves performance, the optimal position of that information within the context varies across models. Our findings highlight both the advances made and the challenges that remain in achieving a holistic understanding of long contexts. They can guide future development of LCLMs and lay the groundwork for building more robust language models for real-world applications.