Paradigms of AI Evaluation: Mapping Goals, Methodologies and Culture

1 Leverhulme Centre for the Future of Intelligence, University of Cambridge
2 VRAIN, Universitat Politècnica de València
*Indicates Equal Contribution

We survey 125+ representative studies and identify six main paradigms of AI evaluation, each defined by specific objectives, methodologies, and assumptions. To operationalise this framework, we annotate these studies along key dimensions, delineating the unique questions and approaches each paradigm tackles. This gives researchers an appreciation of the breadth of AI evaluation, allowing them to build bridges between insular research trajectories and identify gaps in the field.

Abstract

Research in AI evaluation has grown increasingly complex and multidisciplinary, attracting researchers with diverse backgrounds and objectives. As a result, divergent evaluation paradigms have emerged, often developing in isolation, adopting conflicting terminologies, and overlooking each other's contributions. This fragmentation has led to insular research trajectories and communication barriers both among different paradigms and with the general public, contributing to unmet expectations for deployed AI systems. To help bridge this insularity, in this paper we survey recent work in the AI evaluation landscape and identify six main paradigms. We characterise major recent contributions within each paradigm across key dimensions related to their goals, methodologies and research cultures. By clarifying the unique combination of questions and approaches associated with each paradigm, we aim to increase awareness of the breadth of current evaluation approaches and foster cross-pollination between different paradigms. We also identify potential gaps in the field to inspire future research directions.

Annotated paper collection

Feature visualization

The graph shows a UMAP projection of the Jaccard distance matrix computed over the dimensions of our framework. Each point represents a surveyed paper, coloured by the paradigm(s) it belongs to. Papers belonging to the same paradigm form visible clusters in the projection.
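The projection can be sketched as follows. This is a minimal illustration, assuming each paper is annotated with binary indicators over the framework's dimensions; the toy annotation matrix below is hypothetical, not the paper's actual data, and the UMAP step (via the `umap-learn` package) is shown only as a comment.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy binary annotation matrix: rows = papers, columns = framework dimensions.
# (Hypothetical values -- the real matrix comes from the annotated paper collection.)
annotations = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 0, 1, 1],
], dtype=bool)

# Pairwise Jaccard distances between the papers' annotation vectors:
# (number of dimensions where they differ) / (number where either is set).
dist_matrix = squareform(pdist(annotations, metric="jaccard"))

# The precomputed distance matrix could then be embedded in 2D, e.g.:
#   import umap
#   embedding = umap.UMAP(metric="precomputed").fit_transform(dist_matrix)
print(dist_matrix.round(2))
```

Using a precomputed distance matrix (rather than raw features) lets UMAP work directly with set-overlap similarity between papers, which suits binary annotations better than Euclidean distance.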

BibTeX

@misc{burden2025paradigmsaievaluationmapping,
      title={Paradigms of {AI} Evaluation: Mapping Goals, Methodologies and Culture},
      author={John Burden and Marko Tešić and Lorenzo Pacchiardi and José Hernández-Orallo},
      year={2025},
      eprint={2502.15620},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2502.15620},
}