
The Scientometrics of AI Benchmarks: Unveiling the Underlying Mechanics of AI Research

Workshop Paper by Pablo Barredo, José Hernández-Orallo, Fernando Martínez-Plumed, Seán Ó hÉigeartaigh

Presented at the 1st International Workshop on Evaluating Progress in Artificial Intelligence (EPAI 2020) @ ECAI 2020, Santiago de Compostela, Spain, September 4, 2020.

Abstract: The widespread use of experimental benchmarks in AI research has created new competition and collaboration dynamics that are still poorly understood. In this paper we provide an innovative methodology to explore these dynamics and analyse how different entrants in these competitions, from academia to tech giants, behave and react depending on their own or others’ achievements. We analyse over twenty popular AI benchmarks, linking them to their underlying research papers. We identify links between researchers and institutions (i.e., communities) beyond the standard co-authorship relations, and we explore a series of hypotheses about their behaviour, as well as some aggregated results in terms of activity, breakthroughs and efficiency. As a result, we detect and characterise the dynamics of research communities at different levels of abstraction, including organisation, affiliation, trajectories, results and activity.
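The idea of grouping researchers into communities through shared benchmark participation, rather than co-authorship alone, can be illustrated with a minimal union-find sketch. This is not the paper's actual methodology; the researcher names, benchmark assignments, and the `communities_from_entries` function are all made up for illustration.

```python
from collections import defaultdict

def communities_from_entries(entries):
    """Group researchers into communities: two researchers belong to the
    same community if they can be connected through shared benchmarks.
    Uses union-find with path compression."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for researchers in entries.values():
        for r in researchers:
            find(r)  # register every researcher, even solo entrants
        for r in researchers[1:]:
            union(researchers[0], r)  # link all entrants of one benchmark

    groups = defaultdict(set)
    for r in parent:
        groups[find(r)].add(r)
    return list(groups.values())

# Illustrative toy data (hypothetical, not taken from the paper).
toy = {
    "ImageNet": ["alice", "bob"],
    "SQuAD": ["bob", "carol"],
    "Atari": ["dave"],
}
print(communities_from_entries(toy))
```

Here "alice", "bob" and "carol" end up in one community because "bob" bridges two benchmarks, while "dave" remains alone; a fuller analysis along the paper's lines would weight such links by activity and results rather than treat them as binary.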

Download Workshop Paper