
Futures and foresight

AI is progressing rapidly, both in fundamental research and in application to scientific and societal challenges. Thresholds are being passed, for example in the accuracy of voice recognition and machine translation, moving these technologies from novelties to wide-scale everyday use, with broad economic ramifications. Future possibilities may become pressing challenges quite quickly. Furthermore, norms and governance established now may have long-lasting influence on the development of the field. There is a need for work that better characterises trajectories for the development and application of AI, as well as for analysis of these future possibilities that incorporates perspectives from across science and society.
Research within this theme focuses on:
●      Technology forecasting: Exploring the implications of different avenues of AI progress.
●      Trends analysis: Identifying and exploring broader technological and social trends likely to influence AI impacts and risks.
●      Impact assessment and participatory foresight: On-the-ground engagement with affected communities to understand actual and anticipated impacts.
●      Methodological innovation: Developing foresight and scenario analysis methodologies to improve our ability to do research on the above.
●      Conceptual analysis: Clarifying conceptual frameworks to improve our ability to think clearly and communicate about impacts and risks.
Projects include:
●      Intelligence Rising: A scenario tool to explore the societal and geopolitical impacts of AI, focusing on a range of technological developments and the interplay of different actors (technology, governance, civil society).
●      Mid-term Impacts of AI: Exploring the possible impacts on society of advances in AI falling short of human-level intelligence.
Recent papers include:
●      Cremer, C.Z., and Whittlestone, J. (2021). Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI. International Journal of Interactive Multimedia and Artificial Intelligence, 6 (Special Issue on Artificial Intelligence, Paving the Way to the Future), 100-109.
●      Whittlestone, J., Arulkumaran, K., and Crosby, M. (Forthcoming, 2021). The Societal Impacts of Deep Reinforcement Learning. Journal of Artificial Intelligence Research (JAIR).
●      Barredo, P., Hernández-Orallo, J., Martínez-Plumed, F., & Ó hÉigeartaigh, S.S. (2020). The Scientometrics of AI Benchmarks: Unveiling the Underlying Mechanics of AI Research. Evaluating Progress in AI Workshop, ECAI 2020.
●      Rich, A.S., Rudin, C., Jacoby, D.M.P., Ó hÉigeartaigh, S.S., Cave, S., Dihal, K., et al. (2020). AI reflections in 2019. Nature Machine Intelligence, 2, 2–9.
●      Avin, S., Gruetzemacher, R., & Fox, J. (2020). Exploring AI Futures Through Role Play. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 8-14).
●      Prunkl, C., & Whittlestone, J. (2020). Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 138-143).
●      Avin, S. (2019). Exploring artificial intelligence futures. Journal of AI Humanities.
●      Martínez-Plumed, F., Avin, S., Brundage, M., Dafoe, A., Ó hÉigeartaigh, S., & Hernández-Orallo, J. (2018). Accounting for the neglected dimensions of AI progress. arXiv preprint arXiv:1806.00610.

Seán Ó hÉigeartaigh, Jess Whittlestone, Shahar Avin (CSER), Alexa Hagerty, José Hernández-Orallo, Asaf Tzachor (CSER).
