
Safety, security and risk

AI poses or intersects with a range of safety and security challenges, some of which are immediate and some of which may only manifest in the future as more powerful systems are developed and deployed across a wider range of societal settings. Many of the most transformative impacts of AI, in terms of both potential benefits and risks, remain decades away. However, there is work to be done now on these future challenges: exploring safety challenges in fundamental AI research, and understanding the risks associated with future AI development scenarios. Furthermore, safety norms and practices put in place now leave us better prepared for the challenges of more capable future systems. AI:FAR explores near-term risks associated with the role of AI in synthetic media, manipulation and information security; defence and military use; and critical processes such as agriculture. It also explores longer-term challenges associated with potential future developments in AI.

This strand includes the Future of Life Institute-funded CSER project Paradigms of AGI and their Associated Risks, which explores safety challenges that may emerge for AI systems of increasing generality and capability.

Recent papers include:
●      Seger, E., Avin, S., Pearson, G., Briers, M., Ó hÉigeartaigh, S., & Bacon, H. (2020). Tackling threats to informed decision-making in democratic societies. Alan Turing Institute.
●      Hernández-Orallo, J., Martínez-Plumed, F., Avin, S., Whittlestone, J., & Ó hÉigeartaigh, S. S. (2020). AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues. ECAI 2020.
●      Shackelford, G. E., Kemp, L., Rhodes, C., Sundaram, L., Ó hÉigeartaigh, S. S., Beard, S., ... & Jones, E. M. (2020). Accumulating evidence using crowdsourcing and machine learning: A living bibliography about existential risk and global catastrophic risk. Futures, 116, 102508.
●      Whittlestone, J., & Ovadya, A. (2019). The tension between openness and prudence in responsible AI research. NeurIPS 2019 Joint Workshop on AI for Social Good.
●      Ovadya, A., & Whittlestone, J. (2019). Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning. arXiv preprint arXiv:1907.11274.
●      Hernández-Orallo, J., Martínez-Plumed, F., Avin, S. and Ó hÉigeartaigh, S.S. (2019). Surveying Safety-relevant AI Characteristics. AAAI 2019.
●      Avin, S. & Amadae, S.M. (2019). Autonomy and machine learning at the interface of nuclear weapons, computers and people. In The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, SIPRI, (pp. 105-118).
●      Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Anderson, H. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228.
 
Policy submissions and reports:
●      Avin, S., Sundaram, L., Whittlestone, J., Maas, M.M., & Hobson, T. (2021). Submission of Evidence to the House of Lords Select Committee on Risk Assessment and Risk Planning.
●      Belfield, H., Hernández-Orallo, J., Ó hÉigeartaigh, S., Maas, M., Hagerty, A., & Whittlestone, J. (2020). Response to the European Commission’s Whitepaper on AI.
●      Belfield, H., Jayanti, A., and Avin, S. (2020). Defence Industrial Policy: Procurement and prosperity. Written Evidence to the UK Parliament Defence Committee's Inquiry on Defence Industrial Policy.
●      Whittlestone, J., Vold, K., & Alexandrova, A. (2019). The potential harms of online targeting. Evidence submitted to the UK Centre for Data Ethics and Innovation.
●      Nyrup, R., Whittlestone, J., & Cave, S. (2019). Why Value Judgements Should Not Be Automated. Evidence submitted to the Committee on Standards in Public Life.
●      Kemp, L., Cihon, P., Maas, M., Belfield, H., Ó hÉigeartaigh, S., Leung, J., & Cremer, C.Z. (2019). UN High-level Panel on Digital Cooperation: A Proposal for International AI Governance.
Chosen as one of six (out of 108) submissions to be presented at a UN town hall meeting on AI governance; AI:FAR’s Haydn Belfield presented.
●      Belfield, H., & Avin, S. (2019). Response to the European Commission’s High-Level Expert Group on Artificial Intelligence request for evidence.
Provided recommendations, drawing on the Malicious Use of AI report and related research, on technical and governance measures for trustworthy and robust AI.
●      Ó hÉigeartaigh, S., & Belfield, H., with colleagues from collaborating institutions (2019). Response to the US National Institute of Standards and Technology request for information on standards for artificial intelligence.

Researchers:
Shahar Avin (CSER), Seán Ó hÉigeartaigh, John Burden (CSER), José Hernández-Orallo, Haydn Belfield, Jess Whittlestone
