Research has shown that media representations of AI can perpetuate negative stereotypes about the creators and users of new technologies and divert attention from the harms inherent in real-life applications of AI. The Leverhulme Centre for the Future of Intelligence has just released a new ‘AI and Responsible Journalism’ toolkit to empower journalists to communicate the risks and benefits of AI more responsibly: to avoid perpetuating problematic AI narratives, to foster inclusivity and diversity in discussions about AI, and to promote critical AI literacy.
The resource is the culmination of insights from a collaborative research workshop held in 2023, convened by Dr Tomasz Hollanek, Dr Eleanor Drage, and Dr Dorian Peters at the University of Cambridge. While lists of principles and recommendations for AI-focused journalism already exist, the workshop participants highlighted the need for a user-friendly ‘hub’ that gathers tools and resources to help communicators gain a deeper understanding of AI's technical and socio-technical dimensions, connects them with a diverse array of AI and AI ethics experts, and provides them with information about support networks and funding opportunities.
This Toolkit is designed to meet that need comprehensively. It is a dynamic, evolving repository that will receive continuous updates, guided by input from the community it seeks to empower. Check out the Toolkit, share it with your colleagues, and get in touch with any feedback or suggestions, or if you’d like to get involved in our future journalism-focused projects.