The tension between openness and prudence in responsible AI research

Conference Paper by Jess Whittlestone, Aviv Ovadya

The tension between openness and prudence in responsible AI research. AI for Social Good workshop at NeurIPS (2019), Vancouver, Canada. https://arxiv.org/abs/1910.01170v2

This paper explores the tension between openness and prudence in AI research, evident in two core principles of the Montréal Declaration for Responsible AI. While the AI community has strong norms around the open sharing of research, concerns about the potential harms arising from misuse of research are growing, prompting some to ask whether the field of AI needs to reconsider its publication norms. We discuss how different beliefs and values can lead to differing perspectives on how the AI community should manage this tension, and explore the implications for what responsible publication norms in AI research might look like in practice.