POSTPONED: Issues in Explainable AI 2: Understanding and Explaining in Healthcare

23 March 2020 - 25 March 2020

Due to the Coronavirus pandemic this event has been postponed. Rescheduled dates will be published in due course.

The growing use of artificial intelligence in healthcare – e.g. for medical diagnosis, health monitoring, and treatment recommendation – prompts a series of ethical questions about the appropriate regulation and application of these technologies. Some of the most important challenges arise from the use of sophisticated ‘black box’ algorithms with limited interpretability: for instance, what are clinicians morally required to disclose or explain to patients about algorithmic treatment recommendations and diagnoses by such systems? And what kind of understanding of AI systems do different stakeholders need (e.g. physicians, designers or policymakers) to integrate them responsibly into healthcare practice?

This workshop brings together leading researchers from philosophy, law, computer science and healthcare to discuss the current status of AI in healthcare and what kinds of transparency or explanation (if any) these systems require.

The workshop is part of Rune Nyrup’s project Understanding Medical Black Boxes, funded by the Wellcome Trust. It is the second instalment of a workshop series organised in collaboration with research projects on issues in explainable AI at Saarland University, the Technical University of Dortmund and Delft University of Technology.

Monday 23 March
13.00-13.20: Arrival and registration 
13.20-13.30: Welcome 
13.30-14.40: The promises of XAI: Understanding, Explanations, and Discovery: Lena Kästner and Timo Speith (Saarland University) 
14.40-15.50: What Difference Does It Make? “Black Box” Medicine and the Law: Jeff Skopek and Jennifer Anderson (University of Cambridge) 
15.50-16.20: Coffee 
16.20-17.30: Professional obligations and the ethics of explainability: Nancy Walton (Ryerson University)

Tuesday 24 March
09.30-10.00: Coffee
10.00-11.10: Objectually Understanding Informed Consent: Daniel Wilkenfeld (University of Pittsburgh School of Nursing) 
11.10-12.20: Hypervisable or invisible? Frontiers and challenges of AI and older adults: Charlene Chu (University of Toronto) 
12.20-13.30: Lunch 
13.30-14.40: TBC: Alastair Denniston (University of Birmingham) 
14.40-15.50: Explainable AI and the Epistemic Value of Expert Explanations: Elizabeth Seger (University of Cambridge) 
15.50-16.20: Coffee 
16.20-17.30: Personalised Explanations for Algorithmic Diagnoses: Pragmatic Feature Learning in Healthcare: David Watson (Oxford Internet Institute)

19.30: Dinner for Speakers (by invitation only). Old Library, Sidney Sussex College

Wednesday 25 March 
09.30-10.00: Coffee 
10.00-11.10: Who is afraid of the black-box? Explanation, transparency, and computational reliabilism: Juan M. Durán (Delft University of Technology) 
11.10-12.20: Opportunities and Challenges for AI in Healthcare: From Translational Discovery Science to Personalised Medicine: Kourosh Saeb-Parsy (University of Cambridge) 
12.20-13.30: Lunch 
13.30-14.40: How should clinicians use machine learning models? Abhishek Mishra (Uehiro Centre for Practical Ethics, University of Oxford) 
14.40-15.50: Trusting Humans and Trusting Machines: Kevin Baum, Markus Langer and Sarah Sterz (Saarland University) 
15.50-16.20: Coffee 
16.20-17.30: Discussion 

This event is generously sponsored by the Society for Applied Philosophy and the Wellcome Trust [213660/Z/18/Z]. We gratefully acknowledge their support.