Due to the Coronavirus pandemic this event has been postponed. Rescheduled dates will be published in due course.
The growing use of artificial intelligence in healthcare – e.g. for medical diagnosis, health monitoring, and treatment recommendation – prompts a series of ethical questions about the appropriate regulation and application of these technologies. Some of the most important challenges arise from the use of sophisticated ‘black box’ algorithms with limited interpretability: for instance, what are clinicians morally required to disclose or explain to patients about algorithmic treatment recommendations and diagnoses by such systems? And what kind of understanding of AI systems do different stakeholders need (e.g. physicians, designers or policymakers) to integrate them responsibly into healthcare practice?
This workshop brings together leading researchers from philosophy, law, computer science and healthcare to discuss the current status of AI in healthcare and what kinds of transparency or explanation (if any) these systems require.
The workshop is part of Rune Nyrup’s project Understanding Medical Black Boxes, funded by the Wellcome Trust. It is the second instalment of a workshop series organised in collaboration with research projects on explainable AI at Saarland University, the Technical University of Dortmund and Delft University of Technology.
Monday 23 March
13.00-13.20: Arrival and registration
13.30-14.40: The Promises of XAI: Understanding, Explanations, and Discovery: Lena Kästner and Timo Speith (Saarland University)
14.40-15.50: What Difference Does It Make? “Black Box” Medicine and the Law: Jeff Skopek and Jennifer Anderson (University of Cambridge)
16.20-17.30: Professional obligations and the ethics of explainability: Nancy Walton (Ryerson University)
Tuesday 24 March
10.00-11.10: Objectually Understanding Informed Consent: Daniel Wilkenfeld (University of Pittsburgh School of Nursing)
11.10-12.20: Hypervisible or Invisible? Frontiers and Challenges of AI and Older Adults: Charlene Chu (University of Toronto)
13.30-14.40: TBC: Alastair Denniston (University of Birmingham)
14.40-15.50: Explainable AI and the Epistemic Value of Expert Explanations: Elizabeth Seger (University of Cambridge)
16.20-17.30: Personalised Explanations for Algorithmic Diagnoses: Pragmatic Feature Learning in Healthcare: David Watson (Oxford Internet Institute)
19.30: Dinner for Speakers (by invitation only). Old Library, Sidney Sussex College
Wednesday 25 March
10.00-11.10: Who is afraid of the black-box? Explanation, transparency, and computational reliabilism: Juan M. Durán (Delft University of Technology)
11.10-12.20: Opportunities and Challenges for AI in Healthcare: From Translational Discovery Science to Personalised Medicine: Kourosh Saeb-Parsy (University of Cambridge)
13.30-14.40: How Should Clinicians Use Machine Learning Models? Abhishek Mishra (Uehiro Centre for Practical Ethics, University of Oxford)
14.40-15.50: Trusting Humans and Trusting Machines: Kevin Baum, Markus Langer and Sarah Sterz (Saarland University)