The first cohort of professionals arrived in Cambridge this month to kick off the Master’s in AI Ethics and Society, and the atmosphere was electric.
There aren’t many places where big tech professionals and competition lawyers can sit around a table with philosophers, historians and civic planners to hash out policy or dream up far-future feminist design fictions over tea and biscuits. Yet this scene marked the first week-long residential of the new Master’s programme in AI Ethics and Society, which launched at the University of Cambridge this month. The two-year part-time course is part education, part think tank and incubator, intended to help forge tomorrow’s leadership in responsible AI.
The 40 students who turned up in Cambridge on a bright September morning had flown in from across the globe. Each day featured a lively combination of activities including lectures (with experts beaming in from other countries), group discussions, brainstorming, and the opportunity to evade AI catastrophe in a research-based role-playing game.
Members of the cohort included CEOs, bankers, authors, healthcare managers, civil servants, and technology-makers, among others, all of whom carved out time from busy schedules to tackle the toughest AI challenges facing their industries.
As CFI Director Stephen Cave put it with playful brevity in his welcome speech: “We all want to make sure the whole AI thing goes well.”
CFI Director Stephen Cave lectures on the three waves of AI ethics
For the students this will mean spending the next two years interrogating difficult, complex and controversial issues. In fact, when guest speakers arrived to pose provocative questions about democracy, intelligence, warfare, US/China policy and the racialised Whiteness of AI imaginaries, it was clear the course wouldn’t be skipping past the hard bits.
But course advisor Dorian Peters found that participants were quick to see the value in the diversity of informed perspectives and in having a safe space to hash it out. “Normally, we function within these professional silos and ideological bubbles, so being able to step outside of all of that, in a safe way that’s grounded in both openness and rigor, can be very liberating.”
The programme is designed on the premise that beneficial AI will require disciplinary collaboration. “Addressing the big challenges of AI isn’t something that can be done by a single discipline, but requires input from tech, law, philosophy, history, sociology, and more. We’ve built the course around this, and it’s great to see how quickly interdisciplinary idea-sharing has taken off,” explains Henry Shevlin, a philosopher and ethicist who co-leads the course with colleagues Jonnie Penn and Maya Indira Ganesh.
Students seem to appreciate this multidisciplinary approach. In an evaluation survey, participants praised fellow classmates, instructors and the opportunity to be a part of historic change. One professional designer described the first week as “so inspiring” while a CEO called it: “Everything I'd expected and more.”
Course participants discuss AI governance during breakout session
Perhaps more importantly, learning ran deep. As one student said, “I'd previously viewed AI as a 'big picture' problem, but this past week reminded me that even on a micro scale, AI will have a huge impact.” Another noted: “AI presents itself in far more aspects of life than I had considered (even bearing in mind I think about this a lot)”.
Others found exposure to new areas of research exciting: “The whole study of animal intelligence to understand AI better was something that fascinated me. Is Animal Intelligence like human intelligence or an Alternative Intelligence (AI!)?”
But benefits came from the human connections made as well. “I didn’t expect to make friends so quickly,” said one designer, while another noted: “I really enjoyed meeting the course leaders and everyone in my cohort: a great bunch of people from very different backgrounds, but with combined and overlapping interests.”
For this inaugural group, the academic journey will span two years of part-time study and include a deep-dive dissertation on an AI issue of their choice. But the course is just a first step. The long view is about developing conscious leaders who will play a part in a historic transformation: building a future in which AI is a force for good.
As Course Co-leader Maya Ganesh put it, “We’re not going to tell them what the future of AI will look like – it’s their job to make it. We want to give them access to tools, knowledge and networks of people that will help them get it right.”
Course co-lead Maya Indira Ganesh prepares for a group design activity
. . .
Applications for the 2022 intake are now open.
For more information, see the programme website.