The Margaret Boden Lecture 2018: The Future of Intelligence: What matters? Speaker: Professor Margaret A. Boden (University of Sussex).
Inaugural Lecture of the Margaret Boden Lecture series at the Leverhulme Centre for the Future of Intelligence
Date: 5 June 2018
Venue: Cambridge Union Society (part of the Varieties of Mind Conference)
A fundamental difference between AI systems and human beings is that computers can’t care. Human intelligence enables us to achieve our goals and satisfy our needs. Our (widely shared) needs are the basis of our caring, and our (idiosyncratic) goals are accepted only in relation to our needs. Computers, however, have no needs—and, therefore, no real goals either. Decisions about things that matter require human judgment, even if AI’s instrumental reasoning is also involved.

The caring/non-caring difference has two implications for the AI of the future. First, true “collaboration” between people and AI systems is not possible—so working alongside AI systems will not contribute to human wellbeing as normal employment does. (Employment provides many benefits besides money to put food on the table.) Second, AI systems will be largely inappropriate in contexts where person-to-person relationships should be crucial. Decisions about just which contexts those are vary across cultures. AI designers and users, and policy-makers too, need to bear that fact in mind if future AI is to be socially acceptable.

Calls for designing AI with “human wellbeing” in mind normally consider issues such as privacy, human rights, general ethics, and (sometimes) the societal “capabilities” defined by development economics. All these are important. But the psychological needs identified by personality theorists (such as Abraham Maslow) are important too.