Can you teach a computer not to lie unless the Gestapo is at the door? And how does a machine know it is not dealing with ordinary police officers, who in democratic societies are called your best friend?
Well, that should still be possible, says artificial intelligence professor Jan Broersen. It is actually a fairly simple example of an exception that overrides a general rule. “You already have formal reasoning systems that can deal with rules that you must also be able to break in certain situations. There are much more complicated examples, whole systems of rules that interact. But in theory all of that can be programmed in.”
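What such an exception-overriding rule might look like can be sketched in a few lines of code. This is only an illustration of the general idea of defeasible rules, not Broersen’s own formalism; the rule structure and names are hypothetical:

```python
# Defeasible rules: every applicable rule is collected, and the one with
# the highest priority wins — so a specific exception can override the
# general default without deleting it.
def decide(situation, rules):
    """Return the conclusion of the highest-priority applicable rule."""
    applicable = [r for r in rules if r["condition"](situation)]
    return max(applicable, key=lambda r: r["priority"])["conclusion"]

rules = [
    {"condition": lambda s: True,                    # default: always applies
     "priority": 0, "conclusion": "tell the truth"},
    {"condition": lambda s: s.get("asker") == "Gestapo",
     "priority": 1, "conclusion": "lie"},            # exception beats default
]

print(decide({"asker": "friend"}, rules))   # tell the truth
print(decide({"asker": "Gestapo"}, rules))  # lie
```

Real systems of defeasible or deontic logic handle far richer interactions between rules, but the core mechanism is the same: rules give direction, and context decides which rule takes precedence.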
The extent to which this makes ‘moral’ computers possible, automated systems that learn to make their own choices – and to bear responsibility – is the focus of research by Broersen, who gave his inaugural lecture at Utrecht University at the end of March. Trained as a mathematician in Delft, he wants to use logic to try to develop a moral calculus for AI systems.
Such ‘deontic logic’ (from the Greek deon: that which is appropriate or mandatory) is badly needed, he believes, for a society that makes increasing use of artificial intelligence. “Everyone is talking about AI and ethics, but hardly anyone is doing anything about it. At least, it is usually approached as a social or legal problem. Do we really want self-driving cars on the road? How do we fit that in legally? When are manufacturers liable?”
All very important, says Broersen, but why not look at whether moral responsibility can be programmed into AI systems themselves? “I want to know: how do you operationalize existing theories about ethics and put them in a machine, so that it knows how to deal with situations in which moral considerations matter?”
Suppose you can save those ten people by pushing one other onto the rails
Just explain, how do you put ethics into machines?
“It starts, of course, with the very different moral theories. Philosophers have been working on this endlessly. Some theories are easier to imagine automating than others. Take utilitarianism: roughly speaking, the view that you should do what produces the greatest benefit or happiness for as many people as possible. That is a fairly quantitative approach that you can easily capture in a program. It fits in with how computer scientists already think about intelligence: as a procedure for choosing actions from a series of options with a clear goal.”
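The utilitarian procedure Broersen describes really is easy to capture in a program. A minimal sketch, with hypothetical welfare numbers, of “choose the action with the greatest total benefit”:

```python
# Utilitarian choice as a procedure: every option lists the welfare
# effect on each person affected; pick the option with the highest sum.
def utilitarian_choice(options):
    """options maps an action name to a list of per-person welfare effects."""
    return max(options, key=lambda action: sum(options[action]))

options = {
    "do nothing": [-10, -10],        # two people harmed, total -20
    "intervene":  [-10, +5, +5],     # one harmed, two helped, total 0
}
print(utilitarian_choice(options))   # intervene
```

The simplicity is exactly the point of the interview’s next question: whether morality really reduces to such a sum.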
Then the ‘trolley problem’ does arise. Do you let a runaway trolley race toward ten people, who will certainly be killed, or do you flip the switch to another track where only one person is killed?
“Yes, there are many variants. You can see that for many people utilitarianism is an unsatisfactory moral theory. Is morality really such a simple calculation: ten victims or one? People also distinguish between doing nothing and actively intervening, such as pulling the switch. This has consequences for how we perceive responsibility. And suppose you can save those ten people by pushing one other person onto the rails. That’s something else. We sense that intuitively – and no, a computer does not.”
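The push variant shows where the pure body count breaks down: both flipping the switch and pushing a person cost one life, so a utilitarian sum ranks them identically. A sketch, with an invented side-constraint flag, of why an extra non-quantitative ingredient is needed:

```python
# A pure utilitarian count cannot tell the two trolley variants apart:
# each trades ten deaths for one.
def utilitarian_rank(action):
    return -action["deaths"]             # fewer deaths = better

# One way to encode the intuitive difference is a deontological side
# constraint: an action that uses a person as a means is ruled out
# regardless of its outcome. (The flag is illustrative, not a theory.)
def constrained_rank(action):
    if action["uses_person_as_means"]:
        return float("-inf")             # forbidden, whatever the count
    return -action["deaths"]

flip_switch = {"deaths": 1, "uses_person_as_means": False}
push_person = {"deaths": 1, "uses_person_as_means": True}

print(utilitarian_rank(flip_switch) == utilitarian_rank(push_person))  # True
print(constrained_rank(push_person) < constrained_rank(flip_switch))   # True
```

The hard part, of course, is the flag itself: deciding when an act “uses a person as a means” is exactly the kind of judgment Broersen says machines lack.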
Rules give direction, but you also have to be able to break them depending on the context
What is possible then?
“The moral calculus I advocate does not primarily think in terms of desired outcomes – such as: the least number of victims – but in following rules. Moral behavior is rule-guided behaviour. Which rules apply in a situation and which rule should take precedence over the others? We also teach children how to behave. Rules provide direction, but you also have to be able to break them depending on the context. It must be possible to operationalize this in a formal system.”
Yet in your inaugural lecture you are skeptical about the possibility of strong AI, teaching machines ‘real’ human intelligence. Why? You cite Wittgenstein as support. But he also says: we follow rules blindly, without thinking about it. A computer can do that too, right?
“I think something is still missing, namely the moral source. With us it is reason and community, both make us human. We learn and test our moral insights and intuitions against each other, we interpret rules, we nuance them. Machines don’t have that. You can program a lot, but not such moral intuitions.”
Can’t they develop it themselves while learning?
“No, in my view such a system will always miss something. We remain the ones who determine how a rule should be interpreted. A machine ultimately does nothing but follow instructions that we have put into it. Incidentally, this applies to the machines we use now. It cannot be ruled out that it will one day be possible. Take quantum computing: automation based on insights from quantum mechanics. If you start to understand intelligence and moral choices in a quantum way, it will be a different story. But that is very speculative. That area is open, we still understand very little of it. I am personally a non-determinist; I think that reality is not completely fixed by law. But I do believe that in the end we are machines. Just not the kind of machines we now call computers.”
That car won’t understand what to do when fellow road users honk the horn
More practical: how can ethical logic help the tax authorities to prevent a new allowance scandal?
“I don’t think that has much to do with AI. Statistical connections were simply made in a way that we do not find desirable. You enter cases, ‘yes/no fraud’, and then such a computer starts learning. It then looks for correlations between characteristics of people or files, and you can no longer turn that off. You could also do it differently and draw up rules in advance about how a computer may search. Then you can program it in such a way that it weighs certain characteristics differently or does not include them at all. As it is now, you don’t know how the system learns – and you can’t correct it either. Well, only afterwards, in the House of Representatives.”
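Drawing up rules in advance about how a system may search can be as simple as screening which attributes a model is ever allowed to see. A minimal sketch; the attribute names are hypothetical examples, not the tax authority’s actual data fields:

```python
# Instead of letting a learner mine arbitrary correlations, specify in
# advance which attributes it may use — and which it must never see.
ALLOWED = {"declared_income", "num_previous_corrections"}
FORBIDDEN = {"nationality", "second_citizenship", "surname"}  # hypothetical

def screen_features(case):
    """Drop everything but the allowed attributes before a model sees the case."""
    assert FORBIDDEN.isdisjoint(ALLOWED)  # sanity check: lists don't overlap
    return {k: v for k, v in case.items() if k in ALLOWED}

case = {"declared_income": 31000, "nationality": "NL",
        "num_previous_corrections": 0}
print(screen_features(case))
# {'declared_income': 31000, 'num_previous_corrections': 0}
```

Such an allow-list makes the constraint inspectable and correctable before deployment, rather than after a parliamentary inquiry. It does not by itself prevent proxy variables that correlate with forbidden ones, which is why the rules themselves still need human scrutiny.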
The self-driving car has eyes, a memory, can make choices. Isn’t that strong AI?
“No, I don’t think so. That car can’t have emotions or see meaning the way we do, not with our current computers. It won’t understand what to do when fellow road users respond to a sign that says ‘honk if you’re happy’, as you see in America. You can of course program something in, but the behavior will still be different. No tailgating – yes, you can teach it that.”