The time has come when artificial intelligence makes decisions that affect our lives: whether we are chosen for a job, whether we are granted a mortgage, whether a university accepts us, whether we can cross a border, whether the police should watch us before we commit any crime, the price of a health insurance policy. The algorithm becomes a judge that allows no appeal. Why won't my credit card limit be raised? Because the computer won't let me, the bank clerk, still flesh and blood, will say. And can I know why the computer won't allow it? Don't even try to understand how it works, will be the answer.
A great debate around artificial intelligence is whether it is free of prejudice (one would expect so, since a program is supposed to respond to objective data) or whether, on the contrary, machines reproduce our quirks because they learn from us, who have them. The conclusion of those who have studied it is that, even when they learn on their own, AI takes on the biases of the society it analyzes. Historical biases: of sex, of class, ethnic, xenophobic.
This is not speculation; it has been studied. The 2018 report Gender Shades found that facial recognition programs failed more often with women and with ethnic minorities: IBM's Watson system was wrong up to 35% of the time for Black women, but only 1% of the time for white men. The company created a team to correct Watson; it has reduced the errors, but has not been able to eradicate them.
A more delicate matter is artificial intelligence applied to defense or to public order. In war, drones are already in use that not only strike a target but choose it. Police forces are beginning to profile suspects with AI, trying to anticipate crimes that no one has yet committed.
We can improve the programs and free them from our deep-rooted misgivings toward what is different, but a basic problem remains: which data the machine uses to learn to make decisions. If it is historical crime data in the US, for example, it drags along centuries of racist bias, and not only there. If it is data on economic solvency, anywhere, it will show the effect of centuries of patriarchy.
Catherine D’Ignazio, a professor at MIT and author of the book Data Feminism, is clear about it: “The data will never be neutral, because it is never ‘raw’ data. It is produced by human beings who come from certain places, have their own identities and particular histories, and work in specific institutions.”
The fundamental question is whether artificial intelligence should reflect how we are or, better, how we want to be.