Talking for a few minutes with Carissa Véliz, associate professor at the Institute for the Ethics of Artificial Intelligence and fellow at Hertford College, University of Oxford, is a deeply educational yet serene experience. The author of several reference works on surveillance and privacy now teaches companies and governments how to think about the purpose of using AI and the dangers of ignoring its risks, even for the democratic system itself.
“In the end, democracy depends on having reliable information and on citizens being relatively well informed,” Véliz explains to Público, “and if you have a machine whose specialty is inventing stories that are plausible and credible, but not true, and that facilitates the generation of this content at scale, then obviously there is a misinformation problem.”
This renowned expert, a recent star participant in the Leaders with Purpose Forum in Madrid, demands a serious approach to the ethical problems these technologies can generate, because “there are different types of artificial intelligence, different companies, different uses, and there are those who take artificial intelligence seriously and those who do not; we need everyone to take this seriously,” she says. In fact, she herself teaches a course on AI ethics at the Rafael del Pino Foundation focused on the business world.
“Europe once again leads the way in regulation [on artificial intelligence], but I think the laws have to be minimalist,” says Véliz, whose approach to the legislative issue centers on protecting citizens’ rights, beyond any cost-benefit analysis.
“It is normal for technology to move ahead of the law, and I think it is a good approach for legislation to focus on the ends to which a technology is put rather than on the technology in particular, because technology changes all the time,” she comments, although she adds: “It is true that there is a risk that we are not getting it right.”
Because although we have basic but clear regulations about what limits are necessary in the deployment of artificial intelligence, some requirements, such as risk assessment, are nearly impossible to fulfill, since foundational AI models are opaque by nature and almost no one knows how they really work.
Estimating the risks
For Véliz, “it is very difficult to have a real estimate of the risks when these are technologies we have not interacted with before; there is no data to adequately calculate that risk,” although she maintains that analyzing the purposes of the tool, even if it is a multipurpose foundational AI, is a good starting point.
“In the end, AI is a tool, a sophisticated one, but a tool; therefore, the human beings behind those AI engines are responsible for that tool.”
“Technology companies have ignored the law and designed a system illegally”
The expert recalls, to begin with, that “technology companies have ignored the law and designed a system illegally, that is, taking copyrighted data, personal data, everything they have been able to sweep up on the Internet.” So far, “it seems that we have accepted that fact,” and “now we have the contradiction that we are using tools that do not even comply with copyright rules or data protection regulations, even though those rules are still in force.”
About the recent Nobel Prize in Physics laureate Geoffrey Hinton, Véliz tells us a story: “Geoffrey Hinton recently came to Oxford and I had the opportunity to speak with him; during the talk he explained what we are doing with artificial intelligence, something that has become very dangerous and is even designed in a way that really puts democracy, and even science, at risk. I asked him: well, if it really poses such a risk, even an existential one, if you could live your life again, what would you do differently? And he told me he wouldn’t change anything, he would do everything the same.”
Even so, Véliz is optimistic, because she is sure that “there are ways to design technology so that we understand it better, so that it is more responsible.” And the real importance of any new technology must be calibrated: “In the beginning there was a lot of hype among companies, but the reality is that many of them have not found the productivity improvements they expected.”
Saving or losing time
“If the AI invents something that is very complicated to verify, so much time is lost editing and correcting the errors that it would seem better to do the work oneself,” comments this expert, who points out that “at the moment people mainly use AI tools out of curiosity, for fun, to write some unimportant e-mail; it remains to be seen which uses will turn out to be really important.”
The adoption of generative AI tools is spreading across many fields and can save time when generating a document or editing an image. However, the speed of their deployment is creating a new generation that not only uses them carelessly, to write papers in the educational system, for example, but is also losing the habit of verifying whether the information the system returns is true or false.
Véliz has just finished a chapter of her new book Privacy Is Power (Debate, 2024) in which she describes several cases of skills being lost in ways that could cost us dearly. One of them concerns power plants “that, when they are hacked, have to be operated manually, analogically, and the new staff may not have that experience, so they have to call in veterans, already retired, to help.” “But the veterans who can control these systems analogically are not going to be there forever, so when they are gone, what are we going to do?” she asks.
The ethics of good design
During her talk at the Leaders with Future event, Carissa Véliz presented, before a large group of business leaders and experts, several cases that show just how carefully the use of artificial intelligence must be approached. “They are not trained to think, nor, therefore, to know the limits of their own knowledge,” she said. For her, they are generators of statistical responses that “do not seek truth but plausibility, verisimilitude,” that is, to make us believe them.
Ethics is therefore fundamental in the design of these powerful tools, and the issue of personal privacy versus AI is central. “Do not ask an AI to give you your own personal data, the data it used during its training, because it is not going to give it to you, since its way of operating is based on that very data,” she remarked when discussing individual privacy and artificial intelligence, the latter hungry for data in order to function.
“Without privacy there is no true democracy,” said Véliz, who concluded that “the question would be: how much surveillance can a democracy endure?”