Artificial intelligence (AI) promises to radically transform people's lives. Its development and democratization will certainly increase wealth. But used properly, it can also improve healthcare, reduce working hours, and even make a difference in the fight against climate change. At least, that is what numerous companies dedicated to its development hope, along with leading tech figures such as Microsoft co-founder Bill Gates and Sam Altman, CEO of OpenAI and the person most responsible for the flood of generative tools that have arrived online in recent months. All thanks to the resounding success of the ChatGPT conversational bot, which has accelerated the development of the technology to unsuspected heights.
Despite all the good it is expected to bring, the use and development of this technology can also pose risks when applied by people or organizations, according to Richard Benjamins, head of AI at Telefónica. On the one hand, he highlights the possible use of artificial intelligence by cybercriminals to deceive society by generating fake audio and video.
Likewise, another risk lies in the negative, unintended effects of the biases that models can absorb from their training data, which can end up violating people's rights on the basis of minority status, race, creed, or gender, among other factors. Such biases can influence, for example, the granting of financial credit or the prediction of crime in specific neighborhoods. A third risk is applying AI only for profit rather than promoting its development from an ethical point of view.
"Applying AI only for commercial purposes and not to solve the big problems of our time, such as climate change, pandemics, natural disasters, poverty, and inequality, is a danger. Currently, investment in commercial AI is enormous, while investment in social or environmental issues is negligible and sustained by philanthropy," says the executive of the Spanish telecom company.
To minimize all these dangers, companies, scientists, and academics alike have for months been demanding that states regulate the technology. "If this goes wrong, it could go very wrong," Sam Altman warned last spring during a hearing before the United States Senate. The executive behind the development of ChatGPT has even gone so far as to suggest that AI development may eventually produce an algorithm capable of surpassing any human being in knowledge, and that this could end up becoming a danger to the survival of humanity. For the moment, however, it is far from clear that this threat could become real.
"The fact that artificial intelligence is advancing does not mean that it can, for example, develop a tendency to dominate the world. We humans are the ones who create it and guide it. It is a tool," says Juan Bernabé-Moreno, director of IBM Research in Europe. Nor does the researcher believe that, for the moment, the creation of an AI of this kind is possible: "We are not at that point. Some glimmers of reasoning can be seen in the data, but it remains an area of research. I think there is still time." In his opinion, we must worry about controlling real dangers before hypothetical ones, and about preparing society, which will be essential if the technology is not to become a problem. Especially in the world of work.
Explaining the decisions
The Spanish government's goal is for 25% of national companies to be using AI and Big Data in their businesses by 2025. This will lighten workloads and make companies more productive. However, it is essential that the entire process happen transparently. If, for example, a machine decides whether a person gets access to a certain job, or a promotion, that person must be able to understand how the tool making the decision works. And that is only possible with transparency.
"When a business uses AI, it must explain why, and what the benefits of that system are. Also the principles it follows in using it, such as the pursuit of fairness. And, obviously, it has to explain how the system makes decisions, so that it is clear it does not unfairly favor anyone," says Juan Ignacio Rouyet, professor of AI at the International University of La Rioja. Rouyet acknowledges that AI is going to transform the labor market.
According to Goldman Sachs, some 300 million jobs will be affected. Other studies suggest that the development of these tools will also give rise to new jobs. "Jobs will be destroyed and created; there will be a little of both," he says. Whether the embrace of the technology becomes a problem will depend largely on how committed companies, and especially governments, are to "training citizens."
To minimize the risks of using the technology, regulation is indeed essential. A few days ago, the EU reached a provisional agreement on its AI Act, which is expected to come into force within two years. The regulation will control the uses of the technology and force developers to be more transparent about how it works. It also limits the use of facial recognition in public spaces and outright prohibits tools designed to deliberately manipulate users to their detriment.