The large artificial intelligence companies accept the European regulation approved at the stroke of midnight this past Friday, but maintain that it does not represent a brake on their development. As Pilar Manchón, advisor to the Spanish Government's advisory committee and head of AI research strategy at Google, put it, "AI is too important not to regulate it." The main companies behind these developments have been working in parallel with the negotiation of the standard to ensure an ethical evolution of these tools, so the rule coincides with their general expectations provided that, as Christina Montgomery, vice president and chief privacy and trust officer at IBM, warns, it "provides protective barriers for society while promoting innovation."
Until now, technology companies tended to leave the limits of their developments to self-regulation. They all have ethical principles, which Manchón summarizes: "Do good things and make sure they will have a positive impact on the community, on society, on the scientific community. And, if something could potentially be used for a purpose other than what you designed it for, make sure to take all necessary precautions and mitigate the risks. So: do only good, innovate, be bold, but be responsible."
However, this formula has proven to be wholly insufficient in areas such as social networks. According to Global Witness, an NGO with 30 years of experience investigating and monitoring respect for human rights, "these companies prefer to protect their lucrative business model over adequately moderating content and protecting users."
To prevent these dysfunctions from recurring with artificial intelligence, some of the main companies and institutions welcome the standard and propose their own formulas to guarantee compliance with the principles it includes.
Along these lines, some fifty organizations — companies including IBM, Meta, AMD, Intel and Dell; universities such as Imperial College London, Cornell, Boston, Yale and Harvard; and entities such as NASA and the NSF — have formed the AI Alliance to develop AI that is open, safe and responsible.
“Greater collaboration and information sharing will help the community innovate more quickly and inclusively, identifying specific risks and mitigating them before releasing a product to the world,” the signatories state. To do this, its working groups will establish their own standards and “partner” with the initiatives of governments and other organizations. “This is a crucial moment in defining the future of AI,” warns Arvind Krishna, president of IBM. “We can help ensure that the transformative benefits of responsible AI are widely available,” adds Lisa Su, CEO and president of AMD.
The members of the alliance, which currently include neither OpenAI, the developer of ChatGPT, nor Google, which has just presented Gemini (a model with capabilities it claims surpass those of people), advocate collaboration, between companies and with governments, to follow a common path. As Tom Mihaljevic, president of Cleveland Clinic, one of the most advanced medical institutions in the use of new technologies, explains, "AI capabilities are now constantly growing and improving and it is essential that organizations from various fields come together to help move forward while addressing concerns around safety and security."
Bob Shorten, head of the Dyson School of Engineering at Imperial College London, also defends this approach: "We believe that community participation is essential for AI to be reliable, responsible, transparent and auditable" — the very principles upheld by the European standard.
This community includes governments, industries, academic institutions and researchers aligned on ethical development. But, as Manuel R. Torres, professor of Political Science at Pablo de Olavide University and member of the advisory board of the Elcano Royal Institute, explains: "The problem is the proliferation of a technology that must be prevented from falling into the wrong hands."
Torres praises the European role as a "regulatory power," but warns: "The conflict lies in how this technology is developed in other regions that have no scruples or limitations whatsoever regarding respect for the privacy of the citizens whose data feeds all of this."
He cites China as an example: "Not only is it in that technological race, but it has no problem massively using the data left by its own citizens to feed and perfect these systems. No matter how scrupulous we want to be with the limits we place on our local developers, in the end, if this does not happen globally, it is also dangerous."
Wu Zhaohui, China’s vice minister of science and technology, said last November at the UK’s AI security summit that his government is “willing to increase collaboration to help build an international governance framework.”
But legislation, by itself, is insufficient. After the approval of the European standard, the key will be "permanent supervision," adds Cecilia Danesi, a lawyer specialized in AI and digital rights, professor at the Pontifical University of Salamanca and other international universities, science communicator and author of The Empire of Algorithms (Galerna, 2023).
For Danesi, also a member of UNESCO's Women for Ethical AI (Women4Ethical AI) group, monitoring is essential: "These are high-risk systems that can considerably affect human rights or security. They have to be evaluated and reviewed to verify that they do not violate rights, that they do not have biases. And it must be done continuously because systems, as they keep learning, can acquire a bias. We must act preventively to avoid damage and build systems that are ethical and respectful of human rights."
Not everyone agrees: 150 executives from European companies, such as Airbus, Ubisoft, Renault, Heineken, Dassault, TomTom, Peugeot and Carrefour, opposed regulation of the sector in Europe. In June they signed an open letter against regulation in the EU, arguing that the rule would affect "Europe's competitiveness and technological sovereignty without effectively addressing the challenges we face and will face."