Europe has taken a step forward with the approval of the world's first comprehensive regulation of artificial intelligence. The law categorizes applications of the technology according to their risk and provides for harsh penalties for violators, reaching 35 million euros or 7% of global turnover at the top end, and 7.5 million euros or 1.5% at the lowest tier. The EU establishes a transition period before the law fully applies in 2026, time that companies will have to use to ensure their developments comply with it. Giants such as IBM, Intel and Google, which favor regulation, have built platforms and systems to guarantee that artificial intelligence, whose advance they see as unstoppable, is developed with ethical, transparent and bias-free criteria. In this way, companies offer formulas for complying with the first artificial intelligence law, the European AI Act.
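In rough terms, the structure of the regulation fits in a few lines of code. The sketch below is purely illustrative, not legal advice: the tier examples and the "whichever is higher" penalty rule reflect the published text of the AI Act, but the names and the function itself are hypothetical.

```python
# Illustrative sketch of the AI Act's risk tiers and penalty caps.
# Names are hypothetical; tier examples follow the published regulation.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (e.g. social scoring)"
    HIGH = "strict obligations (e.g. recruitment, medical uses)"
    LIMITED = "transparency duties (e.g. chatbots must disclose they are AI)"
    MINIMAL = "no specific obligations"

def max_fine(annual_turnover_eur: float, severe: bool) -> float:
    """Penalty cap: 35M EUR or 7% of worldwide turnover for the most
    severe breaches, 7.5M EUR or 1.5% at the lowest tier -- in each
    case whichever amount is higher."""
    if severe:
        return max(35_000_000, 0.07 * annual_turnover_eur)
    return max(7_500_000, 0.015 * annual_turnover_eur)

# For a company with 1 billion euros in turnover, the cap is 70M euros.
print(f"{max_fine(1e9, severe=True):,.0f}")
```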
The technology consultancy Entelgy points out three keys that companies must take into account: those that handle personal, medical or recruitment data, or data used for decision-making, must disclose in a European registry how their algorithms work and the content they generate; although it is not mandatory, establishing human-oversight mechanisms is recommended; and large language models (LLMs) will have to incorporate security systems, with developers obliged to be transparent about the copyrighted material they use.
“We must ensure that the technology we develop is created responsibly and ethically from the start. It is a great opportunity, but it also poses challenges,” warns Christina Montgomery, vice president and chief privacy and trust officer at IBM. Unlike companies that favor unrestricted development (150 executives of European companies have positioned themselves against the rule), IBM is committed to “intelligent regulation that provides protective barriers for society, while promoting innovation.” Greg Lavender, chief technology officer at Intel, another giant in the sector, agrees: “Artificial intelligence can and should be available to everyone so that it can be deployed responsibly.”
Both companies have developed their own platforms to ensure that development follows standards that governments and companies alike increasingly consider necessary.
IBM's solution is Watsonx.governance, a platform that covers ethical data handling, risk management and regulatory compliance. “It has been developed to help organizations apply AI responsibly, adhere to today's policies, and be ready for tomorrow's regulation,” Montgomery details.
82% of company leaders have adopted or implemented AI or plan to do so in the next year
Survey of European business leaders
Ana Paula Assis, president and general director of IBM for Europe, the Middle East and Africa, defends the need for these tools on the basis of a survey of 1,600 business leaders in Germany, France, Italy, Spain and Sweden. According to the results, 82% of executives have adopted or implemented AI or plan to do so in the next year, and almost all (95%) do so, or intend to, because it is effective for decision-making and for management and commercial strategy. According to Hazem Nabih, technology director for the Middle East at Microsoft, “the productivity of any company increases between 30% and 50%.”
But this tool of enormous potential faces challenges: the need for an ethical framework and for new skills, and higher costs to ensure that its development is not only effective but also fair (free of bias), transparent (explainable and measurable), and protective of security and privacy.
IBM's proposal can be used by any company, regardless of the model deployed, whether open source, developed in-house or built by third parties. “Our strategy and our architecture are open, hybrid and multi-model, in the sense that we are really giving clients the flexibility to deploy our solutions in the environments that work best for them,” explains Assis.
The other giant's solution, Intel Trust Authority, starts from a similar philosophy: “an open, developer-focused ecosystem to ensure that the opportunities of artificial intelligence are accessible to everyone.” “They are tools that streamline the development of secure AI applications and facilitate the investment necessary to maintain and scale these solutions in order to bring AI everywhere,” says the company's chief technology officer.
“If developers are limited in their choice of hardware and software, the range of use cases for AI adoption on a global scale will be restricted and likely limited in the social value they are able to deliver,” explains Lavender.
Intel's strategy is not aimed only at large companies. During its Innovation 2023 event, it also launched the AI PC Acceleration Program, an initiative designed to accelerate the pace of artificial intelligence development on personal computers (PCs).
The program aims to connect independent hardware and software vendors with Intel resources, including AI tools, co-engineering, hardware, design resources, technical expertise and commercialization opportunities. “These resources will help accelerate new use cases and connect the industry at large with AI solutions,” says the company. Program partners include Adobe, Audacity, BlackMagic, BufferZone, CyberLink, DeepRender, MAGIX, Rewind AI, Skylum, Topaz, VideoCom, Webex, Wondershare Filmora, XSplit and Zoom.
We have a full set of controls to ensure that, for businesses using Vertex, your data is yours and no one else's. It is not leaked and it is not shared with anyone, not even with Google
Thomas Kurian, head of Google Cloud
Google has developed specific protection systems for Gemini, its latest artificial intelligence model, covering aspects such as the protection of personal data that the new rule will require. “We have a full set of controls to ensure that, for businesses using Vertex AI, your data is yours and no one else's. It is not leaked and it is not shared with anyone, not even Google. There is a long list of controls and compliance and audit capabilities that Vertex provides,” explained Thomas Kurian, head of Google Cloud, at the presentation of Gemini's developer tools.
Biases
One of the biggest challenges is bias: deficiencies embedded in algorithms that can spread throughout an artificial intelligence system, underestimating the complexity of human beings. Two papers by researchers at Sony and Meta, presented at the International Conference on Computer Vision (ICCV), propose ways to measure bias and so verify the diversity of the data used not only to make decisions but also to train machines.
William Thong, an AI ethics researcher at Sony, describes the proposal in MIT Technology Review: “It is used to measure bias in computer systems, for example by comparing the accuracy of AI models for light- and dark-skinned people.”
Sony's tool expands the scale of skin tones recognizable by a computer so that it captures not only whether skin is light or dark, but also its hue.
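In practice, measuring skin color on two axes can be as simple as converting skin pixels into a perceptual color space and reading off lightness and hue. The sketch below is a minimal illustration of that idea, not Sony's released tool; the function name and the sampling approach are assumptions.

```python
# Minimal sketch (not Sony's code): score apparent skin color on two
# axes -- perceptual lightness (light vs. dark) and hue (red vs. yellow).
import numpy as np
from skimage import color  # pip install scikit-image

def skin_color_score(skin_pixels_rgb: np.ndarray) -> tuple[float, float]:
    """skin_pixels_rgb: (N, 3) array of RGB values in [0, 1], sampled
    from the skin regions of one face image."""
    # Convert to CIELAB, a color space designed to match human perception.
    lab = color.rgb2lab(skin_pixels_rgb.reshape(1, -1, 3)).reshape(-1, 3)
    L, a, b = lab.mean(axis=0)                 # average Lab coordinates
    lightness = L                              # ~0 (dark) to ~100 (light)
    hue_angle = np.degrees(np.arctan2(b, a))   # lower = redder, higher = yellower
    return lightness, hue_angle
```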
To streamline bias assessments, Meta has also developed a tool, Fairness in Computer Vision Evaluation (FACET). According to Laura Gustafson, an AI researcher at the company, the system is based on 32,000 images of people, labeled by human annotators according to 13 perceptible attributes, such as age (young or old), skin tone, gender, and hair color and texture, among others. Meta has made the data freely available online to help researchers.
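To see what such a benchmark enables, here is a minimal sketch of a disaggregated evaluation: scoring a model separately for each perceived-attribute group so that accuracy gaps become visible. The column names and toy data are hypothetical; this is not Meta's code or the actual FACET file format.

```python
# Minimal sketch of disaggregated evaluation over labeled attributes.
import pandas as pd

# Each row: one annotated image and whether the model got it right.
df = pd.DataFrame({
    "perceived_skin_tone": ["light", "light", "dark", "dark", "dark"],
    "correct":             [True,   True,    False,  True,   False],
})

# Accuracy per group; the spread between groups is a simple bias signal.
per_group = df.groupby("perceived_skin_tone")["correct"].mean()
gap = per_group.max() - per_group.min()
print(per_group)
print(f"accuracy gap between groups: {gap:.2f}")
```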
Widespread uncontrolled use
The importance of caution is highlighted by a recent report from the security firm Kaspersky, based on a survey of Spanish executives: 96% of those surveyed in Spain admit that their employees regularly use generative artificial intelligence, and almost half of the organizations (45%) have no measures in place to control its risks. According to another study by the same company, 25% of those who use generative AI are unaware that it can store information such as IP address, browser type and user settings, as well as data on the most-used functions.
“Generative artificial intelligence systems are clearly growing and the longer they operate unchecked, the more difficult it will be to protect areas of the business,” warns David Emm, principal security analyst at Kaspersky.