Tech industry leaders have expressed support for the need to regulate artificial intelligence, but they are also lobbying hard to ensure that new rules work in their favor.
That doesn’t mean everyone wants the same thing.
Facebook parent Meta and IBM today launched a new group called the AI Alliance, which advocates an “open science” approach to AI development, putting them at odds with rivals Google, Microsoft and OpenAI, the creator of ChatGPT.
These two opposing camps — the closed and the open — disagree over whether AI should be built in a way that makes the underlying technology widely accessible. Security is at the center of the debate, but also who can benefit from AI advances.
Open source advocates favor an approach that is “not proprietary and closed,” says Darío Gil, a senior vice president at IBM who heads its research department. “So that it’s not something that’s locked in a barrel and no one knows what it is.”
What is open source AI?
The term “open source” derives from a decades-long practice of building computer programs whose code is free or widely accessible for anyone to examine, modify, and develop.
Open source AI involves more than just code, and computer scientists disagree on how to define it based on what components of the technology are publicly available and whether there are restrictions limiting its use. Some use the term “open science” to describe a broader philosophy.
The AI Alliance—led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel, and several universities and AI startups—“is organizing to make the case, in a nutshell, that the future of AI will be built fundamentally on the open scientific exchange of ideas and on open innovation, including open source and open technologies,” Gil said in an interview with The Associated Press ahead of the announcement.
Part of the confusion about open source AI stems from the fact that, despite its name, OpenAI—the company behind ChatGPT and the image generator DALL-E—builds AI systems that are decidedly closed.
“To state the obvious, there are commercial and short-term incentives against open source,” said Ilya Sutskever, chief scientist and co-founder of OpenAI, in a video interview hosted by Stanford University in April. But there is also a longer-term concern: the possibility of an AI system with “mind-blowingly powerful” capabilities that would be too dangerous to make publicly accessible.
To illustrate the dangers of open source, Sutskever posed the hypothetical example of an AI system that learns to run its own biological laboratory.
Is it dangerous?
According to David Evan Harris, a researcher at the University of California at Berkeley, even current AI models pose risks and could be used, for example, to intensify disinformation campaigns aimed at disrupting democratic elections.
“Open source is really a marvel in many dimensions of technology,” but AI is different, Harris said.
“Anyone who has seen the movie ‘Oppenheimer’ knows this: when great scientific discoveries are being made, there is every reason to think twice about sharing the details of all that information widely, so that it doesn’t end up in the wrong hands,” he said.
The Center for Humane Technology, a longtime critic of Meta’s social media practices, is among the groups calling attention to the risks of open source or leaked AI models.
“As long as there are no safeguards, it is totally irresponsible to make these models available to the public,” says Camille Carlton of the group.
Is it alarmism?
An increasingly public debate has emerged about the benefits or dangers of taking an open source approach to AI development.
Yann LeCun, chief AI scientist at Meta, took aim at OpenAI, Google and the startup Anthropic on social media this fall for what he described as “massive corporate lobbying” to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology’s development. The three companies, along with OpenAI’s key partner Microsoft, have formed their own industry group called the Frontier Model Forum.
“In a future where AI systems are poised to become the repository of all human knowledge and culture, we need the platforms to be open source and freely accessible so that everyone can contribute to them,” LeCun wrote. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”
For IBM, an early proponent of the open-source Linux operating system in the 1990s, the dispute is part of a much longer competition that predates the rise of AI.
“It’s kind of a classic regulatory capture approach to try to raise fears about open source innovation,” said Chris Padilla, who leads IBM’s global government affairs team. “I mean, this has been Microsoft’s model for decades, right? They were always opposed to open source programs that could compete with Windows or Office. They’re taking a similar approach here.”
What are governments doing?
It was easy to overlook the “open source” debate in the discussion surrounding President Joe Biden’s sweeping executive order on AI.
The order described open models under the technical term “dual-use foundation models with widely available weights” and said they needed further study. Weights are the numerical parameters that determine how an AI model performs.
When those weights are posted on the internet, “there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” the order says. Biden gave U.S. Commerce Secretary Gina Raimondo until July to consult experts and deliver recommendations on how to manage the potential benefits and risks.
The European Union has less time to figure it out. In negotiations that came to a head Wednesday, officials working to finalize sweeping AI regulations are still debating a number of provisions, including one that could exempt certain “free and open source AI components” from rules affecting commercial models.