Can private companies pushing the boundaries of a revolutionary new technology be expected to act in the interests of both their shareholders and the wider world? When we were recruited to OpenAI’s board of directors (Tasha in 2018 and Helen in 2021), we were cautiously optimistic that the company’s innovative approach to self-governance could offer a model for the responsible development of artificial intelligence (AI). But our experience taught us that self-governance cannot reliably withstand the pressure of profit incentives. Given AI’s enormous potential for both positive and negative impact, it is not enough to assume that such incentives will always align with the public good. For the rise of AI to benefit everyone, governments must begin building effective regulatory frameworks now.
If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it was OpenAI. The organization was originally founded as a nonprofit with a laudable mission: to ensure that artificial general intelligence (AGI), meaning AI systems generally smarter than humans, would benefit “all of humanity.” A for-profit subsidiary was later created to raise the necessary capital, but the nonprofit remained in charge. The stated purpose of this unusual structure was to protect the company’s ability to stick to its original mission, and the board’s mandate was to uphold that mission. It was unprecedented, but it seemed worth trying. Unfortunately, it didn’t work.
Last November, in an effort to salvage that self-regulatory structure, OpenAI’s board dismissed its CEO, Sam Altman. The board’s ability to uphold the company’s mission had become increasingly constrained by long-standing patterns of behavior exhibited by Altman that, among other things, we believe undermined the board’s oversight of key decisions and internal safety protocols. Multiple senior leaders had privately shared grave concerns with the board, saying they believed that Altman cultivated “a toxic culture of lying” and engaged in behavior that could be “characterized as psychological abuse.” According to OpenAI, an internal investigation found that the board had “acted within its broad discretion” in dismissing Altman, but it also concluded that his conduct did not “mandate removal.” OpenAI offered few specifics to justify this conclusion, and it did not make the investigation report available to employees, the press or the public.
Whether such behavior should generally “mandate removal” of a CEO is a debate for another time. But in OpenAI’s specific case, given the board’s duty to provide independent oversight and protect the company’s public-interest mission, we stand by the decision to remove Altman. We also believe that developments since his return to the company, including his reinstatement to the board and the departure of senior safety-focused talent, bode ill for OpenAI’s experiment in self-governance.
The broader lesson of our story is that society must not let the rollout of AI be controlled solely by private technology companies. There are certainly many genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud them. But even with the best of intentions, this kind of self-regulation will prove unenforceable without external oversight, above all because of the pressure of immense profit incentives. Governments must play an active role.
In recent months, a growing chorus of voices, from Washington lawmakers to Silicon Valley investors, has advocated minimal government regulation of AI. They often draw parallels with the laissez-faire approach taken toward the internet in the 1990s and the economic growth that approach stimulated. But the analogy is misleading.
Within AI companies, and throughout the broader community of researchers and engineers in the field, the high stakes (and high risks) of developing increasingly advanced AI are widely acknowledged. In Altman’s own words: “Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history.” The level of concern that many leading AI scientists have expressed about the technology they themselves are building is well documented, and it differs sharply from the optimistic attitudes of the programmers and network engineers who built the early internet.
It is also far from clear that light-touch regulation of the internet has been an unalloyed good for society. Certainly, many successful tech companies (and their investors) have benefited enormously from the lack of restrictions on online commerce. It is less obvious that societies have struck the right balance when it comes to regulating to curb misinformation and disinformation on social media, child exploitation and human trafficking, and a growing youth mental-health crisis.
Goods, infrastructure and society are all improved by regulation. It is thanks to regulation that cars have seat belts and airbags, that we don’t worry about whether our milk is contaminated, and that buildings are constructed to be accessible to everyone. Judicious regulation could likewise ensure that the benefits of AI are realized responsibly and shared more broadly. A good starting point would be policies that give governments greater visibility into how cutting-edge AI is progressing, such as transparency requirements and incident tracking.
Of course, regulation has its own hazards, and these must be managed. Poorly designed regulation can place a disproportionate burden on smaller companies, stifling competition and innovation. It is crucial that policymakers act independently of the leading AI companies when developing new rules. They must watch for legal loopholes, regulatory “moats” that shield early movers from competition, and the potential for regulatory capture. Indeed, Altman’s own calls for AI regulation must be understood in the context of these hazards, since they may serve his company’s own ends. An appropriate regulatory framework will require agile adjustment, keeping pace with the world’s growing understanding of AI’s capabilities.
Ultimately, we believe in AI’s potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment: to develop increasingly capable AI while prioritizing the public good over profit. Our experience is that even with every advantage, self-governance mechanisms like those OpenAI employed are not sufficient. It is therefore essential that the public sector be closely involved in the technology’s development. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.
Helen Toner and Tasha McCauley served on the OpenAI board of directors from 2021 to 2023 and 2018 to 2023, respectively.
© 2024 The Economist Newspaper Limited. All rights reserved