Artificial intelligence is not like the atomic bomb

In early 2023 the Future of Life Institute published an open letter asking the world's artificial intelligence laboratories to immediately pause, for at least six months, the training of AI models larger than those already in existence, with OpenAI's GPT-4 as the reference point. The letter urged governments to step in and institute a moratorium. It was signed by thousands of people, including names as prominent as Yoshua Bengio, winner of the 2018 Turing Award, Steve Wozniak, co-founder of Apple, and the well-known historian, thinker and writer Yuval Noah Harari. I published my opinion on the matter shortly afterwards in this same outlet, so I will not comment on it here.

The letter did not warn of an imminent existential danger to the human species, but many people do think that such a danger could materialize and that AI could slip beyond our control, with serious consequences for humanity. In the summer of 2022, much was made of the claims of a Google engineer, Blake Lemoine, who asserted that LaMDA (Language Model for Dialogue Applications), a model developed by his company, was "sentient" and that its "desires" should therefore be respected. Google denied the claims and fired him (Lemoine, that is, not LaMDA, which was neither sentient nor under contract with the company).

We might suspect that Lemoine was not in his right mind if he really believed what he said, but apparently well-informed and insightful people have voiced similar worries. Sam Altman, CEO of OpenAI, the company that created and markets ChatGPT, stated in May 2023 that his worst fear was that AI would go wrong, because "if it goes wrong, it can go very wrong." Leaving aside that this sounds like a phrase from Rajoy in his moments of greatest linguistic creativity, I do not know whether Altman meant things going badly for everyone or just for OpenAI, which will close this year with some $5 billion in losses. In any case, he seemed genuinely concerned when he said it.


Although these and other voices warn us from time to time about AI and point to possible existential risks for humanity if we keep developing it, there is nothing on the foreseeable horizon, certainly not in the short or medium term, to make us think this could happen. AI does not carry the risks that nuclear energy did in its early days, when almost everything about it was unknown. Consider the Manhattan Project, created to develop the atomic bomb. Before the first test, known as "Trinity," some of the scientists involved feared that the explosion could cause the nitrogen nuclei in the atmosphere to fuse, setting off a chain reaction that would destroy life on our planet. Even so, the test was carried out on July 16, 1945. Theoretical calculations indicated that this "atmospheric fire" was highly unlikely, but until the test itself took place, the doubt and the fear were never completely dispelled.

I do not want to underestimate the consequences of an AI that gets out of hand, to use a colloquial expression, but if that happens it will not be because the AI has become conscious. Nor because it is going to subjugate us, as we have done with the rest of living beings. Certainly not yet. The problems we face in the development and use of AI are very different, seemingly less momentous, but they are worrying because they are real and present. They have at least a couple of things in common: it is in our hands to minimize them, if not eliminate them; and, in general, they are neither exclusive to AI nor created by it, although AI can amplify and accelerate their impact.

A frequently discussed example is AI bias. It is not a new problem, although it may now be more visible and worry us more. In fact, the biases of Internet content have been discussed for many years. The Internet reflects a partial reality, closely aligned with the profile of the companies that control AI and with rich countries, particularly English-speaking ones. The pollution and even the toxicity of Internet content are not new either, nor is the systematic violation of copyright, which raises other issues of particular concern that the emergence of generative AI seems to amplify.


Smart technologies have a great social and economic impact, and not everything about them is good, nor does anything go, when it comes to developing and applying them. Moreover, companies are under enormous pressure to make their huge investments in AI profitable, which may tempt them to skip steps between the research laboratory and the market. We must keep this in mind, stay alert to what is happening, and even try to anticipate possible harms and excesses. Just as the research, development, manufacturing and marketing of a medicine follow strict laws and controls, the same must apply to AI, especially to potentially high-risk systems, because such systems do exist.
