Regulate us now, say the heads of artificial intelligence. What we have in our hands is so big, so revolutionary, that the machines will displace us humans, enslave us, perhaps even drive the species to extinction. Regulate us now, they say, like arms companies: we want to be inspected, to operate only under license… What they do not say is: regulate us so as to raise barriers against small companies and against collaborative open-source projects, so that no company or organization can have its own AI system without paying at our till.
Here is what is happening: the champions of AI themselves, led by the man of the moment, OpenAI creator Sam Altman, along with the top executives of Google DeepMind and Anthropic, are in the greatest hurry to have their activity regulated. With this, first of all, they give themselves enormous importance: it is pure marketing. And this even though it is daring to call what these algorithms do intelligence, and something that feeds on us, flesh-and-blood people, is not entirely artificial. The dreaded Artificial General Intelligence, the one that will encompass all of humanity's knowledge and surpass every mortal capability, remains a distant dream (or nightmare). But this field will advance fast; there is no question about that.
The risk is that what is called web3 (decentralized, democratic, free from the control of large corporations) will end up like web2 (the web of social networks), which was also going to empower citizens and only reinforced the oligopoly of digital services. What has happened so far is the "winner takes all" effect, which besides being a nice Abba song is the rule that has led to an excessive concentration of power in a handful of companies. That is why, broadly speaking, Google dominates search; Amazon, e-commerce; Microsoft, operating systems and PC software; Apple, the premium device segment. Facebook (Meta) was one of those winners, almost hegemonic in social networks, but the emergence of rivals like TikTok and its reckless all-or-nothing bet on the metaverse have knocked it out of the elite. Nvidia is now joining the club of trillion-dollar companies, thanks precisely to its advances in AI.
What is at stake is who will be the winner that takes it all in AI. Microsoft, through its alliance with OpenAI (creator of ChatGPT), is well placed; Google is waking up because its search business is under threat; and Nvidia claims its place among the greats with a less high-profile but very solid track record in graphics processing and high-performance computing. That is in the West: the Asian giants will also claim a good slice of the pie.
Should AI be regulated? Of course! Let's not arrive as late as we did with social networks, which today are a jungle. Laws and regulations must protect the rights and privacy of citizens, prevent massive and universal surveillance, head off disinformation and political manipulation campaigns even more effective than those we already suffer, and stop discrimination. In particular, intellectual property will need protecting, because AI swallows all kinds of information that is not its own in order to make it its own. Not only are creators' copyrights at risk, already battered by the piracy of the turn of the century; your data and your personal image are yours, and an application should not be able to appropriate them.
And one of the most delicate questions to settle is which decisions can be entrusted to an AI and which cannot: do we allow machines to decide on hiring, the granting of mortgages, the parole of a prisoner? Do we let autonomous military or police machines choose whether to shoot at a target? These are all very urgent debates, and they must lead to quick decisions. But is it necessary to stipulate, through licenses, that only a handful of large companies may operate artificial intelligence? Quite the opposite: legislation should stimulate competition instead of repeating past mistakes.
Some say we will not be able to regulate AI very far because not even its own engineers fully understand how a machine that learns by itself works. A weak argument: there is no need to dig into the guts of very complex programs; it is enough to examine (evaluate, audit) their results. And, for the moment, a contraption like ChatGPT surprises us with its more or less natural use of language (though it does better in English), but with nothing else. It does not give accurate information, it makes up much of what it says, and it commits gross errors that would be unacceptable in any profession. And AI, as is well known, inherits human biases through the information and parameters it has been fed: prejudices of gender, ethnicity, class and many more.
The catastrophism that imagines a tyranny of machines in a dystopian future sounds terrifying, but it serves more mundane interests. This debate about the apocalypse distracts us from the abuses that these still-rudimentary technologies are already committing, including a not always obvious extraction of other people's talent. Let's regulate AI, by all means. But not at the dictation of its owners.
Ricardo de Querol is the author of ‘The great fragmentation’ (Harp).