In all my years in the legal world, I have never seen anyone use the "I'm dangerous, stop me" argument to avoid regulation. "Mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war," sign and affirm Demis Hassabis, CEO of Google DeepMind, and Sam Altman, CEO of OpenAI – currently on tour across Europe – among a long list of AI executives and scientists, in a statement published on the website of the Center for AI Safety, a non-profit organization whose website reveals neither when it was founded nor who pays its bills.
I read this and can only think: well, turn it off and let us get on with our lives, choice by choice; we have enough daily existential risks without having to deal with the end of the world at the hands of Skynet. If Spain declared a state of alarm because air traffic controllers walked out to go on vacation, I don't know why Biden hasn't sent the National Guard to OpenAI headquarters to switch off the servers. If Bruce Willis were in charge, he would have put a stop to all this nonsense already, but it seems all we have are naive bureaucrats falling for an old street con.
Let's apply the logic of impending collapses. In the case of the nuclear threat, the risk is mitigated by a balance of forces, by the principle of mutually assured destruction. I like principles like that, grounded in a deep knowledge of human nature, the kind that appeal to the amygdala and work well in a conflict. The risk of a pandemic, as we have experienced in our own flesh, is multifactorial in its causes and its culprits, and therefore hard to mitigate. So, since we are unable to put effective measures in place to limit its occurrence (although such measures exist), what we do instead is react, developing new vaccines that, if they work properly, will mitigate the risk when a pandemic happens again.
With all their flaws, these risk mitigation systems seem rational, based on experience and science. But when it comes to solving the existential problem of a murderous AI, instead of doing the logical thing – turning it off, disconnecting it from the internet, prohibiting it from operating, or throwing it away – our reaction is to audit it, and voluntarily at that, lest HAL get angry ahead of schedule and send down a rain of fire and killer frogs that turns us into the protagonists of the umpteenth zombie series. I admit this logic, or its absence, escapes me. And I admit that I recognize it for what it is, because the heads of the technology companies are very clever tricksters who have been playing us for a few decades now.
Before, it was enough for them to put "innovation" at the start of their sentences to convince rulers more interested in conquering the battlefield than in the rights of citizens. The logic of letting things grow first and regulating later appears in every industrial and social revolution, and so it has a rational basis, even though it has proved misguided in this one. In that rational risk analysis, economic growth, control of key sectors and keeping the US at the forefront were weighted far above individual rights. I don't like it, and they have miscalculated over the long term, but it follows a rational methodology. When it comes to existential risks, however, we would all prefer that everyone leave their emotions at home.
Now that regulators and governments are, with respect to technology, no longer in the expansionist cycle but in the regulatory one, the technomoguls know that innovating is much more complicated. And that is a nuisance when you have spent your money on a technology that grows slowly and then explodes and expands rapidly. Too bad, Sam Altman must have thought, that the generative AI boom didn't catch me in the early 2000s. What times those were for doing whatever one wanted. So, smart boy, he has hit us with the shock doctrine and gone all in. Naomi Klein studied it very well in her book of that title: if you frighten people enough, they will let you do the unthinkable. She illustrated it with the disaster caused by the application of "fright or death" in Pinochet's Chile, a test laboratory for the ultraliberal economic policies of Friedman and the Chicago School. No matter how many minority blogs explain how they are distracting us from what matters, the scare has already been delivered, and Bruce is neither here nor expected.