According to a new study, superintelligent AIs may simply be too much for humans to control.
The field of artificial intelligence (AI) and the superintelligent machines it could one day produce is a fascinating one, and it has long been the scene of heated debate. Beyond the easy Skynet joke, the possible dangers of progress in this area arouse as much interest as fear.
While recommending a look at our article “I am an Artificial Intelligence and I am here to show you that AI will not destroy humanity”, today we are discussing a study conducted by researchers at the Max Planck Institute for Human Development in Berlin. As co-author Manuel Cebrian explains, the study set out to answer a key question about our possible relationship with AI and superintelligent machines.
The question is apparently simple, the answer anything but: would humanity be able to contain a hypothetical superintelligent AI? Short answer: no.
“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.”
With pure containment seemingly impossible, the researchers turned to a theoretical containment algorithm that would ensure a superintelligent AI cannot harm humans, something not too different from Asimov’s famous Three Laws of Robotics. Here too, the paradigm underlying today’s computing proves an obstacle, and such an algorithm would not be feasible.
“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped in order to contain the harmful AI. In effect, this makes the containment algorithm unusable.”
No algorithm can determine whether an AI would do something “bad” or not; we would not even be able to tell when an AI was planning something harmful. According to the research, we would not even be able to recognize that we were dealing with a superintelligent AI in the first place, since deciding that lies in the same realm as the containment problem.
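The obstacle the researchers run into is, at bottom, the undecidability of halting. A minimal sketch (our own illustration, not the paper’s formal construction) of why any practical checker falls short: the best it can do is simulate a suspect program for a finite budget of steps, and an exhausted budget is indistinguishable from a program that never halts.

```python
def bounded_halts(program, arg, max_steps):
    """Step-limited simulation of `program(arg)`.

    `program` is modeled as a generator function that yields once per
    "step" of computation. Returns True if it halts within the budget,
    or None if the budget runs out -- and None is NOT a verdict:
    the program might halt on the very next step, or never.
    """
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the budget
    return None           # inconclusive, by necessity

def countdown(n):
    """A program that always halts, after n steps."""
    while n > 0:
        n -= 1
        yield

def loop_forever(_):
    """A program that never halts."""
    while True:
        yield
```

With a budget of 10 steps, `bounded_halts(countdown, 3, 10)` returns `True`, while both `loop_forever` and the merely slow `countdown(100)` return `None`: from the checker’s point of view, “not yet” and “never” look identical, and no finite budget removes that ambiguity in general.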
Everything about AI continues to seem equally fascinating, inscrutable and terrifying.
Source: ZME Science