Mr. Bengio, you are one of the most cited AI researchers in the world. Together with prominent colleagues, you have just warned of “society-wide risks” from the technology. What do you mean by that?
There is a wide range of risks that people have been warning about for years: discrimination, privacy, impact on the job market. But with increasingly powerful AI systems, we are now reaching a point where new dangers are emerging. It is not clear that our society is ready for what may happen in the next few years or the next decade. What happens if in a few years these systems can carry out cyberattacks that overcome our best defenses? Or if someone without much prior knowledge of science can use AI to build a biological or chemical weapon? AI could also destabilize our democracy or the financial markets.
Sounds pretty bleak.
I’m not saying any of this will happen. But we are working with a lot of unknowns here. And the large AI systems are in the hands of very few companies. Many AI applications are completely harmless. It is the large systems that pose the dangers.
Isn’t it very difficult to regulate something whose direction of development we don’t yet know – as if someone in the 90s had tried to predict today’s Internet?
No, not at all. There are three or four companies that control these large models. We could require them to register, with registration tied to certain obligations: if they cannot prove to the regulator that they are doing enough to ensure safety, then the system cannot be deployed. It is easier than often thought to control these systems, because training them requires such massive amounts of hardware, and there are only a few companies in the world that produce the necessary chips.
Do you want AI companies to invest more in security?
The pharmaceutical industry spends 97 percent of its costs on patient safety. We are calling on AI companies to dedicate 30 percent of their investments to protecting the public. That is still much less than with medication, but much more than the AI industry spends today, where the figure is one or two percent.
Recently, the development of AI models seems to have slowed down somewhat; the first big hype is over. Doesn’t that mean there’s not much to worry about?
I’ve been in this business for 30 years and I can tell you: it’s impossible to know what will happen next. The progress of the past few years was unexpected. Even the people who build these systems didn’t expect it. You can’t predict exactly what will happen in three, six or ten years. But based on my own research, I believe it’s not just scale that will change; at some point, scaling hits limits. Algorithms will change too, and that could have a radical impact on the capabilities and efficiency of these systems.
The European Union is making its first attempt to regulate the models with its planned AI law. Is this the right way?
It’s a step in the right direction. But I fear the law will not be flexible enough. Technology will continue to develop, and there will be new applications that could harm humanity. If the law has to be amended every time, it won’t work. We need rules that establish principles at a higher level, and the regulatory authorities must then have enough freedom to adapt them, similar to aviation authorities: if a plane crashes, they can immediately declare that this model is no longer allowed to fly.
What do sensible security measures actually look like? Do they have to be programmed directly into the models?
No, we cannot program security directly into the models. We wouldn’t even know how that would work. All we can do is train and adapt the models so that they don’t do things we consider harmful. This technique is also far from perfect, so it needs to be improved. Then we have to develop every possible test and commission independent groups to find vulnerabilities in the systems, a practice known in the field as “red teaming”. That is not a perfect solution either; perhaps these groups will fail to find the fatal vulnerability. The situation we are in is not satisfactory. But we should at least do what we can.