The American Sam Altman is chief executive of OpenAI, the research company behind ChatGPT. At 38 years old, he is considered the most authoritative voice on artificial intelligence (AI) issues. Below are some excerpts from his remarks yesterday at Davos:
A very good sign about this new tool is that, even with its current very limited capabilities and very deep flaws, people are finding ways to use it to make big productivity gains, or other gains, and to understand its limitations. A system that is sometimes right, sometimes creative, and often totally wrong is not one you actually want driving your car. But you're happy for it to help you think about what to write, or to help you with code that you can check.
The OpenAI-style model is good at some things, but not at life-or-death situations. People understand tools and their limitations more than we often give them credit for. People have found ways to make ChatGPT very useful to them and, for the most part, understand when not to use it. So it's a very good sign that, even at the current extremely limited capability levels of these systems, many people have found ways to get value from them and also understand their limitations. I think AI has been demystified a bit because the general public is using it now, and that's always the best way to advance the world with a new technology.
Humans are quite forgiving of other humans who make mistakes, but they are not at all forgiving of computers that make mistakes. People say things like “self-driving cars are already safer than human-driven cars,” but they probably have to be safer by a factor of ten to 100 before they are accepted, maybe even more. I think the same thing will happen with other AI systems: if the public is used to using a tool and knows that it can be completely wrong, that is manageable. The hardest part is when a system is right 99.999 percent of the time and you let your guard down.
I can't look inside your brain, at the hundred trillion synapses, to understand why you're thinking what you're thinking. What I can do is ask you to explain your reasoning to me: first this, then this, then this conclusion. And I can decide whether that seems reasonable to me or not. I think our AI systems will be able to do the same. They will be able to explain the steps from A to B in natural language, and we will be able to decide whether we think those are good steps. We will also be able to do more to x-ray the brain of an AI and understand what those connections are.
We will do things with better tools, and I admit that this time it feels different. General-purpose cognition feels so close to what we all treasure about humanity that it feels different. There will be human roles where you want another human, but consider my own job: I'm certainly not a great AI researcher. My role is to figure out what we're going to do and to work with other people to coordinate and make it happen, and I think everyone's job will look a little more like that. We will operate at a slightly higher level of abstraction, have access to many more capabilities, and still make decisions. Those decisions may trend more toward curation over time, but we will make the decisions about what should happen in the world.
RICARDO ÁVILA – SENIOR ANALYST – SPECIAL ENVOY OF EL TIEMPO