Hod Lipson, a mechanical engineer who directs the Creative Machines Laboratory at Columbia University in New York, has shaped much of his career around what some people in his field have called “the word that starts with ‘c.’”
One morning in October, the Israeli-born roboticist sat behind a table in his lab and explained it himself. “This topic was taboo,” he said with a smile.
“It was almost forbidden to talk about it—‘don’t talk about the c-word; you won’t get tenure’—so, at first, I had to disguise it, as if it were something else.”
That was in the early 2000s, when Lipson was an assistant professor at Cornell University in New York. He was working to create machines that could sense when something was wrong with their own hardware—a broken part or faulty wiring—and then adapt their behavior to compensate for the deficiency without the intervention of a programmer.
That kind of built-in adaptability would become more important as our reliance on machines grew, Lipson argued.
Robots were already being used in surgery, food manufacturing, and transportation; their applications seemed nearly endless, and as they became more integrated into our lives, any error in their operation could spell disaster.
“We are literally going to give our lives to a robot,” he said. “You want these machines to be tough.”
One way to achieve this was by taking inspiration from nature. Animals, particularly humans, are good at adapting to change. This ability could be the result of millions of years of evolution, as resilience in response to injury and changing environments typically increases an animal’s chances of surviving and reproducing.
Lipson wondered whether he could reproduce this kind of natural selection in code, creating a generalizable form of intelligence that could learn about its own body and function, regardless of what that body looked like or what that function was.
This type of intelligence would be flexible and fast. As machine learning became more powerful, this goal seemed more achievable.
Lipson did eventually get tenure, and as his reputation grew, he started saying the c-word out loud: he wants to create conscious robots.
“This is more important than curing cancer,” he said. “If we can create a machine that has consciousness on par with a human, it will dwarf everything else we’ve ever done. That machine alone will be able to cure cancer.”
The first difficulty in studying consciousness is that there is no consensus on what it actually is. That is true of many vague concepts—freedom, love, existence—but that territory has traditionally been left to philosophers, not engineers.
Some have tried to explain it by pointing to functions in the brain or to some more metaphysical substance, but these explanations only raise further questions.
Wading into these murky waters might seem futile to roboticists and computer scientists. But as Antonio Chella, a roboticist at the University of Palermo in Italy, said, unless consciousness is taken into account, “it feels like something is missing” in the workings of intelligent machines.
Lipson has settled on a practical criterion for consciousness: the ability to imagine oneself in the future.
He believes that the fundamental difference between different types of consciousness—of a human, an octopus, a rat, etc.—is how far into the future an entity is able to imagine itself.
Consciousness exists on a continuum. At one end is an organism with some sense of its place in the world—a kind of primitive self-awareness. Beyond that lies the ability to imagine where one’s body will be in the future, and beyond that, the ability to imagine what one might eventually imagine.
“So eventually these machines will be able to understand what they are and what they think,” Lipson explained. “That leads to feelings and other things.”
Chella believes that consciousness cannot exist without language, and has been developing robots that can form internal monologues, reasoning with themselves and reflecting on the things they see around them.
Joshua Bongard, a robotics specialist at the University of Vermont and a former member of the Creative Machines Laboratory, believes that consciousness consists not only of awareness and mental activity, but that it has an essentially physiological aspect.
He has developed organisms called xenobots, made entirely of frog cells assembled so that a programmer can control them like machines. According to Bongard, it’s not just that humans and animals have evolved to adapt to their environment and interact with one another; our tissues have evolved to support those functions, and our cells have evolved to support our tissues.
Last summer, a Google engineer claimed that the company’s newly improved chatbot, called LaMDA, was conscious and deserved to be treated like a little child. This statement was met with skepticism, especially since, as Lipson noted, the chatbot was processing “code that is written to complete a task.”
There was no underlying structure of consciousness, just the illusion of consciousness, other researchers said.
By: Oliver Whang