Shimon is the result of ten years of work. Or rather, it is the metallic embodiment of ten years of results. This robot musician brings together lines of research ranging from machine learning to psychology, passing, of course, through music theory. The first thing it did was learn to play the marimba. But it didn’t stop there. It developed its own voice, began to move in time with the music, and picked up the improvisation and rapping skills needed to take part in rap battles, among other abilities. Behind its achievements are the researchers who have passed through the Georgia Tech Center for Music Technology and, at its helm, Gil Weinberg (Israel, 1967), the laboratory’s founder and director. “It is not usual for a robot to stay active for so long and keep developing new skills,” he reflects during a video call interview.
This researcher came to robotics from an early musical education: after studying computer science and music, he enrolled in the MIT Media Lab to explore the union of his two disciplines. “I decided that if I really wanted to innovate in this field, I needed to know more,” he recalls. He has explored the possibilities of algorithmic thinking as a source of inspiration for new kinds of music, has devised new musical instruments, and can boast of having developed one of the first pieces of karaoke software, back in 1992.
With all of that done, he still missed acoustic sound. “There is something unique about the physical quality, the gestures, in looking at the other musician and being able to synchronize,” Weinberg notes. Thus began the robots. Haile was the first of them: it could play a drum, listen to what other musicians were doing, and respond with fitting rhythms. Surprising for a machine, and enough to arouse the interest of the National Science Foundation, which saw in the project a nice way to bring robots closer to the general public; but still short of the researcher’s expectations. “It didn’t have a lot of the elements that were important to me as a musician.”
A musical snowball
The answer to those shortcomings is what has been filling Shimon’s résumé over the past decade. The automaton got rolling as an eight-armed marimba player capable of synchronizing its movements with the music, looking around, and making gestures that inject some emotion into its performances. As for generating melodies, the robot began composing with a system based on rules derived from music theory. This made it possible, for example, to determine which notes can follow a given combination. “It sounded similar to what we humans do, yet somewhat different. Sometimes it was interesting, and many other times not so much,” Weinberg admits. The next step was to train Shimon on a huge database of musical compositions. With this ocean of melodies, the machine gained the ability to create unusual blends. “For example, take 30% from Thelonious Monk, 40% from Madonna and 20% from Bach,” says the researcher.
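The rule-based approach Weinberg describes can be illustrated with a toy sketch. This is hypothetical code, not Shimon’s actual system: a table lists which notes a music-theory rule set allows after each note, and a melody is built by repeatedly sampling from the allowed successors.

```python
import random

# Hypothetical rule table: for each note, the successors a music-theory
# rule set permits. Shimon's real rules are far richer than this sketch.
RULES = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "F"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def rule_based_melody(start, length, seed=42):
    """Build a melody by repeatedly picking a note the rules allow next."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(RULES[melody[-1]]))
    return melody

print(rule_based_melody("C", 8))
```

The later, data-driven stage could be imagined the same way: several such tables learned from different artists’ catalogs, with each next note drawn from a table chosen by fixed weights, say 30% Monk, 40% Madonna, 20% Bach.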
Movement is a key stage on the way to creating a robot musician. In its interactions with the instrument, Shimon and its four pairs of limbs, capable of playing at more than 20 hertz (20 impacts per second), already far exceed human capabilities. “It plays so fast you don’t even hear it as a beat. It is heard as a new sound color,” explains Weinberg, who has already welcomed to his laboratory music producers interested in recording those sounds, impossible to generate either electronically or by human hands.
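Weinberg’s figure is easy to sanity-check: at 20 strikes per second, each impact lands 50 milliseconds after the previous one, which is roughly the rate at which the ear stops hearing discrete events and begins fusing them into a continuous tone.

```python
strikes_per_second = 20                    # rate quoted for Shimon's mallets
interval_ms = 1000 / strikes_per_second    # gap between consecutive impacts
print(f"{interval_ms:.0f} ms between impacts")  # prints "50 ms between impacts"
```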
For bodily expression, the robot draws on psychology, but concentrates those lessons into a narrow range of movements of the head, eyebrows and eyes. “Robots should look like robots; they do not need to look human,” says the expert. On his tours with Shimon, which have included performances at dozens of conferences and festivals, Weinberg has seen audiences respond better to this side of the show than to the content of the songs: “You can always make the robot faster or improve the algorithm to make it more musically interesting, but very few people are going to notice. The public connects with little things that are not even musical.”
This effectiveness has opened new lines of research exploring whether the gestures and intonation of Shimon and other robots can be used to build relationships of trust with their human interlocutors and thus overcome the atavistic fear of these autonomous machines. “If the robot is able to communicate its mood, state and intentions through certain sounds, people trust it more,” explains the researcher. They have already published studies confirming that incorporating these modes of expression improves acceptance and the perception of the robot as an animated entity, and they are looking for ways to scale these exchanges up to interactions between groups.
The more advanced Shimon’s abilities become, the more computing power they demand. This explains, for example, why the voice it uses on its album and in its performances, developed in collaboration with the Music Technology Group at Pompeu Fabra University, is not the same one it uses in its rap improvisations. “It sounds more like Alexa rapping,” admits Weinberg, who is confident that advances in processing units will allow them to integrate more of that functionality into melodies generated in real time.
Despite the distance traveled, there are still skills beyond the reach of this eight-armed robot. “I have not yet heard a song composed by artificial intelligence that is capable of making me cry,” the researcher summarizes. “What makes a Beatles tune so catchy, and one by another author not so catchy? We do not know.” While researchers scramble to find that secret ingredient, Weinberg does not expect an answer anytime soon. But that does not worry him. “I think the challenge is to achieve, through robotics, things that humans cannot do. To enrich and expand. Not to replace us.”
In line with that collaborative vision, the Georgia Tech Center for Music Technology is working on a digital audio composition tool in which the human receives recommendations from a machine learning system. “You can use them or not, and the system learns from what you like and what you don’t like,” says the laboratory director. Shimon’s advances also carry over into nearby fields, such as the development of prostheses for percussionists and pianists, and into others further afield. “Imagine an astronaut in a spaceship trying to fix something and a robot handing them the right tool at the right moment,” Weinberg proposes.
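A composition assistant that learns from accept/reject feedback could be sketched, in a deliberately simplified form, as a score table over suggestion styles that each response nudges up or down. This is a hypothetical design for illustration, not the Georgia Tech tool’s actual architecture.

```python
import random

class PhraseRecommender:
    """Toy sketch of a composition assistant that learns from accept/reject
    feedback. Hypothetical design, not the Georgia Tech lab's real tool."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # Illustrative suggestion styles; a real tool would suggest phrases.
        self.scores = {"jazzy": 0.0, "minimal": 0.0, "dense": 0.0}

    def suggest(self):
        # Mostly favor the style the user has accepted most often,
        # with occasional random exploration of the alternatives.
        if self.rng.random() < 0.2:
            return self.rng.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def feedback(self, style, accepted):
        # "You can use them or not": each response adjusts the style's score.
        self.scores[style] += 1.0 if accepted else -1.0

rec = PhraseRecommender()
rec.feedback("jazzy", True)    # user kept a jazzy suggestion
rec.feedback("dense", False)   # user rejected a dense one
print(rec.suggest())
```

The exploration step keeps the system from locking onto one style too early, a common pattern in preference-learning recommenders.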