The current juncture is “one of those moments that everyone wants to be part of,” says Ben Zevenbergen, Google’s ethics advisor. And so Google’s founders Larry Page and Sergey Brin – normally only active in the background at Google – recently joined the important meetings about Google’s AI strategy. “Entire disciplines can be reinvented by artificial intelligence,” says Zevenbergen, during a conversation at Google’s office in Amsterdam. “It’s such an interesting moment that everyone wants to join the conversation.”
Since 2020, Zevenbergen has been involved as an advisor in all major product launches of the American tech company. In addition, he is responsible for “the opposition,” says Zevenbergen. He has to make sure that what Google’s engineers develop is in line with the AI principles established by the company. In them, Google states, among other things, that all its AI products must “benefit society” and must not lead to bias or privacy violations.
Zevenbergen was one of the authors of the LaMDA paper, which appeared in February last year and in which Google first presented its language model. This technology forms the basis of the first version of Google’s chatbot Bard, which was released in the United Kingdom and the United States in March. From this Thursday, the chatbot will also be available in the Netherlands and other EU countries.
The European launch of Bard was postponed after questions from the Irish regulator, the Data Protection Commission (DPC), which monitors the enforcement of European privacy rules at major technology companies. The DPC found that Google had provided “insufficient documentation” to ensure user privacy. Google has now complied with the EU’s conditions. Among other things, users will have the option to view their activity in Bard or to delete it from their account.
‘Internal Panic’
Bard is the result of years of research and experimentation in AI technology, a field in which Google long held an unchallenged position. Its researchers developed the first transformers in 2017. These powerful neural networks are crucial for speech computers and chatbots. Until then, computers were mainly good at literally translating texts from one language to another. Transformers allowed computers to also understand the context of sentences, making them able to hold conversations.
In the end, it was not Google but OpenAI from San Francisco that released the first of a new generation of AI chatbots with ChatGPT in November last year. ChatGPT – based on Google’s transformer technology – was the first that could write plays, compose poems, produce texts for websites and write program code.
ChatGPT unleashed a worldwide hype. The underlying neural network, GPT-4, is now used by countless companies to automate tasks: writing texts, automatically replying to e-mails, powering digital assistants, or scanning and assessing job applications. The hype around ChatGPT forced other tech companies to revise or accelerate their AI strategies in order not to fall behind in the race for the latest technology.
At Google, the arrival of ChatGPT led to internal panic, although Google CEO Sundar Pichai preferred to speak of “a sense of urgency.” Teams within Google were urged by top management to hurry up with the development of Bard. When Bard was introduced in February, the chatbot made an embarrassing factual error during the demonstration: in response to a question, it incorrectly stated that the first image of a planet outside our solar system was taken by the James Webb telescope. Pichai later said on a podcast of The New York Times that the first version of Bard felt like Google was “putting a souped-up Civic into a race with more powerful cars.” The latest version of Bard is significantly better, according to first tests by American journalists. Bard is directly connected to the internet and thus has the most up-to-date information available, while ChatGPT has no knowledge of events after 2021.
Both programs nevertheless suffer from what scientists call ‘hallucinations’: chatbots making up facts. It is one of the most difficult problems for chatbots to solve. Google has the advantage that it has permanent access to the entire internet via its search engine. “We can test fairly efficiently whether answers are based on existing sources and whether they are indeed correct,” says Zevenbergen.
Google acknowledges that Bard “sometimes gives incorrect answers” or can make “offensive statements.” It tries to counter a certain bias by offering users multiple perspectives. And the more Bard is used, the better its answers will become – or so the idea goes.
According to Zevenbergen, Google could have launched Bard “two or three years ago,” but the technology is only now reliable enough to experiment with live. “We think hard about the ethical principles behind the kind of systems we want to bring into the world. And speed is not one of them,” says Zevenbergen. “I don’t see this as a race to be the first to bring a technical gadget into the world. Google was not the first search engine, but in the end it was the best.”
Fake news and cybercrime
Yet there is concern – also within Google – about the speed at which AI is developing. The fear is that malicious parties will use AI systems for fake news and cybercrime. There is a risk that the rise of AI will lead to the disappearance of jobs. And the creative industry is preparing for a series of lawsuits, arguing that AI companies violate copyright by using texts and images from the internet to train their models.
In recent months, prominent AI scientists have used various petitions to call for a ‘development pause’ for the technology. According to the signatories, governments must first draw up rules for how society wants to use AI before companies flood the market with all kinds of AI systems.
Top executives of tech companies have now spoken out in favor of more government regulation. Leaders of OpenAI and Google have traveled to Brussels in recent months to show that they would like to contribute ideas about the AI Act, Europe’s major AI law that is currently in the making. The law will determine how tech companies can use AI in Europe.
The big question: to what extent can a company like Google itself bear responsibility for dealing responsibly with a technology that has the potential to have a huge impact on humanity? Half of AI scientists estimate the chance of AI leading to an “extremely bad outcome” for humanity at 10 percent or more, a poll showed last year. In May, prominent AI scientist Geoffrey Hinton left Google in order to speak out about the dangers of AI. He stated that AI is now mainly taking over “mind-numbing tasks” from people, “but perhaps much more in the future.”
Zevenbergen says he ‘misses the nuance’ in such petitions with concepts such as ‘existential risk’. “Of course, this could pose a risk to humanity. It is very unlikely that it will happen, but it is a scenario that we are taking into account,” he says. “We are now in a situation where we can manage all this properly, and build scenarios that are desirable.”