ChatGPT, an easy-to-use artificial intelligence tool, has attracted hundreds of millions of users since its launch in November and is poised to disrupt industries around the world. In recent days, AI content generated by the bot has been used in the US Congress, in Colombian courts, and in a speech by the President of Israel. Its widespread adoption seems inevitable… but is it ethical?
In a recorded greeting for a cybersecurity convention in Tel Aviv on Wednesday, Israeli President Isaac Herzog began a speech that would make history: “I am truly proud to be the president of a country that is home to such a vibrant and innovative high-tech industry. In recent decades, Israel has consistently been at the forefront of technological advancement, and our achievements in the fields of cybersecurity, artificial intelligence (AI) and big data are truly impressive.”
To the surprise of the business leaders attending Global Cybertech, the president then revealed that his comments had been written by the AI bot ChatGPT, making him the first publicly known world leader to use artificial intelligence to write a speech.
But he is not the first politician to do so. A week earlier, US Congressman Jake Auchincloss read a speech also generated by ChatGPT to the plenary session of the House of Representatives. It was another first, intended to draw Congress’s attention to the wildly successful new AI tool, “so that we now have a debate on a decisive policy for AI,” Auchincloss told CNN.
AI might become a general purpose technology: foundational in every sector. I delivered the first AI-drafted congressional speech yesterday as a call to action. Policymakers need to be purposeful and proactive. pic.twitter.com/8zyxSR02Y6
— Rep. Jake Auchincloss (@RepAuchincloss) January 26, 2023
Since its launch in November 2022, ChatGPT (created by the Californian company OpenAI) is estimated to have reached 100 million monthly active users, making it the fastest-growing consumer app in history.
This easy-to-use AI tool uses online data to generate instant, human-like responses to user queries. Its ability to scan the Internet for information and provide quick answers makes it a potential rival to Google’s search engine, but it is also capable of producing written content on any topic and in any format – from essays, speeches and poems to computer code – in a matter of seconds.
The tool is free and in January received about 13 million unique visitors per day, according to a report by Swiss banking giant UBS.
Part of its mass appeal is that it is “extremely well engineered: it scales very well with millions of people using it,” says Mirco Musolesi, a professor of computer science at University College London. “But it also has very good training, both in terms of the quality of the data used and in how the creators have been able to deal with problematic areas.”
In the past, similar technologies have given rise to bots that, fed a diet of social media posts, adopted an aggressive and offensive tone. This is not the case with ChatGPT, and many of its millions of users turn to the tool out of curiosity or entertainment.
“Humans have this idea of being very special, but then we see a machine that is capable of producing something very similar to us,” explains Musolesi. “We knew this was probably possible, but actually seeing it is very interesting.”
A “disinformation superspreader”?
However, the potential impact of making such sophisticated AI available to a mass audience for the first time is unclear, and sectors from education to law, science and business are bracing for disruption.
Colleges and universities around the world have been quick to ban students from using ChatGPT, in order to prevent cheating and plagiarism.
Scientific journals have likewise banned the bot from appearing as a co-author on articles, for fear that errors made by the tool will slip into scientific debate.
OpenAI has warned that the bot can make mistakes. A report from ‘NewsGuard’, a media watchdog, states that on issues such as Covid-19, Ukraine and school shootings, ChatGPT made “eloquent, false and misleading” claims 80% of the time.
“To anyone unfamiliar with the topics covered, the results may appear legitimate and even credible,” says ‘NewsGuard’, which calls the tool “the next great disinformation superspreader.”
Still, in Colombia, a judge announced Tuesday that he had used the AI chatbot to help draft a ruling in a case concerning a child’s medical rights.
Judge Juan Manuel Padilla told the Colombian station ‘Blu Radio’ that he asked ChatGPT whether an autistic minor should be exempted from paying fees for therapy, among other questions.
The bot replied: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.”
Padilla ruled in favor of the boy, as advised by the bot. “By asking questions to the application, we do not stop being judges [and] thinking beings,” he told the station. “I suspect many of my colleagues are going to join in and begin to build their rulings ethically with the help of artificial intelligence.”
Although he cautioned that the bot should be used as a time-saving aid, and not “with the aim of replacing judges,” critics said it was neither responsible nor ethical to use a bot capable of producing misinformation as a legal tool.
A market leader
Despite the potential risks, the spread of ChatGPT seems inevitable. Musolesi expects it to be used “widely” for both positive and negative purposes: with the risk of misinformation and misuse comes the promise of making information and technology more accessible to more people.
OpenAI received a multi-million dollar investment from Microsoft in January that will allow ChatGPT to be integrated into a premium version of the Teams messaging application, offering services such as the automatic generation of meeting notes.
Microsoft has stated that it plans to incorporate ChatGPT technology into all of its products, paving the way for the company to become a leader in this field, ahead of Google parent Alphabet.
The free nature of the tool has been key to its current and future success. “It’s been a great marketing campaign,” says Musolesi, “and when people use it, it improves the data set that will be used for the next version, because they are providing this feedback.”
Even so, the company launched a paid version of the bot this week, offering access to new features for $20 a month.
Another highly anticipated development is an AI classifier, a software tool to help people identify when text has been generated by artificial intelligence.
OpenAI states in a blog post that although the tool was released this week, it is still not “fully reliable”: currently, it correctly identifies AI-written text only 26% of the time.
But the company hopes it will improve with training, reducing the potential for “automated disinformation campaigns, the use of AI tools for academic dishonesty, and the positioning of an AI chatbot as a human.”
This article was adapted from its original English version.