An artificial intelligence (AI) system could predict your decisions before you even think about making them, and then sell that information to companies and governments. With current technology the scenario seems remote, but AI ethicists are already warning of a future in which the 'intention economy' will call into question the validity of people's decisions.
Researchers at the University of Cambridge, United Kingdom, have compiled their thinking on the emerging concept of the intention economy and argue that the pieces are already on the board for the industry to take off. Experts at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) say the lucrative market in user intent will range from mundane matters, such as buying a movie ticket, to a presidential election.
Predict, manipulate and direct the user
The sale of data to predict mass behavior already has important precedents. In 2016, the misuse of Facebook personal data helped build detailed profiles of online communities, enabling precisely targeted political campaigns intended to influence the results of the US elections. In that scandal, orchestrated by the firm Cambridge Analytica, it was years of interactions and content shared on the social network that enriched users' psychological profiles.
What distinguishes those past scenarios from the intention economy of the future is the mass adoption of artificial intelligence agents such as ChatGPT, Copilot or Gemini. People who use AI assistants and agents tend to share more private data through casual conversation, under the assumption that no one reads it. For now these are niche services, a curiosity used mainly by technology enthusiasts. But more specialized and casual agents keep emerging, such as those that pose as a friend, a fictional character or a romantic partner.
For the researchers, what people say in these conversations, the tone they use and the context of the exchange are more intimate and exploitable than any Facebook or Instagram record. Timely analysis of a user's chatbot conversation logs with more sophisticated AI will yield irresistible data for the intention economy.
"This AI will combine knowledge of our online habits with an uncanny ability to tune into us in ways we find comforting, mimicking personalities and anticipating desired responses, to build levels of trust and understanding that enable social manipulation on an industrial scale," the University of Cambridge article points out.
For Drs. Jonnie Penn and Yaqub Chaudhary, both of the LCFI, there are already signs that companies are preparing for these scenarios. The fact that huge sums of money are being spent to position free AI agents within most everyday activities should be reason enough to question those companies' intentions. The two recently published a more detailed article, titled 'Beware of the Intention Economy', in the scientific journal Harvard Data Science Review.
“We note that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes,” Chaudhary said.
The authors urge that the use of information generated from interactions with AI agents be regulated now. If it is not, within a few years there will be a new gold rush among those actively seeking to influence the decisions of specific communities and individuals.