
AI Tools May Manipulate User Intentions, Study Warns


According to a recent study by the University of Cambridge, artificial intelligence (AI) tools may soon possess the capability to anticipate and influence user behaviors by leveraging extensive “intent data.” The research suggests that this could pave the way for an “intention economy,” creating a market for trading “digital signals of intent” from a vast user demographic. Such data has potential applications ranging from generating personalized online advertisements to employing AI chatbots to persuade users into making purchases.

The research highlights the considerable datasets that AI chatbots such as ChatGPT, Gemini, and Copilot gather from user interactions, with many individuals sharing their preferences, opinions, and values with these platforms. Experts at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) warn that this wealth of information could be exploited in harmful ways in the near future.

The concept of an intention economy is primarily characterized as a new marketplace for interpreting and responding to digital signals of intent. Researchers argue that AI tools could discern, predict, and actively guide human intentions, with these insights potentially being sold to companies seeking to capitalize on them.

The researchers propose that the intention economy will replace the current “attention economy” exploited by social media networks, where the focus is on keeping users engaged to serve them a high volume of advertisements. In this existing model, ads are tailored based on user activities, revealing insights into their preferences and behaviors.

In contrast, the intention economy may extend far beyond previous models, as it would utilize direct dialogue with users to understand their fears, desires, and opinions, according to the paper.

Dr. Jonnie Penn, a technology historian at LCFI, emphasized the importance of considering the potential ramifications of such a marketplace for crucial societal institutions. He cautioned against its possible negative impacts on human rights, including free elections, a free press, and fair market competition, urging vigilance before society falls prey to unintended consequences.

The study also indicates that large language models (LLMs) could harness substantial “intentional, behavioral, and psychological data” to predict and manipulate user preferences. For example, future chatbots may suggest movies based on emotional cues, such as recommending a ticket by referencing a user’s expressed fatigue: “You mentioned feeling overworked; shall I book you that movie ticket we discussed?”

Furthermore, the paper proposes that within an intention economy, LLMs could create psychological profiles of users and subsequently market this information to advertisers. Details such as a user’s communication style, political biases, vocabulary, demographics, and preferences could streamline the creation of highly tailored advertisements.

While the study presents a concerning view of how personal data could be harvested in the AI era, it is worth noting that many governments worldwide are actively working to restrict AI companies’ access to such data, potentially signaling a more optimistic outcome than the one the research forecasts.
