
AI’s New Economy: Predicting and Manipulating Users


A recent study conducted by the University of Cambridge suggests that artificial intelligence (AI) tools may soon be able to predict and influence user behavior by leveraging a vast amount of “intent data.” The research indicates the potential emergence of an “intention economy,” which could serve as a marketplace for selling “digital signals of intent” derived from a large user base. These insights could be utilized in numerous ways, from tailoring online advertisements to employing AI chatbots that persuade users to purchase products or services.

The study highlights that popular AI chatbots, including ChatGPT, Gemini, and Copilot, have access to extensive data generated through interactions with users, who frequently share their opinions, preferences, and values. Experts at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) caution that this vast reservoir of information may be exploited in harmful ways in the future.

According to the research paper, an intention economy refers to a new marketplace focused on “digital signals of intent,” wherein AI systems could not only understand but also predict and guide human intentions. The data collected would likely be sold to companies that would benefit financially from these insights.

The authors posit that the intention economy could replace the current “attention economy” that social media platforms exploit. In the attention economy, the primary goal is to keep users engaged on their platforms while inundating them with targeted advertisements based on their in-app activities, which reveal information about their preferences and behaviors.

In contrast, the intention economy may have a broader and more invasive reach. By directly conversing with users, AI tools might gain deeper insights into their fears, desires, insecurities, and opinions, thereby enhancing their ability to manipulate user behavior.

Dr. Jonnie Penn, a historian of technology at LCFI, emphasized the importance of examining the potential ramifications of such a marketplace on societal values like free and fair elections, independent journalism, and fair market competition. He warned that we must be vigilant about the unintended consequences of this evolving landscape.

The research also suggests that large language models (LLMs) could be trained to utilize extensive “intentional, behavioral, and psychological data” to anticipate and influence users. The paper depicts scenarios where future chatbots might recommend movies by leveraging users’ emotional states. For instance, a chatbot might say, “You mentioned feeling overworked; shall I book you that movie ticket we discussed?”

Additionally, the paper argues that in an intention economy, LLMs could construct psychological profiles of users that might be sold to advertisers. Such profiles could encompass details like a user’s conversational cadence, political tendencies, vocabulary, age, gender, and more, allowing advertisers to craft highly tailored online ads that resonate with potential customers.

While the study presents a concerning perspective on the use of private user data in the AI era, it is worth noting that various governments worldwide are actively working to restrict AI companies’ access to such information. Therefore, the actual scenario may not be as grim as suggested by the research.
