On Wednesday, Meta announced that it had discovered what it described as “likely AI-generated” content being used in a misleading manner on its Facebook and Instagram platforms. This content included comments that praised Israel’s approach to the ongoing conflict in Gaza and appeared beneath posts from prominent global news organizations and U.S. lawmakers.
In its quarterly security report, Meta detailed that the accounts behind this campaign pretended to be individuals such as Jewish students and African Americans, aiming to reach audiences in the United States and Canada. The company linked the operation to STOIC, a political marketing firm based in Tel Aviv.
STOIC did not respond to a request for comment on the allegations.
Significance of the Findings
This is the first time Meta has reported finding text-based generative AI used in influence operations since the technology gained prominence in late 2022. Previously, dating back to 2019, the company had found basic AI-generated profile photos used in similar efforts.
Researchers have grown increasingly concerned that generative AI could enable more effective disinformation campaigns, given its ability to produce realistic text, imagery, and audio quickly and at low cost.
During a press briefing, Meta security officials indicated that they had removed the Israeli campaign promptly and did not believe emerging AI technologies hindered their ability to disrupt organized messaging efforts. They stated that they had not observed networks deploying AI-generated images of political figures that were convincing enough to be mistaken for genuine photographs.
Noteworthy Remarks
Mike Dvilyanski, Meta’s head of threat investigations, commented, “There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them.”
Statistical Overview
The report detailed six covert influence operations that Meta disrupted during the first quarter, including the network linked to STOIC. Additionally, the company dismantled another network based in Iran that targeted discussions around the Israel-Hamas conflict, although there was no evidence of generative AI being utilized in that instance.
Contextual Background
Major tech companies, including Meta, have been facing challenges regarding the potential misuse of new AI technologies, especially in the context of elections. Researchers have found instances where image generators from companies like OpenAI and Microsoft produced misleading images related to voting, despite the existence of policies against such actions.
These companies have implemented digital labeling systems intended to indicate AI-generated content at the time of creation, though these measures do not extend to text, leading researchers to question their overall effectiveness.
Looking Ahead
Meta’s defenses will face critical tests as elections approach in the European Union in early June and in the United States in November.
© Thomson Reuters 2024