
Meta Sues Firm Over Nonconsensual AI Nudify Ads


Meta has filed a lawsuit against Joy Timeline, a company that promoted, on Meta's social media platforms, generative AI applications that allow users to digitally undress individuals without their consent. The legal action follows a CBS News investigation that uncovered numerous advertisements for these digital undressing applications across Meta's Facebook, Messenger, Instagram, and Threads platforms.

Meta emphasized the importance of this legal move by stating, “This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it. We’ll continue to take the necessary steps – which could include legal action – against those who abuse our platforms like this.”

The lawsuit specifically seeks to prevent the Hong Kong-based firm, Joy Timeline, from advertising its CrushAI nudify applications on Meta’s platforms. The company allegedly made several attempts to bypass Meta’s ad review systems.

This legal challenge follows a CBS News report that highlighted the prevalence of ads for nudify applications on Meta's platforms. Meta acknowledged the issue and indicated that it had removed several ads, deleted related accounts, and blocked URLs tied to the nude deepfake applications. However, the company noted that enforcing its policies is becoming increasingly difficult as new generative AI apps constantly find ways to evade detection. CBS reported that advertisements for AI deepfake nude tools remained available on Instagram even after some were removed following the investigation.

The advertisements identified in the CBS investigation primarily targeted men aged 18 to 65 in the United States, United Kingdom, and European Union. These apps, designed to digitally “undress” individuals without their consent—most commonly aimed at women and female celebrities—are linked to a rise in blackmail and “sextortion” schemes, with a concerning number of cases involving minors.

404 Media conducted a similar investigation in April 2024, revealing that tools for creating nonconsensual AI deepfakes were being advertised on Instagram. Following this, both Apple and Google removed certain flagged apps from their app marketplaces. Additionally, in August 2024, San Francisco filed a lawsuit against 16 of the most-visited deepfake websites associated with AI "undressing" applications.
