Meta Shifts Risk Assessments to AI, Sparking Concerns

Meta is reportedly moving to automate a significant share of the risk assessments for its products and features using artificial intelligence (AI). The Menlo Park-based social media company is considering delegating the approval of new features and product updates, a process previously overseen solely by human evaluators, to AI systems. The shift is expected to affect how new algorithms and safety features are integrated and how content is distributed across its platforms.

According to internal company documents cited in an NPR report, Meta plans to automate up to 90 percent of its internal risk assessments.

Historically, any modification or update to Instagram, WhatsApp, Facebook, or Threads was evaluated by a team of human experts. These reviews scrutinized potential effects on users, assessed privacy concerns, and weighed risks to minors. Teams conducted privacy and integrity reviews to determine whether a new feature could enable the spread of misinformation or harmful content.

With the implementation of AI for risk assessments, product teams are expected to gain an “instant decision” following the completion of a questionnaire related to the new feature. The AI will either approve the feature or issue a list of prerequisites that must be met before proceeding. The product teams will be responsible for ensuring they have satisfied these requirements before launching the feature.
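
To make the reported workflow concrete, here is a minimal, purely hypothetical Python sketch of such a gate: a questionnaire about the feature goes in, and the system returns either an instant approval or the list of prerequisites the team must satisfy. Every field name and rule below is an illustrative assumption; NPR's report does not describe Meta's actual implementation.

```python
# Hypothetical sketch of an automated risk-assessment gate as NPR describes it:
# a product team submits a questionnaire and receives either an instant
# approval or a list of prerequisites. All names and rules are illustrative
# assumptions, not Meta's actual system.
from dataclasses import dataclass, field

@dataclass
class Questionnaire:
    """Answers a product team submits about a proposed feature."""
    collects_new_user_data: bool
    visible_to_minors: bool
    changes_content_ranking: bool

@dataclass
class Decision:
    approved: bool
    prerequisites: list[str] = field(default_factory=list)

def assess(q: Questionnaire) -> Decision:
    """Return an instant decision: approve, or list required mitigations."""
    prerequisites = []
    if q.collects_new_user_data:
        prerequisites.append("Complete a privacy review of the new data flow.")
    if q.visible_to_minors:
        prerequisites.append("Apply youth-safety defaults before launch.")
    if q.changes_content_ranking:
        prerequisites.append("Run an integrity check for misinformation risk.")
    return Decision(approved=not prerequisites, prerequisites=prerequisites)

decision = assess(Questionnaire(True, False, True))
print(decision.approved)       # False: two mitigations are still outstanding
print(decision.prerequisites)  # the items the team must satisfy before launch
```

In a real deployment the decision would presumably come from a trained model rather than fixed rules, but the contract the report describes, answers in and approve-or-prerequisites out, is the same.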

According to the report, Meta anticipates that this transition towards AI-driven assessments will significantly enhance the speed at which new features and updates are released, allowing product teams to accelerate their work. Nevertheless, some current and former employees express concerns that achieving this efficiency may compromise thorough scrutiny.

In a statement regarding these developments, Meta conveyed that human reviewers would still evaluate “novel and complex issues,” while AI would be utilized to address lower-risk decisions. However, documents reviewed in the report suggest that Meta’s plans may extend AI’s role to critical areas, including AI safety, youth risk, and integrity—domains that involve matters such as violent content and misinformation.

An anonymous Meta employee involved in product risk assessments shared with NPR that automation initiatives commenced in April and have persisted into May. The employee commented, “I think it’s fairly irresponsible given the intention of why we exist. We provide the human perspective of how things can go wrong.”

This week, Meta published its Integrity Reports for the first quarter of 2025. In the report, the company noted, “We are beginning to see LLMs operating beyond that of human performance for select policy areas.”

Moreover, Meta has begun implementing AI models to expedite the removal of content from review queues in cases where it is “highly confident” that the content does not breach its policies. The company justified this strategy by stating, “This frees up capacity for our reviewers allowing them to prioritise their expertise on content that’s more likely to violate.”
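
The mechanism described, clearing items from a review queue only when a model's confidence exceeds a high bar, can be illustrated with a short hypothetical sketch; the threshold value and classifier interface below are assumptions, not details from Meta's report.

```python
# Illustrative sketch of confidence-gated queue filtering: an automated
# classifier removes items from the human review queue only when its
# probability that the content is benign exceeds a high threshold. The
# cutoff and classifier are assumptions; Meta's system is not public.
HIGH_CONFIDENCE = 0.98  # assumed cutoff for "highly confident"

def triage(queue, classify):
    """Split a review queue using a classifier.

    classify(item) returns the model's probability that the item does
    NOT violate policy. Items above the cutoff are auto-cleared; the
    rest stay queued for human reviewers.
    """
    auto_cleared, needs_review = [], []
    for item in queue:
        if classify(item) >= HIGH_CONFIDENCE:
            auto_cleared.append(item)
        else:
            needs_review.append(item)
    return auto_cleared, needs_review

# Toy usage with made-up scores standing in for a real model.
items = ["post-1", "post-2", "post-3"]
fake_scores = {"post-1": 0.99, "post-2": 0.50, "post-3": 0.995}
cleared, queued = triage(items, fake_scores.get)
print(cleared)  # ['post-1', 'post-3'] removed from the human queue
print(queued)   # ['post-2'] kept for human review
```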
