Meta plans to let AI handle up to 90% of product risk assessments, replacing manual review, sources say

June 1 -- According to NPR, social media giant Meta plans to shift the task of assessing the potential risks of its products away from manual review and toward greater reliance on artificial intelligence (AI) in order to speed up the review process. Internal documents show that Meta aims to have as much as 90% of its risk assessment work handled by AI, even in areas involving youth risk and "integrity," which covers violent content, disinformation, and more. However, current and former Meta employees who spoke with NPR warned that AI could overlook serious risks that a human review team would catch.

Meta's portfolio has long included Instagram and WhatsApp, and updates and new features to these platforms have traditionally been manually vetted before being rolled out to the public. Over the past two months, however, Meta has reportedly increased its use of AI significantly. Today, product teams are required to fill out a questionnaire about their product and submit it to the AI system for review. The system typically returns "on-the-fly decisions," pointing out the areas of risk it has identified, and the product team must then address those requirements before the product can be released.
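To make the reported workflow concrete, the following is a minimal, purely illustrative sketch of a questionnaire-driven risk screening pipeline. Nothing here reflects Meta's actual internal system: the questionnaire fields, risk categories, and escalation rules are all hypothetical, chosen only to show the general shape of the process described above (a product team fills out a questionnaire, an automated reviewer flags risk areas, and the team must resolve the flags before launch).

```python
"""Illustrative sketch only; not Meta's actual system.

All field names, risk categories, and rules below are hypothetical.
"""

from dataclasses import dataclass, field


@dataclass
class Questionnaire:
    # Hypothetical questions a product team might answer before launch.
    feature_name: str
    collects_new_user_data: bool
    visible_to_minors: bool
    allows_user_generated_content: bool
    changes_recommendation_logic: bool


@dataclass
class ReviewDecision:
    flagged_areas: list[str] = field(default_factory=list)
    needs_human_review: bool = False

    @property
    def approved(self) -> bool:
        # Launch is blocked until all flags are resolved or escalated.
        return not self.flagged_areas and not self.needs_human_review


def automated_review(q: Questionnaire) -> ReviewDecision:
    """Return an on-the-fly decision listing risk areas the team must address."""
    decision = ReviewDecision()

    if q.visible_to_minors:
        decision.flagged_areas.append("youth risk")
    if q.allows_user_generated_content:
        decision.flagged_areas.append("integrity: violent content / disinformation")
    if q.collects_new_user_data:
        decision.flagged_areas.append("privacy")
    # Novel or complex changes are escalated to human experts instead of
    # being auto-approved, mirroring the split Meta describes in its statement.
    if q.changes_recommendation_logic:
        decision.needs_human_review = True

    return decision


if __name__ == "__main__":
    submission = Questionnaire(
        feature_name="teen_video_feed_update",
        collects_new_user_data=False,
        visible_to_minors=True,
        allows_user_generated_content=True,
        changes_recommendation_logic=True,
    )
    result = automated_review(submission)
    print("Flagged areas:", result.flagged_areas)
    print("Needs human review:", result.needs_human_review)
    print("Approved for launch:", result.approved)
```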

A former Meta executive said that reducing review efforts means "you're creating higher risk. Negative externalities from product changes are much less likely to be stopped before they start causing problems." In a statement, Meta responded that the company would still rely on "human expertise" to evaluate "novel and complex problems" while leaving "low-risk decisions" to AI.

1AI notes that Meta published its latest quarterly integrity report a few days ago, the first since the company changed its content review and fact-checking policies earlier this year. The report shows that, as expected, the amount of content removed dropped following the policy change, while bullying and harassment, as well as violent and graphic content, rose slightly.
