Meta relies on AI to assess privacy risks: 90% of updates are automated
Meta is deploying an AI-powered system to automate up to 90% of risk and privacy assessments for updates to its core apps, such as Instagram and WhatsApp, raising questions about transparency and accountability.
Key Points:
- Assessment automation: Meta plans to use AI to automate up to 90% of risk and privacy assessments for updates to its core apps.
- Streamlined process: Product teams will fill out a questionnaire and receive an immediate decision with the AI-identified risks and requirements to address before launch.
- Security concerns: Experts and former executives raise concerns about AI’s ability to prevent negative externalities and keep users safe.
- Regulatory compliance: Meta says it is investing in its privacy program and improving processes to better identify risks, simplify decision-making and improve the user experience, using automation for low-risk decisions while relying on human expertise for complex assessments.
Meta Platforms is deploying an AI-powered system to automate up to 90 percent of the risk and privacy assessments for updates to its core apps, including Instagram, WhatsApp and Facebook. The approach involves product teams completing a detailed questionnaire about their work, then receiving an immediate decision from the AI about potential risks and requirements to address before new features are released, according to internal documents seen by NPR. The stated goal is to speed up the review process, improving efficiency and consistency in low-risk decisions.
But the move has raised concerns among experts and former company executives that relying on AI could weaken Meta's ability to prevent unwanted side effects and keep users safe. In response, Meta said it has invested more than $8 billion in its privacy program and is committed to delivering innovative products while meeting regulatory requirements. The company emphasizes that while AI will handle low-risk decisions, complex assessments will remain with human experts.
This development is part of a broader context of growing attention to AI regulation. The European AI Act, whose first obligations became applicable in February 2025, requires companies to assess and manage the risks associated with AI systems, classifying them by level of risk and setting specific obligations for each category. In addition, international standards such as ISO/IEC 42001:2023 and ISO/IEC 23894:2023 provide guidelines for risk management and impact assessment of AI systems.
In parallel, Meta has faced criticism for its use of user data in training its AI models. In June 2024, the company updated its privacy policy to inform users that their public data could be used to develop and improve AI, citing legitimate interest as the legal basis. The decision prompted the privacy organization noyb to file complaints in eleven European countries, raising questions about transparency and users' control over their personal data.
In response to growing concerns, Meta introduced the “Frontier AI Framework,” a policy that distinguishes between high-risk and critical-risk AI systems, requiring enhanced safeguards and, in some cases, the suspension of development until risks are mitigated. This initiative seeks to balance technological innovation with accountability and user safety.
Meta’s increasing automation of risk assessments highlights the need for a balance between operational efficiency and protection of user rights in an ever-changing technological and regulatory landscape.