Meta and AI Training: The Use of User Data on Facebook and Instagram from May 27, 2025

On May 27, 2025, Meta (the operator of Facebook, Instagram, WhatsApp, and Threads) officially began using the personal data of European users to train its generative AI models. This followed extensive coordination with the Irish Data Protection Commission (DPC), which approved Meta’s updated approach. Meta had halted its original plans in 2024 at the DPC’s request, primarily due to concerns over transparency and a lack of user consent.

In preparation for this launch, Meta notified users in the EU and EEA about the intended data processing via in-app notifications from early May 2025 onward and provided an updated opt-out mechanism. Users who did not object by May 26, 2025, to the use of their publicly accessible content, such as posts, comments, and profile information, now have their data incorporated into the AI training systems.

Meta’s approach relies on an opt-out mechanism rather than obtaining explicit opt-in consent for including personal data in AI training. Users were not required to actively consent; instead, they could object within a specified timeframe. Similar procedures are known in other contexts, such as the electronic health records system in German healthcare. Public reactions to this method have so far been relatively muted. However, its legal and ethical evaluation remains subject to ongoing professional and societal debate.

How to File an Objection After May 27, 2025

Objections can be submitted through the Facebook or Instagram app and apply to all profiles linked to the respective account, eliminating the need for multiple submissions.

Step-by-step guide:

Facebook:

  1. Tap the profile picture at the bottom right.
  2. Select the gear icon (Settings) at the top left.
  3. Access the “Privacy Policy.”
  4. Click the link “(learn what this means for your rights).”
  5. In the pop-up window, select “Object.”
  6. On the following page—“Right to Object”—click “Object Now.”
  7. Select the relevant product (e.g., Facebook, Instagram, Threads, Meta Quest, AI Glasses, Meta AI app).
  8. Tap “How can I object to the processing of my information?”
  9. Then select “I want to object to the use of my information for Meta AI.”
  10. On the next screen, simply click “Send” (providing a reason is optional).
  11. A confirmation email acknowledging receipt of the objection will follow.

Instagram:

  1. Tap the profile picture at the bottom right.
  2. Open the menu (three horizontal lines) at the top left.
  3. Select “Privacy Center.”
  4. Click “Object to Purposes.”
  5. On the following screen, click “Send” (a reason is optional).
  6. A confirmation email will be sent.

General Form:

It is also possible to object without being a Meta community member. Individuals without active Facebook or Instagram accounts may be affected if their data appears in publicly shared content processed by others. The designated objection form is available at:
https://www.facebook.com/help/contact/510058597920541

What Happens to Your Data After Objection?

Meta states: “…your data will no longer be used for the future development and improvement of generative AI models at Meta.”

This means that data already incorporated into existing models prior to an objection remains embedded and is not removed. Users who objected by May 26, 2025, effectively prevented their publicly available content from being included in AI training. A later objection does not operate retroactively, but it prohibits further use of the data and ensures that future public content from those users is excluded from AI training. Objecting even after the deadline therefore remains meaningful.

What Are Generative AI Models?

Generative AI models are computational systems trained on billions of data points, including images, text, and audio. By analyzing and learning from this input, the models infer relationships and patterns across diverse content, enabling them to generate new outputs on demand, much as systems like ChatGPT do (see the sketch after the list below). Examples include:

  • Large language models (LLMs) that understand and produce natural language text (e.g., chatbots, translations, summaries).
  • Image generation models that create new images from textual descriptions.
  • Multimodal systems that process and integrate different input types (e.g., text plus images).
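
To make this concrete, here is a minimal sketch of how such a model is invoked in practice. It uses Python with the open-source Hugging Face transformers library and the publicly available GPT-2 model; both are assumptions chosen purely for illustration, since Meta’s own models and internal tooling are not public.

  # Minimal illustrative sketch: text generation with a small open-source
  # language model. Uses the Hugging Face `transformers` pipeline API and
  # the public GPT-2 model; Meta's production models and tooling differ.
  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")

  # The model continues the prompt based on statistical patterns learned
  # from its training data, which is the core mechanism described above.
  result = generator("Data protection in the European Union", max_new_tokens=30)
  print(result[0]["generated_text"])

The same pattern-completion principle underlies image generation and multimodal systems; only the input and output modalities differ.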

What Data Does Meta Use for AI Training?

Meta states that it uses publicly accessible information from users aged 18 and over, including:

  • Always publicly viewable information (not private chats), such as usernames, profile names, activities in public groups, comments, ratings, reviews, and avatars.
  • Content shared publicly at the user’s discretion, such as posts, photos, and videos.
  • Interaction data related to the use of Meta AI features.

Individuals without Meta accounts may be indirectly affected if identifiable data about them (e.g., photos or comments) is publicly shared by others. Uncertainty remains regarding the use of sensitive or private data, such as contact information. Meta indicates that such data may be used, for example, to help its models handle sensitive topics without discrimination.

What Happens to the Trained Models?

These AI models, trained on the data, are deployed globally for private and commercial users via various offerings, including:

  • Meta AI (chat functions, creative tools),
  • AI-powered creator tools,
  • Open platforms for research and development.

The underlying data is thus used beyond Meta’s internal purposes and is also made accessible to third parties.

Meta relies on legitimate interest under data protection law for processing personal data in AI training. The Higher Regional Court of Cologne (OLG Köln) preliminarily affirmed on May 23, 2025 (case no. 15 UKl 2/25) that Meta’s legitimate interest in data usage outweighs affected users’ rights to informational self-determination. The court found no breach of the GDPR or the Digital Markets Act (DMA). This interlocutory judgment is not final; the main proceedings remain pending.

Since Meta has already initiated training and claims data cannot be removed from models once incorporated, a contrary final ruling could necessitate the deletion of the relevant AI models.

Against the backdrop of the vast accumulation of personal data since social media’s inception, questions arise about the propriety of invoking legitimate interest, particularly since many profiles were created when their holders were still minors, and that data is now retroactively exploited for AI training. A more privacy-protective approach might have been to use only data published under the new framework from May 26, 2025, onward.

The OLG Köln judgment did not address these fundamental concerns, although it is undisputed that data integrated into AI models cannot be individually expunged. This raises profound issues regarding privacy, autonomy, and control over personal information.

The Regional Court of Cologne’s ruling of May 15, 2025, in the Netflix case (case no. 6 S 114/23) provides guidance on the judicial assessment of unilateral contractual modifications. The court held that a contractual change communicated solely via a pop-up window with a simple “accept” button does not constitute a valid contractual offer. Even embedding such clauses in standard terms and conditions, coupled with a special termination right, failed to satisfy the requirements of Section 307 of the German Civil Code (BGB). Netflix was consequently ordered to refund the excess payments.

Applying this standard to Meta’s practice reveals parallels, but with arguably weaker disclosure. Meta notified users primarily through in-app alerts that were visually indistinguishable from routine notifications and whose contractual significance may not have been readily apparent. A separate, proactive communication, such as an email, was absent.

This raises the question of whether such an inconspicuous notification suffices as a lawful basis for a contractual amendment authorizing Meta to process personal data for AI training without explicit consent. Such a unilateral modification could grant Meta an economic advantage through the exploitation of personal data, absent any active objection by users.

From a legal standpoint, it must be examined whether this approach complies with principles of transparency, voluntariness, and effectiveness in contract modifications. The Netflix ruling illustrates that technical feasibility alone does not guarantee legal permissibility.


A Moment to Reflect

Data, once embedded in an AI model, cannot be removed; the step is irrevocable. The data constitutes a digital imprint that persists despite changes in personal circumstances or legal frameworks. This permanence provokes critical reflection on control, accountability, and the relationship between individuals and technology.

What happens when scattered data points coalesce into a coherent portrait—not just any portrait, but one revealing character traits, preferences, or thought patterns? AI’s capacity to synthesize complex profiles from disparate signals is simultaneously captivating and disquieting. Is this still mere analysis, or has it evolved into automated interpretation of our digital selves?

Moreover, a new boundary emerges: between reality and technologically fabricated content. Voices, faces, even entire scenarios can now be convincingly simulated, often requiring only seconds of source material, creating what appears authentic but is synthetic. What does this mean for trust, evidentiary integrity, and identity?

Power dynamics also shift. Concentration of vast data and processing capabilities in a few hands creates new forms of influence, not necessarily malicious, but impactful on social structures. In a world where information is a resource, access to data increasingly shapes participation and governance within society.

Perhaps the crucial question lies not in singular risks but in the accumulation of myriad uncertainties. Now may be the time to pause, not out of fear, but to deliberately determine the societal relationship with a technology that reflects, understands, and potentially shapes us.