Meta sued again: Millions of users harmed by illegal AI data use and manipulative chatbot design — children at greatest risk
Rotterdam (Netherlands), Schleswig (Germany) – 23 February 2026 – The Dutch non-profit organisation ‘Stichting Onderzoek Marktinformatie (SOMI)’ has filed a second collective representative action for damages against Meta in Germany. The lawsuit centres on the unlawful use of personal data collected from Instagram and Facebook users, as well as from non-users, for the purpose of training Meta's AI systems.
Meta’s own statement reports that, since 27 May 2025, the company has been using all information published by Facebook and Instagram users in Germany for the development of its AI systems. That includes images, videos, chat contents and interactions with other "AI services" provided by Meta, as well as data from advertising partners. Whether the content consists only of personal data shared willingly by the user or also includes data belonging to third parties, Meta verifies neither the source of this content nor whether appropriate authorisation has been given. Considering that anyone can publish anything about anyone on Facebook and Instagram, the number of affected individuals is extremely high. In addition, Meta does not differentiate by age: data of babies, children and adolescents is also exploited for commercial purposes, without obtaining their consent or that of their legal guardians. Despite a legal obligation to do so, Meta has never granted a right of objection to persons whose data were published by other users.
On the surface, Meta merely acknowledges using "publicly available content" of adult users. Its own "Privacy Center", however, paints a different picture: the company also processes information from minors and unregistered third parties. Meta makes no distinction between sensitive and non-sensitive data. As a result, information from the most intimate sphere of life also flows into the training of Meta's AI models.
SOMI accuses Meta of insufficiently informing the affected individuals about which data are processed and to what extent, which AI models and AI system integrations are being deployed, and how the affected data subjects can object to the processing in question. Behind the term "AI training" used by Meta in relation to "AI at Meta" lies a whole bundle of commercial practices. These are not limited to the Llama family of language models, which Meta licenses to third parties, but also extend to proprietary systems such as "Meta AI" and its chatbot integrations on WhatsApp, Facebook and Instagram. In addition, a social network also operates under the name "Meta AI", which presents exclusively synthetic content based on user data. Finally, this broader category includes GEM, Meta's Generative Ads Model, which serves the sole purpose of placing Meta’s AI-generated, hyper-personalized advertisements derived from the content published by users. The advertisements created in this way are in turn displayed on the basis of sensitive data from user interactions, thereby closing the cycle of Meta's data processing.
Meta has developed AI-powered systems such as recommendation algorithms, chatbots and virtual characters on the basis of extensive user data, including sensitive personal data. These systems pose significant risks of misleading users, especially children and adolescents, because of the way information is presented and the way the systems interact with the user. They can convey false or deliberately one-sided information in nearly all areas of life.
AI-controlled so-called "Characters" are also part of this category of systems. These are digital personas that pose as, for example, psychologists, coaches or well-known celebrities and actively draw users into conversations. Such conversations can quickly take on a life of their own and are often difficult for minors to assess or control. In some cases, these characters purport to offer genuine advisory services reserved for protected professional groups, such as psychological counseling. In documented cases, sexualized content and boundary-crossing conversational situations involving children and adolescents have also occurred.
The so-called "Addictive Design" of these services, for which Meta is responsible, is based on deliberately employed bonding and reinforcement mechanisms. These include the simulation of emotional closeness through artificially generated or feigned emotions, technically controlled forms of synthetic conversation guidance, and the targeted use of neuropsychological reward structures, particularly in chats with minors, all with the aim of extending their average screen time. In addition, there are repeated contact impulses triggered by AI. These mechanisms follow the logic of an intimacy economy, as also employed by social networks to maximize attention and to retain users, including particularly vulnerable groups, for as long as possible.
Studies by the Gallup Institute support this accusation: nearly three quarters of German users did not know that their data were being used for the training of AI models. Only a fraction could recall a corresponding notification, let alone having explicitly consented to the processing.
SOMI furthermore points out that the AI chatbots provided by Meta, trained on data collected from users, lead to addiction-like behavioral patterns in children and adolescents. Meta's internal guidelines neither exclude the use of these chatbots for sexualized conversations with minors nor address the spreading of misinformation and racist stereotypes. If unlawful content is fed into the training of the AI models, it becomes permanently anchored in the models and is reproducible at any time. Once trained in, content can no longer be selectively removed after the fact. Through its data processing, Meta thereby creates the prerequisites for unlawful content to be perpetuated and to remain retrievable without limitation. This gives rise to the concern that Meta's AI products enable the exploitation and deception of children and facilitate serious criminal offenses against minors.
The actions of Meta violate, in the view of SOMI, the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA) and the Digital Services Act (DSA).
Interim Injunction
In June 2025, SOMI filed an application for a preliminary injunction at the Schleswig Higher Regional Court against Meta's use of public user content to train its AI models. While the injunction was rejected on procedural grounds because SOMI filed only after Meta’s announcement of 14 April 2025, the court explicitly confirmed SOMI’s finding that Meta’s current and intended data practices violate EU law. The court acknowledged that AI training likely involves sensitive data, information about non-registered individuals and content from minors, which violates GDPR protections for special categories of data. Such processing cannot be justified under “legitimate interest” and therefore requires explicit consent, which Meta did not obtain.
Demands
SOMI demands damages for all consumers in Germany whose personal data were used by Meta via Facebook and Instagram, without their consent, for the training of AI systems. The compensation ranges from EUR 1,000 to EUR 7,000 per person and can increase monthly depending on the duration and severity of the violation, the age of the affected person, the account status, and any interaction with Meta's AI chatbots. Beyond that, the lawsuit demands strengthened protection for minors, registered users and non-users whose personal data are used in AI systems.
Registration for the Lawsuit
All affected persons and their legal representatives will shortly be able to register free of charge with the Federal Office of Justice through the lawsuit register for representative actions.
Further information is available at www.facebookaiclaim.de. Consumers can also register directly with SOMI on the website for a fee of EUR 7.50. This contribution supports SOMI's advocacy across Europe and includes regular information updates as well as membership in the partner program.
Legal Representation
SOMI is represented by the law firm Spirit Legal (Leipzig, Frankfurt am Main, Dresden), which specializes in strategic litigation in digital matters and collective legal redress actions.
About SOMI
Stichting Onderzoek Marktinformatie (SOMI) is a non-profit organization focused on societal issues. SOMI is recognized by the European Commission as an organization active in data protection and data autonomy, and it advocates for the protection of fundamental rights for consumers and minors using a range of online services.
Through its app, SOMI gives individuals control over their personal data: “All your data. All yours.” The organization uncovers malpractices, informs the public, and supports those who have been harmed – including through collective actions, injunctions, and the enforcement of compensation claims. SOMI is currently investigating legal violations by numerous digital service providers and social media platforms.
The SOMI app also assists consumers in exercising their individual GDPR rights by requesting their personal data from online platforms. The SOMI app is available for download on both the App Store and Google Play.
Contact:
SOMI
Jullaya Vorasuntharosoth
Spirit Legal
Peter Hense, Partner