Injunction: X (formerly Twitter) in France
SOMI has filed for a preliminary injunction against X (formerly Twitter) before the Paris Court of First Instance (Le Tribunal Judiciaire de Paris) in France for alleged breaches of the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Unfair Commercial Practices Directive (UCPD) as transposed into French law.
Verdict
The verdict will be published here once the case has reached a conclusion.
What is the claim about?
Under Article 9 of the GDPR, processing special categories of personal data is generally prohibited unless one of the exceptions in Article 9(2) applies, such as explicit, informed consent from the user. Valid consent requires that users be fully aware of what data is collected, how it will be used, and for what purposes before agreeing. In X's case, there is no evidence that users gave such consent to their sensitive data being used for targeted advertising. The platform cannot treat this information as manifestly made public simply because it was inferred from user activity. X's privacy policy also fails to clearly disclose that sensitive data may influence ad targeting, leaving users unaware of how their most personal information is exploited.
X collects sensitive data, including information that may reveal users’ political opinions and religious beliefs, by tracking behaviour on its platform. It monitors clicks, likes, replies, and other interactions, then combines this data with information gathered from users across external websites and apps through advertising partners and affiliates. These combined datasets allow X to build detailed user profiles that infer personal interests and characteristics, although its privacy policy refers to this process only indirectly. The resulting profiles are used to tailor highly personalised advertisements to specific audiences, a practice known as microtargeting, which relies on analysing sensitive personal data to reach precisely defined user groups.
Originally, X displayed posts in simple chronological order, meaning users saw content based solely on when it was published. This system was later replaced by an algorithmic feed that determines which posts are shown, in what sequence, and how prominently, giving the platform significant control over what information users are exposed to. Research has linked this shift, particularly after the company's change in ownership, to a marked rise in hate speech and harmful content. Advertisements have reportedly appeared alongside such material, raising concerns that the platform monetises it. At the same time, reduced transparency and legal pressure on critics have made it harder to study, monitor, and address these risks.