Lukasz Olejnik
Security, Privacy & Tech Inquiries

Did the EU top court prohibit websites with user-generated content and anonymous users?

The recent judgment has the potential to remake the internet and websites as we know them in Europe (unless it's ignored, modified, or interpreted rationally). Let's drop the big question upfront: did the EU top court (Grand Chamber) just prohibit websites with user-generated content, or anonymous posts to forums?

An EU Court of Justice judgment (C-492/23) turns platform architecture into a huge compliance question. The context:

  • An anonymous user posted an ad with a real woman's photos and phone number, falsely presenting her as offering sexual services.
  • The marketplace removed it within an hour of notification.
  • By then the ad had been scraped and republished across multiple sites where it remains accessible despite the original takedown.

Most platforms would call this successful moderation: fast response, content removed, case closed. The Court saw it differently, as the foreseeable outcome of system design choices (specifically, that the website had content modifiable by external parties):

  • The marketplace's terms reserved the right to copy, distribute and transfer published content to partners.
  • Its infrastructure allowed anonymous posting without identity verification.
  • Its security model did not prevent scraping of sensitive ads.

These were design decisions, and under GDPR, design decisions about personal data make the site... a controller. The judgment converts abstract GDPR principles like accountability, data protection by design, security of processing into concrete engineering requirements that apply before publication, not just after complaint. For platforms accustomed to reactive content moderation backed by Digital Services Act safe harbours, this is a fundamental reframing: platforms and websites cannot moderate their way out of controller obligations they should have designed for in the first place. This may mean that some sites become unable to operate because compliance is economically unjustifiable.

What becomes difficult to defend after this judgment:

  • Anonymous posting in categories where sensitive data about real people is predictable
  • Sites that accept phone numbers, faces, and intimate claims with no identity checks
  • Relying on fast takedown as the main safety mechanism
  • Ignoring scraping and syndication risk for high-risk content

What the decision does

The decision starts from a simple point. A marketplace that structures, presents and monetises user ads is not just storing content for others. It decides what data must be entered, which categories exist, how ads are ranked and promoted, how long they stay up, and how content can be shared with partners. That means it determines the purposes and means of the processing within the meaning of the GDPR definition of controller. The user/client who posts the ad is one controller. The marketplace is another. Publishing an ad on the platform is joint processing under Article 26 GDPR.

In this specific case, the content of the ad is treated as sensitive personal data under Article 9. That is important only as a trigger. When sensitive data are concerned, simple notice and takedown are no longer enough. The decision uses three core provisions and translates them into concrete requirements:

  • Article 5(2) on accountability and Article 24 say that the operator must be able to show that its processing complies with the GDPR and that it has taken appropriate steps to prevent misuse.
  • Article 25 on data protection by design and by default requires that, when deciding how the system will work and while it is running, the operator builds in technical and organisational measures that implement the data protection principles and that, by default, do not expose personal data to an unlimited audience without the individual's intervention.
  • Article 32 on security of processing requires measures that are appropriate to the risks, such as unauthorised access, disclosure or loss of control.

In this case the risk is that once a sensitive ad appears on one site, it is scraped or syndicated and reappears on other sites where the individual cannot get it removed.

Technical consequences

Three pre-publication requirements for marketplaces (or websites in general) where ads contain or are likely to contain sensitive data:

  • Detect such ads before they are made public, using appropriate technical and organisational tools
  • Collect and meaningfully verify the advertiser's identity, and check whether that person is the one described in the ad
  • Do not publish the ad unless the advertiser can demonstrate a valid basis under Article 9 GDPR to process sensitive data (such as consent)

In addition, Article 32 is applied to copying: the operator must have security measures specifically aimed at preventing or clearly reducing automated scraping and unlawful redistribution of high-risk ads.

Now, there is no demand for zero risk, but there must be a clear security model and concrete controls. The problem is that it's very difficult to define what applies in which circumstances. One may also take this judgment further, as applying to any website with user-generated content. Naturally, placing such requirements on every website would be absurd. But then, how does one know when such requirements start applying to them? There is no clear way to tell. A litmus test would perhaps be a decision that they always apply to websites with user-generated content that structure, rank, and monetise publishing. That would still cover basically most websites out there.

Technical impact 2

User ads on large platforms become regulated processing under GDPR. This has some consequences.

Ad pipeline design. Ad flows need built-in checks for sensitive content. That means at least basic classification logic that flags high-risk ads involving personal information about identifiable people. This can combine category choices, keyword and pattern checks, and simple image checks. In practice, fully guarding against this risk is technically infeasible. The point is not perfect detection. The point is that the system has a path that marks certain ads as high risk and stops them from going live without further checks. In principle AI classification could speed this up, but it would not be 100% accurate and would generate false positives that block legitimate ads and frustrate human users. This point may now also apply to all information, not just ads.
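As a rough illustration (not something the judgment prescribes), a minimal sketch of such a flagging step might look like the following. The categories, keywords and the Ad fields are hypothetical placeholders; a real pipeline would add image analysis, human review and per-market tuning.

```python
import re
from dataclasses import dataclass

# Hypothetical high-risk categories, patterns and keywords; real lists
# would be maintained per market and reviewed regularly.
HIGH_RISK_CATEGORIES = {"personals", "adult_services", "massage"}
PHONE_PATTERN = re.compile(r"\+?\d[\d\s\-]{7,}\d")
SENSITIVE_KEYWORDS = ("escort", "sexual", "intimate")


@dataclass
class Ad:
    category: str
    text: str
    has_photos_of_people: bool  # assumed output of an upstream image check


def flag_high_risk(ad: Ad) -> list[str]:
    """Return the reasons an ad should be held for further checks.

    An empty list means the ad follows the normal flow; any reason routes
    it to identity and legal-basis verification instead of going live.
    """
    reasons = []
    if ad.category in HIGH_RISK_CATEGORIES:
        reasons.append("high-risk category")
    if PHONE_PATTERN.search(ad.text):
        reasons.append("contains a phone number")
    if any(word in ad.text.lower() for word in SENSITIVE_KEYWORDS):
        reasons.append("sensitive keywords")
    if ad.has_photos_of_people:
        reasons.append("photos of identifiable people")
    return reasons
```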

Identity verification and consent. Platforms need stronger advertiser verification. For a sensitive ad clearly about a specific person, the system should at least check whether the advertiser plausibly is that person. If not, the default outcome is refusal to publish. The default is non-publication of sensitive data to an unlimited audience unless the conditions are met. This point would make it impossible to post ads (or perhaps even other content?) for unvetted people. Taken beyond the ad-only realm, it ends anonymity on the web. Hopefully the judgment will not be used in such a way.
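Continuing the sketch above, a default-deny publication gate for flagged ads could be as simple as the following; the parameter names, and the assumption that identity and legal-basis checks happen upstream, are hypothetical:

```python
from enum import Enum


class Decision(Enum):
    PUBLISH = "publish"
    REFUSE = "refuse"


def publication_gate(flagged_reasons: list[str],
                     advertiser_verified: bool,
                     advertiser_is_subject: bool,
                     legal_basis_documented: bool) -> Decision:
    """Default-deny gate for flagged ads.

    A flagged ad only goes live when the advertiser's identity has been
    verified, the advertiser plausibly is the person the ad is about,
    and a legal basis (e.g. consent) for the sensitive data is recorded.
    Everything else defaults to non-publication.
    """
    if not flagged_reasons:
        return Decision.PUBLISH  # normal, non-sensitive flow
    if advertiser_verified and advertiser_is_subject and legal_basis_documented:
        return Decision.PUBLISH
    return Decision.REFUSE
```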

Security and scraping. Scraping and re-use of ads need to be treated as real threats in the security model, especially for sensitive categories. Reasonable imaginable tools could include: rate limiting, automated client detection, controls on feeds and APIs that carry sensitive categories. The operator must be able to explain why the chosen controls are appropriate given its scale, its technical capabilities and the potential impact of a single fake sensitive ad being propagated across other sites.
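As one hedged example of such a control (a generic technique, not anything mandated by the judgment), a per-client token-bucket limiter for endpoints that serve sensitive categories might look like this; client identification, automated-client detection and feed/API restrictions would sit alongside it:

```python
import time
from collections import defaultdict


class TokenBucketLimiter:
    """Per-client token bucket for endpoints serving sensitive categories.

    Each client may fetch roughly `rate` listings per second, with bursts
    up to `capacity`; sustained bulk access (typical of scrapers) runs out
    of tokens and is refused.
    """

    def __init__(self, rate: float = 1.0, capacity: float = 10.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)
        self.last_seen: dict[str, float] = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen.get(client_id, now)
        self.last_seen[client_id] = now
        # Refill tokens earned since the last request, capped at capacity.
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False
```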

Governance and liability. Platforms that structure and monetise user ads must treat themselves as controllers for that processing. That affects records of processing, impact assessments, internal policies and how they handle incidents.

Why this matters beyond classified ads

Now again, the elephant in the room is that the judgment's reach may not be limited to traditional classified-ad marketplaces. It may well extend to any platform where users can publish data about other identifiable people. That includes user profile platforms, review sites with photos or full names, dating apps, forums with user signatures or avatars, community platforms with member directories - any service that allows one user to tag, mention, or describe another person using personal data. That would mean that the following passage from the judgment:

In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.

Might as well have read: In any event, the operator of an online website, service or application cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the displayed information at issue published on that platform. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.

Because why not.

The logic: if the platform structures how that data is entered, presented, and made accessible, and if it monetises or otherwise exploits that structure, it is a controller. The classified ad marketplace in this case would just be the clearest example of a broader principle. Platforms that today allow users to post photos, names, contact details, or other personal data about third parties could now be operating under a model that this judgment directly challenges.

Digital Services Act safe harbours

Another elephant in the room is the impact on the Digital Services Act, which keeps the classic safe harbour idea for caching and hosting. A provider that does not know about illegal content and that acts quickly once notified can be protected from certain kinds of content liability. Many marketplaces have read this as a general promise that they can host whatever users upload as long as they respond to notices. According to the judgment, this no longer holds. In this case the marketplace had no knowledge of the fake sexual ad before the complaint and did remove it quickly afterwards. By DSA standards that looks like textbook hosting behaviour. Yet the operator is treated as having failed in its GDPR duties.

The practical message is that DSA safe harbours and GDPR controller duties are two separate things.

Entity-relative personal data proposal

There is a proposal to change the definition of GDPR personal data so that identifiability is evaluated per entity. Would that immediately invalidate such a judgment? No. The marketplace could still be said to be able to easily link the ad to a real person using means it already has, such as account data and contact channels. The entity-relative model would matter more in cases where an actor only ever sees irreversibly pseudonymised identifiers and truly cannot work out who is behind them, while some other actor can. For example, fully anonymous sites.

Now what

Supervisory authorities now have a clear template for enforcement actions. It no longer suffices to act quickly after detecting a misuse or abuse. At least it's not a blanket solution. Websites and platforms would then need to approach their design seriously. I mean, just apply the GDPR. But until something happens, you may not be certain whether the deployed solutions meet the requirements for the respective scale of the business.

Those wanting to be on the safe side may choose to prohibit anonymous posts, any personal data, any photos depicting people or documents. That would be overkill, essentially destroying the web as we know it.

The judgment tells large sites that handle user content that they are not neutral hosts. They are controllers for the personal data in that content, especially where it involves sensitive categories. That status brings design obligations.