By Odia Kagan
How far does a platform’s responsibility extend when a user posts someone else’s personal data in a classified ad, especially one involving sensitive subject matter like sex work? The Italian Data Protection Authority (Garante) recently fined online classifieds platform Bakeca S.r.l. after an unknown user published two ads, including an explicit offer for sex work, listing the phone number of a person who had nothing to do with the ads and never consented to their publication. The decision, which leans heavily on the CJEU’s Russmedia framework, sends a clear signal to online marketplace operators: if you are the data controller, terms-of-service disclaimers will not shield you from liability when third-party sensitive data slips through without proper verification.
What Happened
An unknown individual created accounts using temporary email addresses and posted two classified ads on Bakeca’s platform. Both ads listed the complainant’s phone number as the contact. The complainant, who did not engage in sex work and was completely unaware the ads existed, started receiving calls from prospective customers. She reached out to Bakeca, got the ads taken down, and then filed a complaint with the Garante.
Bakeca’s Defense
Bakeca argued it had adequate safeguards in place, including an automatic keyword filter that checks for prohibited words and previously blocked numbers, a manual review process for “sensitive” ad categories handled by a human quality team, verification of the email address used to create accounts, and the ability to block a phone number after a complaint or second report. The company also pointed to provisions in its privacy policy and terms of service requiring users not to post third-party personal data without consent.
The Garante’s Findings
The Garante was unpersuaded, finding multiple violations of the GDPR.
On the security measures, the Garante found that none of Bakeca’s existing safeguards addressed the core problem: verifying whether the person posting an ad actually owned the phone number listed in it. The one-time password verified only the email address used to create the account. Obscuring the phone number behind a click did not prevent the data from being processed in the first place. The blocking mechanism was purely reactive, kicking in only after a complaint. And the manual quality review was limited to checking for illegal content and terms-of-use compliance, not phone number ownership.
The Garante held that the complainant’s phone number constituted special category data under Article 9 GDPR because of the sexual context of the ads in which it appeared, even though the complainant herself did not engage in sex work: inaccurate data can still be “sensitive.” The Garante therefore held that Bakeca had no legal basis for the processing.
On the terms of service defense, the Garante made clear that simply telling users not to post third-party data without consent does not relieve the platform of its own obligations as a data controller.
The Russmedia Framework
Leaning on the CJEU’s Russmedia ruling, the Garante outlined the steps an online marketplace operator must take before publishing ads containing sensitive data:
- Identify advertisements that contain sensitive data under Article 9(1) GDPR.
- Verify whether the advertiser is the person whose sensitive data appear in the ad.
- If not, refuse to publish the ad unless the advertiser can demonstrate that the data subject gave explicit consent under Article 9(2)(a), or that another Article 9(2) exception applies.
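The decision gating described above can be pictured as a simple pre-publication check. The following is a minimal, hypothetical sketch: the category taxonomy, the function names, and the mock verification and consent stores are all illustrative assumptions, not anything prescribed by the Garante or the CJEU.

```python
# Hypothetical sketch of a Russmedia-style pre-publication check.
# Categories, function names, and the mock stores below are assumptions
# for illustration only.

SENSITIVE_CATEGORIES = {"adult", "health"}          # assumed taxonomy
VERIFIED_NUMBERS = {("user-1", "+39 333 0000001")}  # mock ownership-verification store
CONSENT_RECORDS: set[tuple[str, str]] = set()       # mock Article 9(2)(a) consent register

def phone_ownership_confirmed(advertiser_id: str, phone: str) -> bool:
    # In production this might confirm an SMS one-time code sent to the
    # listed number (assumed mechanism); here we consult a mock store.
    return (advertiser_id, phone) in VERIFIED_NUMBERS

def has_documented_consent(advertiser_id: str, phone: str) -> bool:
    # Explicit consent from the data subject, per Article 9(2)(a).
    return (advertiser_id, phone) in CONSENT_RECORDS

def may_publish(ad: dict) -> bool:
    """Apply the three Russmedia-style steps before publication."""
    # Step 1: identify ads whose category implies Article 9(1) sensitive data.
    if ad["category"] not in SENSITIVE_CATEGORIES:
        return True  # ordinary ads follow the normal moderation path
    # Step 2: verify the advertiser owns the listed phone number.
    if phone_ownership_confirmed(ad["advertiser_id"], ad["phone"]):
        return True
    # Step 3: otherwise refuse unless explicit consent (or another
    # Article 9(2) exception) is demonstrated.
    return has_documented_consent(ad["advertiser_id"], ad["phone"])
```

On this sketch, the facts of the Bakeca case would fail at step 2: the anonymous poster could not have confirmed ownership of the complainant’s number, so the ads would never have been published.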
The Garante ordered Bakeca to adopt technical and organizational measures to verify whether users listing a phone number in “sensitive” ad categories are the actual owners of that number or are authorized by the owner. It also issued a EUR 5,000 fine, amounting to 0.025% of the maximum statutory fine.
What does this mean for platforms, including in the US?
This is a notable decision for platforms that host user-generated classified ads, particularly in categories that touch on sensitive subject matter. Per the Garante’s decision, and more broadly the CJEU’s ruling in Russmedia, you cannot rely on your terms of service to shift GDPR compliance obligations onto your users. If you are the data controller, you need proactive, pre-publication verification mechanisms, especially where special category data may be involved.
U.S.-based companies should take note as well. If the Trump AI America bill, which proposes to repeal Section 230 of the Communications Decency Act, gains traction, platforms could lose the very immunity shield that has historically insulated them from liability for user-generated content. Indeed, the case of Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), one of the key decisions that helped motivate the passage of Section 230 of the CDA, has very similar facts. In that case, Prodigy, an early online service provider, was held liable as a publisher for defamatory content posted by an anonymous user on one of its message boards. The core reasoning was that because Prodigy exercised editorial control over user content (using content guidelines and screening software), it was treated more like a traditional publisher than a passive distributor.
The Garante’s decision in this case arguably comes full circle: it holds that a platform that does exercise a degree of editorial review (Bakeca’s manual quality review, keyword filters, etc.) still hasn’t done enough, and that more proactive verification is required. If Section 230 were repealed or significantly narrowed under the AI America proposal, U.S. platforms could find themselves in a similar position to Bakeca, facing direct liability for third-party content even where they have some moderation in place, but not enough to catch the specific harm at issue.