Hacker News

Seems like a pretty big overreaction IMO. Advertisements deserve stricter regulation than general user-generated content because they tend to reach far more people. The fact that they aren't regulated that way has resulted in something like 10% of all ads shown being outright scams or fraud[0]. And they should never have allowed the ad to air in the first place - it was patently illegal even without considering the GDPR.

If these companies aren't willing to put basic measures in place to stop even the most obviously illegal ads from airing, I have a hard time sympathizing with them when they get their just deserts in court.

[0]: https://www.msn.com/en-us/money/personalfinance/meta-showed-...



> Advertisements deserve more strict regulation than general user-generated content because they tend to reach far more people.

They deserve strict regulation because the carrier is actively choosing who sees them, and because there are explicit fiscal incentives in play. The entire point of Section 230 is that carriers can claim to be just the messenger; the only way to make sense of absolving them of responsibility for the content is to make the argument that their conveyance of the content does not constitute expression.

Once you have auctions for ads, and "algorithmic feeds", that becomes a lot harder to accept.


>The entire point of Section 230 is that carriers can claim to be just the messenger

Incorrect, and it's honestly kinda fascinating how often this meme shows up. What you're describing is "common carrier" status, like an ISP (or FedEx/UPS/the post office) would have. The point of Section 230 was specifically to enable not being "just the messenger"; it was part of the overall Communications Decency Act, intended to aid in stopping bad content. Congress added Section 230 in direct reaction to two court cases (against Prodigy and CompuServe) which made service providers liable for their users' content when they didn't act as pure common carriers but instead tried to moderate it and, naturally, could not catch everything. The specific fear was that this left only two options: either ban all user content, which would have crippled the Internet even back then, or cease all moderation, turning everything into a total cesspit. Liability protection was one of the rare genuine "think of the children!" wins, enabling a third path where everyone could do their best to moderate their platforms without becoming the publisher. Not being a common carrier is the whole point!


> Congress added Section 230 in direct reaction to two court cases (against Prodigy and CompuServe) which made service providers liable for their users' content when they didn't act as pure common carriers but instead tried to moderate it and, naturally, could not catch everything.

I know that. I spoke imprecisely; my framing is that this imperfect moderation doesn't take away their immunity — i.e. they are still treated as if they were "just the messenger" (per the previous rules). I didn't use the actual "common carrier" phrasing, for a reason.

It doesn't change the argument. Failing to apply a content policy consistently is not, logically speaking, an act of expression; choosing to show content preferentially is.

... And so is setting a content policy. For example, if a forum explicitly for hateful people set a content policy explicitly banning statements inclusive or supportive of the target group, I don't see why the admin should be held harmless (even if they don't also post). Importantly, though, setting (and attempting to enforce) the policy only expresses the view of the policy itself, not that of any permitted content; in US law it would be hard to imagine a content policy expressing anything illegal.

But my view is that if they act deliberately to show something, based on knowing and evaluating what it is that they're showing, to someone who hasn't requested it (as a recommendation), then they really should be liable. The point of not punishing platforms for failing at moderation is to let them claim plausible ignorance of what they're showing, because they can't observe and evaluate everything.


Except this isn't limited to ads, is it? From the post it sounds like the ruling covers any user content. If someone uploads personal data to GitHub, GitHub is now liable. In fact, why wouldn't author names in open source licenses count as PII?


The judgement is a bit more nuanced than that: https://curia.europa.eu/juris/document/document_print.jsf?mo...

The court uses the phrase “an online marketplace, as controller” in key places. This suggests to me that there can be online marketplaces that are not data controllers.

The court cites several contributing factors for treating the platform as a data controller: it reserved additional rights in uploaded content, and it selected the ads to display. GitHub claims only limited rights in uploaded content, and I'm not sure whether it has any editorialized (“algorithmic”) feeds where GitHub selects repository content for display. That may make it less likely that it would be considered a data controller. On the other hand, licensing their repository database for LLM training could make them liable if personal data ends up in the models. I don't think that's necessarily a bad thing.


GitHub does include a small amount of algorithmic content in its recommendation engine: I have half a dozen projects "Recommended for you" on my GitHub home page.

I doubt that is enough to trigger this ruling, but algorithmic content is absolutely pervasive these days.


The author of the article is claiming it extends beyond ads.

That does not appear to be what the court actually said, however.

And I 100% believe that all advertisements should require review by a named human before posting, so that someone can be held accountable. In the absence of this, it is perfectly acceptable to hold the entire organization liable.


The ruling is about an advertisement, but:

> There’s nothing inherently in the law or the ruling that limits its conclusions to “advertisements.” The same underlying factors would apply to any third party content on any website that is subject to the GDPR.

So site operators probably need to assume it doesn’t just apply to ads if they have legal exposure in the EU.


You could always sue GitHub to find out.

Personally, I'm not buying the slippery slope argument. I could be wrong of course but that's the great thing about opinions: you're allowed to be wrong :)


> why wouldn't author names on open source licenses count as PII?

They are, but you can keep PII if it is relevant to the purpose of your activity. In this case the author needs you to share their PII in order for them to exercise their moral and copyright rights.


Yeah, it sounds like mirroring a repo to GitHub would violate this, as author names and emails are listed in commit history.
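To make the point concrete, here's a minimal sketch (assuming git is installed; the author identity is hypothetical) showing that every commit's metadata embeds a name and email, which any mirror of the repo republishes verbatim:

```python
import subprocess
import tempfile

# Sketch: create a throwaway repo, commit as a hypothetical author,
# and list the name/email pairs baked into the commit history.
with tempfile.TemporaryDirectory() as repo:
    def git(*args):
        return subprocess.run(
            ["git", "-C", repo, *args],
            check=True, capture_output=True, text=True,
        ).stdout

    git("init", "-q")
    # Hypothetical identity, set per-invocation so no global config is touched:
    git("-c", "user.name=Jane Doe", "-c", "user.email=jane@example.com",
        "commit", "-q", "--allow-empty", "-m", "initial commit")

    # This metadata travels with every clone, fork, and mirror of the repo:
    authors = git("log", "--format=%an <%ae>").strip()
    print(authors)
```

Rewriting this metadata after the fact requires rewriting history (e.g. with `git filter-repo`), which breaks every existing clone's hashes, so "erasing" it from mirrors is not straightforward.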



