> Congress added Section 230 in direct reaction to two court cases (against Prodigy and CompuServe) which made service providers liable for their users' content when they didn't act as pure common carriers but instead tried to moderate it and, obviously and naturally, could not catch everything perfectly.

I know that. I spoke imprecisely; my framing is that this imperfect moderation doesn't take away their immunity — i.e., they are still treated as if they were "just the messenger" (as under the previous rules). I deliberately avoided the actual "common carrier" phrasing.

It doesn't change the argument. Failing to apply a content policy consistently is not, logically speaking, an act of expression; choosing to show content preferentially is.

... And so is setting a content policy. For example, if a forum explicitly for hateful people set a content policy explicitly banning statements inclusive or supportive of the targeted group, I don't see why the admin should be held harmless (even if they don't post anything themselves). Importantly, though, setting (and attempting to enforce) the policy expresses only the view of the policy itself, not that of any permitted content; under US law it is hard to imagine a content policy that itself expresses anything illegal.

But my view is that if they act deliberately to show something — based on knowing and evaluating what it is that they're showing — to someone who hasn't requested it (i.e., as a recommendation), then they really should be liable. The point of not punishing platforms for failing at moderation is to let them claim plausible ignorance of what they're showing, because they can't observe and evaluate everything.


