The US 5th Circuit Court of Appeals ruled that certain administration officials – namely in the White House, the surgeon general, the US Centers for Disease Control and Prevention, and the Federal Bureau of Investigation – likely “coerced or significantly encouraged social media platforms to moderate content.”
What California wants is clarification and explanation of the moderation process as it applies to X. A product disclosure like this is common for nearly every other consumer product in the US. Prop 65, for example, routinely mandates this sort of disclosure for lead or cadmium content in a product.
The reason Musk specifically does not want to disclose this information is because the moderators were all sacked a year ago... I think California knows this.
It is not a question of what California wants. California is attempting to coerce. X is arguing that CA's coercion violates both the 1st amendment and Section 230. Glad to engage further if you or others want to address the merits of those arguments. No need to impugn intent onto specific individuals.
All laws are coercive. That's how laws work. So I don't even know what your first statement is trying to get at.
$15k/day -- roughly $5M/year -- on companies with over $100M in gross revenue (much less the several billion generated by Twitter) is not more coercive than many other laws. The penalties for some laws go up to and including death... so this is definitely within the typical range of penalties.
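Back-of-the-envelope, assuming the $15k penalty accrues for each day of a year (the per-day figure is from the comment above; everything else here is just arithmetic):

```python
# Annualize a hypothetical $15,000/day statutory penalty.
daily_penalty = 15_000
annualized = daily_penalty * 365
print(f"${annualized:,} per year")  # → $5,475,000 per year, i.e. roughly $5M
```

Against Twitter-scale revenue (several billion a year), that works out to well under 1% of gross.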
The law requires disclosing your policy and how you applied it. Musk is out on a limb if he's claiming that giving stats on what actions were taken is the same as the action itself.
> Prop 65 for example routinely mandates this sort of disclosure for lead or cadmium content in a product.
The dangerous (overt?) implication you're making is that some speech is "poisonous" and the government needs to step in and make sure the people aren't being "poisoned"
No? All they're requiring is clarification on what moderation (if any) is happening. This is almost the opposite of what you're describing, in that the more moderation you do, the harder such clarifications become - if you do none then there's nothing to disclose.
Of course, arguably most people _want_ some minimum level of content moderation, so whether it's beneficial to do more or less content moderation is up to the company, they just have to disclose it.
> Words can kill, a Massachusetts Juvenile Court judge decided last Friday, when he found 20-year old Michelle Carter guilty of involuntary manslaughter in the 2014 suicide of her then-boyfriend, Conrad Roy III.
"""
The Court in Brandenburg, in a per curiam opinion, held that Ohio's Syndicalism law violated the First Amendment. According to the Court, "constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action."
"""
The law does not restrict that. "Shouting fire in a crowded theatre" is a myth. Anyone parroting this immediately outs themselves as lacking even the most basic knowledge of the First Amendment.
The court case where that quote came from was overturned 54 years ago!
I was not aware of that court case, but that is enlightening. I have much learning to do.
But based on reading through the report at the findlaw article in a sibling comment, (in my opinion) I think it's a pretty dangerous precedent, and definitely a pillar of the breakdown of modern political discourse.
"The act of shouting fire when there are no reasonable grounds for believing one exists is not in itself a crime, and nor would it be rendered a crime merely by having been carried out inside a theatre, crowded or otherwise."
Your other responses have correctly explained why you're wrong, but I just want to add a little bit more context: "Shouting fire in a crowded theater" was an analogy used by Justice Holmes to describe the act of protesting the draft.
It did not come from a case about a theater and a human stampede as many naturally assume. It came from a case about a war protestor being arrested for telling people they should resist the draft (decidedly political speech.)
Instead of Prop 65, look at it like nutritional labeling rules. There is no implication that having to list, e.g., how much protein is in a serving of your product means that protein is dangerous.
Yes, but if you're not being pedantic for the sake of scoring points, all the aforementioned things are capable of causing significant harm if mishandled, so it's still quite relevant.
Equating the arbitrary or even malicious removal of a post from a social media platform with physical harm is an extremely absurd, out of touch, pampered, and privileged position.
Mishandling of heavy metals can cause lifelong effects, not just for those handling them but for anyone in the vicinity.[0] It's estimated that 1M people die per year from lead poisoning[1].
Content moderation cannot directly cause any physical harm. If you consider indirect physical harm related to all social media (which I'd have more sympathy toward), it would not come close to the effects of heavy metals and other substances known to the state of California to cause cancer, birth defects, or other reproductive harm.
But the government isn't allowed to regulate the speech based on harm unless that harm would result from lawless action which is also incited by the speech and probable to happen based on the speech.
People should really be thinking of this more as a "truth in advertising" type law. This isn't about whether Twitter follows its policies as written - you would be hard pressed to convict any company for merely not following its own rules about something that isn't illegal (allowing inflammatory speech on your platform, or even signal boosting such speech) - it's more about requiring companies to be honest about how they moderate their consumer participation.
This law is so consumers can make educated choices about the platforms they want to use.
That's a very good point and I have no arguments with it.
Unfortunately, nothing about the California law really addresses it. The Fifth Circuit Court decision regarding coercion of social media sites will bind to the states via the Fourteenth Amendment, so California can't really enforce anything if they disagree with a company's moderation policy.
That means the law reduces to perfunctory data collection, and it doesn't really tell consumers anything that logging into the site and going "Gee, this site sure is full of white supremacists advocating stochastic terrorism and nobody does anything about it" wouldn't tell them.
I don't understand why anyone would downvote this. Can't people ask questions these days? Especially questions that prompt significant discussions and clear the climate and misconceptions some of us have?
They are both under the jurisdiction of the US federal court system. Something applying to the White House means it applies to individual states as well.
I'm not sure that is necessarily true, but I think the essence of your point stands: based on past outcomes, it's likely that California could face a similar result to the case you are referring to.
Literally not in this case. Circuit courts establish binding precedent in their circuit, but not elsewhere. Out-of-circuit opinions can be cited as persuasive authority, but there is absolutely nothing that requires the 9th Circuit (which includes California) to listen to what the 5th Circuit says. Especially when the 5th Circuit is disagreeing with every other circuit to have considered the matter. [I haven't read the opinion in this case to know what it's asserting, but I do know that every opinion I did read on whether government urging of COVID-19 moderation qualified as unconstitutional state action concluded that the plaintiffs hadn't met their burden of showing that it did.]
Not a lawyer but if that’s the case, could California say “Fine, but Twitter can no longer do business in California”?
Edit: not necessarily saying they should, I'm just wondering if they can.
Edit 2: Looks like the most they could do is make it harder for social media companies in general to do business. If they were perceived as targeting Twitter then they could have grounds to sue.
Based on 20 minutes of reading so grain of salt applies.
Umm, the bill of rights is a set of restrictions on the _federal_ government. The last one is explicitly a statement that the states can do a lot of things that the federal government _can't_.
There is the supremacy clause, but goodness knows where that would end up here. _Everything_ involving real money or power seems to make it to the supreme court these days, and who knows what the political landscape will look like by the time it does (yes, I am asserting that the supreme court has become more political than it used to be, _and_ that it used to be pretty political...).
> the bill of rights is a set of restrictions on the _federal_ government
The First Amendment as it is literally worded is, since it specifically says "Congress shall make no law...". But the rest of the amendments have no such restriction; they just say certain things shall not be done, period. Given the Supremacy Clause, that means those provisions should apply to all levels of government, not just federal. (Granted, the courts originally did not interpret them that way, but IMO they should have.)
That said, current jurisprudence, regardless of the literal wording of the bill of rights, is that they apply to the States, even the First Amendment. IIRC most Supreme Court decisions along these lines have cited the Fourteenth Amendment.
> Umm, the bill of rights is a set of restrictions on the _federal_ government. The last one is explicitly a statement that the states can do a lot of things that the federal government _can't_.
Taken literally, yes. But legally, many (but not all) of the rights have been 'incorporated' to apply to the states. This includes the First Amendment.
> Umm, the bill of rights is a set of restrictions on the _federal_ government.
That hasn't been the case since the ratification of the 14th Amendment way back in 1868.
> All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
Courts have repeatedly held that the Bill of Rights does apply to the states, by means of this so-called "due process clause" in the 14th Amendment.
Edit: changed "incorporation clause" to "due process clause", as that seems to be the name under which it is more generally known.
If Twitter solicits you to purchase Twitter Blue, that's Commercial Speech. If Twitter bans your account for praising Hitler, that's [Twitter exercising] political speech: Twitter would be protected by the First Amendment. The mere fact Twitter is a commercial, monetized service doesn't trigger a Twitter-wide First Amendment exception—any more than, say, the New York Times being a for-profit corporation opens the door for the feds to censor its political columns. Even if they're behind a paywall.
Commercial Speech is a narrow carve-out for "advertisements and solicitations". It's not applicable to Twitter moderation.
There is a good argument that company policies about product use are commercial speech. "Here, take this opioid, we have funded studies that say it won't hurt you" got regulated pretty hard. "We think the 'woke mind virus' is worse than capital-F Fascism and will moderate that way" is very much about Twitter's product.
Sure - Twitter banning (or not banning) your account for praising Hitler is political speech. However, Twitter stating "we are/aren't a platform for free speech" or "our moderation will/won't ban your account for praising Hitler" is commercial speech, similar to soliciting you to purchase Twitter Blue: it's a statement about the media service they're offering as part of their business, and that's something that can be reasonably compelled.
The law does not make any requests about how Twitter should moderate things; it asks for information about how Twitter does moderate things. First Amendment protection should ensure that the government is prohibited from imposing restrictions if a company says it will/won't ban accounts for praising Hitler. However, the people certainly have the right to take action in response to that, and the government has the right to compel Twitter to disclose to these people truthful information about their media product.
If they have a policy document stating "posts which contain more than three letters 'z' shall be deleted", they have a right to moderate this way if they wish - however, do they have a constitutional right to keep that policy document secret from the public? The way I see it, laws are permitted to regulate the disclosure of company policies.
[0]: https://www.nytimes.com/2023/09/08/business/appeals-court-fi...