Hacker News | lingrush4's comments

After reading the examples, I trust Zuckerberg more than the author of this article. And that's a really low bar. The evidence for Zuckerberg lying here is flimsy at best. It's almost like the author doesn't know what lying even is.


I would say it's a very high bar: convincing you to trust him is literally Zuck's entire business from day one.


“They ‘trust’ me. Dumb fucks.” - Mark Zuckerberg


Socialism never took off in America because Americans know that disincentivizing work shrinks the economy and makes everyone poorer.


Your PM most definitely would not tell you to skip a feature that is needed for your emails to be delivered to Gmail accounts. What a preposterous thing to lie about.


The PM wouldn't know about that Gmail behavior at that point in the development cycle.


Closing a small patch of airspace while military activity is occurring is not authoritarianism. Get a grip, you lunatic.


I don't think anyone said closing this airspace was an authoritarian act... double check the posts above.


No, but the whole situation having been caused by shooting down a birthday balloon is macabre incompetence.


I think the comment you replied to said the same thing you did.


Trump needs to be impeached immediately for this. How dare he close airspace and then just lift that closure once the danger has passed.


What danger?


What kind of government would use their statutory authority to shut down an airport when there is a risk to the planes?

Why do you think the FAA doesn't have this authority? Or, why do you think the FAA shouldn't have this authority?

In other words: This may have been needed but poorly executed; this may have been incompetent planning and response. But I wouldn't call the FAA shutting down an airport "police state".


>> What kind of government would use their statutory authority to shut down an airport when there is a risk to the planes?

It could be either an incompetent government or an authoritarian government that is trying to militarize certain institutions of civilian life.

>> Why do you think the FAA doesn't have this authority? Or, why do you think the FAA shouldn't have this authority?

The FAA does indeed have the authority. The question is simply: why did the FAA choose to exercise its authority in this case? If there was a real danger to the public, then the FAA should be honest with the people and tell them what the danger is. That is what citizens should expect from a democratic government.

>> This may have been needed but poorly executed; this may have been incompetent planning and response. But I wouldn't call the FAA shutting down an airport "police state".

The reason I asked whether this is an example of police-state behavior is that, in this case, the government apparently took drastic measures without explaining to the people why it was doing so.


Google ought to rethink its policy of disclosing government subpoenas to users. Every time this happens, the media uses it to attack Google. They'd be better off leaving users in the dark about these legally required data disclosures. Even if most users don't go crying to the media when it happens, it's still not worth it.


Ultimately it's better for the public and users to be informed about this occurring, though. If Google wanted to, they could salvage it by explaining their legal duties and how those apply to these situations. I don't think Google is worried, though. They have multiple captive markets and have seen continued growth, so it's obviously not affecting the bottom line.

It's a good contrast to Apple where any bit of bad news that makes headlines becomes priority one to fix. Which just creates a privileged class of users and makes the brand look fragile.


Ever occur to you that it's good for Google if there's some public visibility of what Google is being forced to do?


That would solve exactly zero of the complaints surfaced in this lawsuit. Companies still have an incentive to maximize app usage regardless of whether the advertising is personalized.


In fairness, AI-generated CSAM is nowhere near as evil as real CSAM. The reason why possession of CSAM was such a serious crime is because its creation used to necessitate the abuse of a child.

It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.


Definitely agree on which is worse! To be clear, I'm not saying I agree with the French raid. Just that statements about severe crimes (child sexual abuse for the above poster - not AI-generated content) being "lesser problems" compared to politics is a concerning measure of how people are thinking.


> The reason why possession of CSAM was such a serious crime is because its creation used to necessitate the abuse of a child.

Used to? Still does. A convincing fake is still only a fake.

> It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.

Agreed. But the same conflation in the comments hereabouts is ... puzzling.

I mean, abuse of a photo == abuse of a child? Like, voodoo dolls? Creepy.


It may not be worse "objectively" or in terms of direct harm.

However, it has one big problem that is rarely discussed: the normalization of behaviours, interests, and attitudes. It just becomes a thing that Grok can do (for paid accounts), and people think, "ok, no harm, no problem." Long-term, there will be harm. This has been demonstrated over decades of investigation of CSAM.


>Normalizing of behaviour, interests and attitudes.

That's why all media depicting violence should be banned.

/s


Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.


>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user.

There is no way this is true, especially if the system is PaaS only. Additionally, the system should have a way to tell if someone is attempting to bypass its safety measures and act accordingly.


> if requested by a savvy user

Grok brought that thought all the way to "... so let's not even try to prevent it."

The point is to show just how aware X were of the issue, and that they chose to repeatedly do nothing against Grok being used to create CSAM and probably other problematic and illegal imagery.

I can't really doubt they'll find plenty of evidence during discovery; it doesn't have to be physical things. The raid stops office activity immediately and marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.


Grok does try to prevent it. They even publicly publish their safety prompt. It clearly shows they have disallowed the system from assisting with queries that create child sexual abuse material.

The fact that users have found ways to hack around this is not evidence of X committing a crime.

https://github.com/xai-org/grok-prompts/blob/main/grok_4_saf...


Is there evidence this is the real prompt?


Grok makes it especially easy to do so.


What makes Grok special compared to random "AI gf generator 9001" which is hosted specifically with the intent of generating NSFW content?


If AI GF Generator 9001 is producing unwilling deepfake pornography of real people, especially if of children, feel free to raid their offices as well.


> What makes Grok special

X. xAI isn’t being raided. X is. If Instagram bought a girlfriend generator and built it into its app, it would face liability as well.


>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.

If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.

Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to their account. Did we force banks to put restrictions in place to prevent that from happening, or did we throw our arms up and say: oh well the French Government just wants to pick on banks?


Every artist is capable of drawing CSAM. Every 3D modeler can render CSAM. Ban all computers!!


Well, every human can be an artist with some training. I guess the solution is to ban humans.


And the artist is punished for doing so. Thank you for proving my point.


But we don't ban pencils do we?


Right, and X isn't having its GPUs banned.


You can use Photoshop to create CSAM too; should that be banned?


Reddit and BlueSky would be the first to go if that were actually the criteria for banning a platform.


Why? Has Reddit given their users tools to generate CSAM and non-consensual sexualized imagery? Bluesky certainly hasn't


Are you illiterate? The comment I replied to said X should be banned for trying to manipulate public opinion. It said nothing about CSAM.


God I hope so


You prefer those be shut down rather than the one run by a pedo who happens to be the richest person in the world and personally meddles in elections across the globe with money?


Why not all 3?

