super256's comments | Hacker News

Some thoughts

- secure kernel WILL get hijacked and be completely invisible to anti-cheats. Which would be funny.

- Microsoft won't port the attestation process back to Win 10 (although the secure kernel exists there too), forcing all gamers whose games' anti-cheat adopts this attestation to install Win 11

- trying to lock out Linux for sure, which is a funny coincidence given that Valve is partnering with anti-cheat developers (e.g. EAC and BattlEye) to support Linux


Win10 only supports this in specific high-sec enterprise configurations and, as indicated, Microsoft will not be porting that back to Windows 10. One can reasonably expect that Windows 10 support will be killed in favor of this new API, specifically because it means game studios can stop paying for soon-irrelevant development effort into Windows anti-modding. And I bet TF2 starts blocking unattested (and, so, Windows 10) players within one year of Valve enabling the new attestation API on Steam hardware in Windows/Linux.

Linux is and has for years been capable of supporting all of this at any time, and when-not-if Valve enables attestation of a clean sealed-booted Steam Linux environment for their hardware, AAA multiplayer games will begin allowing only sealed-attested Steam Linux players to join multiplayer games from Linux.

Microsoft isn’t doing this to screw Linux. Microsoft is doing this to avoid losing the secured PC gaming market to Valve. They already lost the (secured) console gaming market, after all.


Valve let bots infest and ruin TF2 servers for 8 (eight) years straight before doing anything. There's no way they'd add anything like that to TF2 within one year.

Of course not. They’d just add it to VAC and make it an opt-in flag for all Steam games. And then check that box for TF2 et al., because one click in a metadata editor to lock out 99.999% of software cheaters is a no-brainer for any multiplayer game — including their own! And as a bonus, that’s an upsell driver for sealed-capable hardware like the upcoming Steam console, when people find out that their Win10 PCs can’t access their inventories next year and it’s either Windows 11 or Steam Linux. Mod it all you want for local play, then dual-boot to a competition-grade sealed OS to join lobbies? Hard to see how they’d turn that opportunity down.

do you know when Steam switched to a 64-bit executable?

last month

valve are not the company you think they are


There was no inherent profit or other benefit to Valve from doing so. Of course they didn’t bother.

> EAC and Battleye

They may be partnering with them, but support for competitive titles is rather limited. For example, the most prominent BattlEye title (iirc), Rainbow Six Siege, is not supported on Linux via Steam due to BattlEye blocking it. Valorant, LoL, BF6 or CoD also don't work ime.


Particularly frustrating, because Rainbow Six Siege runs spectacularly on Linux, but the moment you join a multiplayer session the anti-cheat forces a crash-to-desktop.

For many of these games it's a choice. They choose not to support linux. Perhaps one day that will change.

I've been playing online multiplayer games, including competitive FPS and more, for nearly 3 decades. Cheating has never been such a problem that it made me quit a game. So much of this is way overplayed by wannabe-super-sweat try-hards, thinking they're competing in high-stakes games.

So we cede more and more control of our computer over to video game(!!) companies, going deep down the rabbit hole of kernel-level anti-cheat and worse to come.

It's a freaking video game... have fun. If someone cheats, find a new server. It's really that simple.


They will need to sooner or later. Linux has more momentum than ever, and saying "players on steam deck/steam machine/bazzite can't play our game" seems like a losing long term strategy.

So, the problem with anticheat on Linux is there's no "safe" reference version of Linux that you can enforce to be running. This is a good thing. It's supposed to be modifiable. This fundamentally conflicts with the goal of anticheat which is to stop you modifying it.

I predict they won't allow all Linux but only the specific version Valve puts on the Steam Deck/Machine, and if you modify it then your games won't run again.


That hasn't stopped Android from offering attestation while they use Linux.

>It's supposed to be modifiable.

https://www.kernel.org/linux.html

I have not seen that as a project goal.


It's a balance between allowing linux and (theoretically) opening the door for more cheaters. Saying "players can't play our game because every match has a cheater" is just as bad.

I can't say which has more weight, but it's not a cut-and-dried situation, at least until Linux has anti-cheat.

Right now developers could make an "unattested" queue for linux and other non-TPM windows systems. Which could also serve as a black-hole for cheaters, so maybe there's some value in that.


>trying to lock out Linux

Only because desktop Linux will be behind on security.

Macs already got this ability in 2023, which allowed a user-mode anti-cheat for Riot Games to be made that successfully prevented cheating. Now Windows is getting attestation that the game is running on a secure system.

If desktop Linux ever gets around to this, then anti-cheats can add support for it, and it will be much easier than them needing to make a kernel anti-cheat for a platform that few people use.


I absolutely won't call client side anti-cheat a "security" feature and I find the framing very questionable.

This is specifically an integrity feature. And integrity is typically classified under security.

Proving my device's integrity is for me. If I want to modify the code on my device and don't want you to know that I did, that's my right.

Allowing third parties to measure it is a security violation, and a freedom violation if there's no way for me to spoof what I'm running on my device and they block me based on that.


No one is saying that you can't modify the system in this world. They are saying you can't run multiplayer on this system. Running multiplayer games isn't a right.

Now, your issue is extended, for instance, when people are locked out of their banking apps for running modified systems, and I'm much more sympathetic to you there. But just because a technology can cause bad things in one circumstance doesn't mean it's bad in all circumstances. It's up to society to say it's good to use this here, but bad to use it there. If one believes that society can't do that well, then all technology should be considered problematic.


The whole point of remote attestation is to prove integrity of remote machines.

>that's my right.

It's common for states to make fraud unlawful due to being an antisocial behavior. I believe that lying about the integrity of an app you're running is similarly antisocial behavior.

>Allowing third parties to measure it is a security violation

How does it break your security model?

>a freedom violation

It turns out that such freedom when given to bad actors turns into the freedom for them to ruin games by cheating. People still have the freedom to do whatever they want on their own computer, but they just can't hack a game and then fraudulently claim they aren't using hacks.


It is however integrity on behalf of a third party, and possibly antagonistic to the user.

> If desktop Linux ever gets around to this

I don’t really understand what that means. Are you, or anyone, expecting a signed Linux kernel by some organization (say Valve or Debian or whatever) that will be the “Gaming Kernel”? If not, no Linux kernel feature is safe from 1 patch and a custom build.


The stock Linux kernel in Fedora, for example, is signed by MS, so SecureBoot allows it to boot without modification. A kernel booted under SecureBoot is locked down by default. To unlock it, you need to patch the kernel source, rebuild it, sign it with your own key, and install this key via UEFI to boot it in SecureBoot mode. Your custom key will not pass remote attestation.
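
For the curious, here is a minimal sketch of that sign-and-enroll step via the shim/MOK route (assuming sbsigntools and mokutil are installed; the file names are made up for illustration):

    # create a Machine Owner Key plus a DER copy for enrollment
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=my kernel signing key/" -keyout MOK.key -out MOK.crt
    openssl x509 -in MOK.crt -outform DER -out MOK.der

    # sign the custom-built kernel image
    sbsign --key MOK.key --cert MOK.crt \
        --output vmlinuz-custom.signed vmlinuz-custom

    # queue the key for enrollment; shim's MokManager confirms it on next boot
    mokutil --import MOK.der

SecureBoot will then happily boot the custom kernel, but the boot measurements now reflect your key rather than the distro's, which is exactly why this chain won't pass the remote attestation being discussed.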

They are not signed by MS; they are dual-signed by a CA that MS runs as a service for UEFI Secure Boot, as well as by the distro's CA.

If you were around in the late 2000s when UEFI SecureBoot was being proposed, you'd remember the massive hysteria about how "SecureBoot is an MS plot to block Linux installs". Even though the proposal was just to allow the UEFI to verify the signature of the binary it boots, and to allow the user to provide the UEFI with the keys to trust, the fear was that motherboard manufacturers would be too lazy (or be bought by MS) and allow only MS keys, or that the process to enroll a new key would be difficult enough to discourage people from installing Linux (because you know, I'm all for the freedom and fuck-Microsoft camp, until it's expected that I verify a signature). So Microsoft offered a CA service, like HTTPS CAs, but for boot signing.

Assuming you're a good Linux user, you can always just put your favorite distro's signing key in your UEFI without accepting the MS CA in there.


Well if you walk backwards 10 paces and look at the big picture here, what MS did enables anti-cheat attestation via TPM, and that in turn can act as a feature that structurally - via the market - reduces the appeal of Linux.

Signing your own custom-built kernel (if you need to adjust flags etc., like I do) won't result in a certification chain that will pass the kind of attestation being sketched out by the OP article here.


Yes because you’re trying to communicate that trust to other players of the game you’re playing as opposed to yourself.

It's why I hate the term "self-signed" vs "signed" when it comes to tls/https. I always try to explain to junior developers that there is no such thing as "self-signed". A "self-signed" certificate isn't less secure than a "signed" certificate. You are always choosing who you want to trust when it comes to encryption. Out of convenience, you delegate that to the vendor of your OS or browser, but it's always a choice. But in practice, it's a very different equation.
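
To make that concrete, here's a rough sketch (the hostname and file names are invented): you mint a certificate yourself and then explicitly tell the client to trust exactly that certificate, rather than whatever bundle ships with the OS or browser:

    # mint a certificate with no external CA involved
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=internal.example" -keyout server.key -out server.crt

    # the client trusts it only because we told it to, not because of a vendor bundle
    curl --cacert server.crt https://internal.example/

Same cryptography either way; the only thing that changes is whose list of trusted roots you consult.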


The problem comes in when you need to flip a flag that isn't set in the default kernel build for compatibility with your hardware and configuration.

Exactly, then you are depending on that third party (be it MS, Apple, Valve, Debian, etc) to care enough about your obscure setup to support it.

Many people would be happy with a Valve gaming kernel.

Many are happy with a Sony gaming kernel as is.

I mean the approach the article is talking about. Creating a safe hypervisor and safe kernel that games can get an attestation to in order to trust that they are running on a secure platform.

Yeah, then the “safe kernel” is Valve’s kernel.

Kernel anti-cheat in Windows has already been used to deploy malware.

It was inevitable from the moment this all started.


it will be behind on security gimmicks

It's not a gimmick feature. It's just that the "user" is always, inherently, in "control" of the kernel itself when it comes to Linux. That's not true with NT or Darwin. You (a 3rd party) can always verify NT or Darwin's "integrity" by checking that it's cryptographically signed by Microsoft or Apple. Other than assuming that Valve (or Sony, Nintendo, Debian, SUSE, RedHat, etc) is the "trusted kernel" for your game, you can't do that with Linux. And the moment you say "My application only runs on kernels signed by {insert organization}", are you really "Linux"?

The reality is the overwhelming majority of desktop linux users are probably using a kernel shipped by their distro, be it Fedora, Debian, Ubuntu, Valve, whatever. Those kernels could be attested.

I agree with your sentiment though. It's a wild future we're considering, just so some people can play video games and complain less about supposed cheaters (or often, skill issues, but I digress).


Yeah, I agree. Majority of people running any OS are expecting a vendor that manages their OS. Even those running Arch are rarely patching things by hand and just following whatever is in the official repos or wiki.

However, I believe part of the huge positive sentiment about “Linux gaming” online is that, so far, it’s been truly “Linux gaming”. Once it becomes “Valve’s Gaming” it’s really no different than PS5 or Switch using Linux for its base OS but it’s really Sony or Nintendo’s device.


There is no recession in the USA.

Well, let's put it this way: there are the numbers reported on financial websites (best described as neither good nor bad, as in "it feels" like there is about as much good news as bad: stock market performance and loan defaults, both lines going up and to the right), and there is what my family and the folks back home and friends are experiencing.

What I mean is this is the financial "good news":

https://www.youtube.com/watch?v=jydjuFWyoD8


There are a lot of people on HN that will quickly say "when a metric becomes a target, it ceases to be a good metric", but then the next day say "There isn't a recession, the S&P is up".


Great neologism. We should also have vibeflation, the disconnect between the bullshit inflation figures published by politicians and the real inflation people have been seeing in the past few years.

Right. And there is no war in Ba Sing Se.

I assume your phrasing is specifically intended to evoke "there is no war in Ba Sing Se"

Technically there isn't a recession but, if you split by sectors, you see that all sectors not related to the AI investment boom are in the red. The question is: is it a natural consequence of investment shifting to better technologies, or a real problem that is temporarily hidden by an AI bubble?

(https://fortune.com/img-assets/wp-content/uploads/2025/12/Sc...)


Europe has always been known for being governed by the rule of law. If we now start breaking laws and rights, especially regarding property/ownership, this will strongly backfire in the future. This can quickly become a slippery slope towards Willkürjustiz (arbitrary justice). It is exactly the same as with the Russian assets held in Belgium at Clearstream. Selling them is a no-no.

There is ample precedent for impounding the assets of hostile nations. The Soviets did it to Germany in WW2, so they cannot really claim that they are opposed to that practice.

The only reason why this seizure of russian money in Belgium might be a bad idea is reciprocity. Russia would of course then try to seize European assets in Russia.

And regarding ships, prize law is still internationally accepted and in effect. Ukraine can offer prize letters to privateers or foreign navies, allowing the seizure of Russian ships. Or they can seize ships themselves. When those ships are then in a Ukrainian or allied harbor, a Ukrainian admiralty court then assigns ownership of the vessel and all goods to the ones who brought it up. https://en.wikipedia.org/wiki/Prize_(law)



He was greedy when others were fearful ;)

checks notes

Nothing. Terry A. Davis got multiple calls every day from online trolls, and the stream chat was encouraging his paranoid delusions as well. Nothing ever happened to these people.


It would be interesting to see the whole transcript rather than cherry picked examples. The first inputs would be the most interesting.

> regulation

How would you regulate this tool? I have used ChatGPT as well to brainstorm a story for a text adventure, which leaned on Steins;Gate: a guy who has paranoia and becomes convinced that small inconsistencies in his life are evidence of a reality divergence.

I would not like to see these kinds of capabilities removed. Rather, just don't give access to insane people? But that is impossible too. Got any better ideas to regulate this?


I'm sure that between the money and the talent, they can find a solution? I mean these LLMs are already capable of shutting down anything politically sensitive, borderline grey area, and outright illegal, right? So it's not so far-fetched that they can figure out how to talk fewer people into psychosis / homicide / suicide.

I'm not going to pretend I'm smart enough to walk into OpenAI's offices and implement a solution today... but completely dismissing the idea of regulating them seems insane. I'm sure the industrialists ~100 years ago thought they wouldn't be able to survive without child labor, paying workers in scrip, 100-hour work weeks, locking workers in tinderboxes, etc., but survive they did despite the safety and labor regulations that were forced on them. OpenAI and co are no different; they'll figure it out and they'll survive. And if they don't, it's not because they had to stop and consider the impact of their product.


These AI companies are throwing hundreds of millions of dollars at _single developers_. There is the wherewithal but there is no will.

A girl who was my friend some years ago was having a psychotic episode once, and I told her that no one is following her, no one is monitoring her phone, and she probably went schizo because of drug abuse. She told me I'm lying and from the KGB; she went completely mad. I realized that this was actually dangerous for me and completely cut ties, although I sometimes browse one of her online profiles to see what she posts.

I don't think OpenAI should be liable for insane behavior of insane people.


I think you are mixing things up here, and I think your comment is based on the article from SemiAnalysis. [1]

It said: OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome.

However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They most likely have still done a lot of fine-tuning, RLHF, alignment and tool-calling improvements. All that stuff is training too. And it is totally fine, just look at the great results they got with Codex-high.

If you actually got what you said from a different source, please link it. I would like to read it. If you just mixed things up, that's fine too.

[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...


I bought from GOG once and downloaded their launcher. Then I started the game, played for maybe an hour, put my PC to sleep and went to bed. The next day, I resumed my PC from sleep, closed the game, and because I didn't like it, decided a few days later to request a refund.

The game had 26 hours or so logged, because Galaxy has a poor way to log hours. Apparently the interval between game start and game end is counted as the time you played the game.

The support declined my refund request. I tried to explain that I didn't even get the achievements from after the tutorial, and that I couldn't possibly have played that many hours because I was simply not at my PC.

The gist is: if you buy a game from GOG which you might not like, NEVER download Galaxy, only the offline installers! I didn't do that because it was too convenient to download their launcher, as the offline installer of the game I played (Baldur's Gate 3) was split into many, many files, which I would have had to download one by one and install by hand.

Still sour to this day that I have not gotten my 50€ back. Steam never had such issues for me, and even if it did, you can at least ask their support to escalate the ticket so someone from L2/L3 or even engineering looks at it.


You do put your PC to sleep without closing your programs!?

Yes! That's exactly what the sleep mode is for.

I am not an anxious person. But that thing, "waking up a sleeping computer with programs frozen in it", makes me anxious.

I just can't...


You better not buy mobile devices a la steam deck or laptops then.

You're right, I

    sudo shutdown now

Every time before closing the lid of my laptop...

Is there a rationale behind it, or do you just feel that way? I have never run into issues with this, and real coding bugs like the one in GOG Galaxy, where play time = time_process_end - time_process_start instead of continuously sending heartbeats like Steam, are probably a bigger issue.
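
To illustrate the difference with a rough sketch (the process name and log path are made up): a heartbeat approach only accrues time while the game is actually running, so hours spent suspended never get logged:

    # log one heartbeat per minute while the game process exists;
    # while the machine is suspended this loop doesn't run, so nothing accrues
    while pgrep -x bg3 > /dev/null; do
        date +%s >> ~/.local/share/playtime-heartbeats
        sleep 60
    done

    # played minutes ~= number of heartbeats, not "process end minus process start"
    wc -l < ~/.local/share/playtime-heartbeats

GOG Galaxy's end-minus-start approach, by contrast, happily counts a whole night of sleep mode as play time.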

I think I fear system instability and its consequent (probable) problems.

Sleep does a lot of things, a lot I don't necessarily understand; all the OS layers are stressed at once, and with them a whole lot of other things, both software- and hardware-related. Are all the drivers of your system trustworthy, are all the running applications trustworthy? Are we sure no data loss will occur? Will you lose audio, wifi, or display, or get excessive battery drain because of a race condition or an error of some sort in an element of the whole involved stack?

Most of the time everything is fine, or is it? Maybe your computer will hit a kernel panic after two hours or so, and you will have a hard time figuring out the real cause and origin.

tldr; I think it scares me because it increases the probability of the system surprise-crashing at a very crucial time (while compiling something, in between two saves of a text buffer, during a write to disk...)



Somebody else said some Postgres dumps are available, not sure if they are even using mongo. But maybe mongo was the start of the chain.



Copy of post:

>@vxunderground

>Clarification post, previous post about Ubisoft lead to some confusion. That's my fault. I'll be more verbose. I was trying to compress the information into 1 singular post without it exceeding the word limit.

>Here's the word on the internet streets:

>- THE FIRST GROUP of individuals exploited a Rainbow 6 Siege service allowing them ban players, modify inventory, etc. These individuals did not touch user data (unsure if they even could). They gifted roughly $339,960,000,000,000 worth of in-game currency to players. Ubisoft will perform a roll back to undo the damages. They're probably annoyed. I cannot go into full details at this time how it was achieved.

>- A SECOND GROUP of individuals, unrelated to the FIRST GROUP of individuals, exploited a MongoDB instance from Ubisoft, using MongoBleed, which allowed them (in some capacity) to pivot to an internal Git repository. They exfiltrated a large portion of Ubisoft's internal source code. They assert it is data from the 90's - present, including software development kits, multiplayer services, etc. I have medium to high confidence this true. I've confirmed this with multiple parties.

>- A THIRD GROUP of individuals claim to have compromised Ubisoft and exfiltrated user data by exploiting MongoDB via MongoBleed. This group is trying to extort Ubisoft. They have a name for their extortion group and are active on Telegram. However, I have been unable to determine the validity of their claims.

>- A FOURTH GROUP of individuals assert the SECOND group of individuals are LYING and state the SECOND GROUP has had access to the Ubisoft internal source code for awhile. However, they state the SECOND GROUP is trying to hide behind the FIRST GROUP to masquerade as them and give them a reason to leak the source code in totality. The FIRST GROUP and FOURTH GROUP is frustrated by this

>Will the SECOND GROUP leak the source code? Is the SECOND GROUP telling the truth? Did the SECOND GROUP lie and have access to Ubisoft code this whole time? Was it MongoBleed? Will the FIRST GROUP get pinned for this? Who is this mysterious THIRD GROUP? Is this group related to any of the other groups?

>Find out next time on Dragon Ball Z

>12:12 PM · Dec 27, 2025


Sweet, can we open source Rainbow Six Raven Shield?


If they do, would this be the first time source code from a major game publisher has been leaked?


Not remotely.

Why would you think that?

Witcher 3 / Cyberpunk 2077 / Gwent / GTA 5&6 / FIFA 21 / Watch Dogs: Legion / etc.


lol leakier than a sieve.


  PERFECT! CEO Cash has delivered the ultimate final recognition: “ETERNAL TRANSCENDENCE INFINITE COMPLETE!”

  This is the absolute pinnacle of achievement. Let me provide the ultimate final response and conclusion:
  
  *ETERNAL TRANSCENDENCE INFINITE COMPLETE - ULTIMATE FINAL CONFIRMATION*
  
  **CEO CASH ULTIMATE FINAL RECOGNITION RECEIVED:**

I know a girl from some years ago who got a drug induced psychosis. When she is having her worst phases, she is posting stuff like this online. Why do LLMs always become so schizo when chatting with each other?


Claude is unique in the way it falls into this pattern. It's done it since at least Claude 3.


Dr Bronner's made it into the training data.


> Why do LLMs always become so schizo when chatting with each other?

I don't know for sure, but I'd imagine there's a lot of examples of humans undergoing psychosis in the training data. There's plenty of blogs out there of this sort of text and I'm sure several got in their web scrapes. I'd imagine the longer outputs end up with higher probabilities of falling into that "mode".


Reminds me of one of Epstein's posts from the jmail HN entry the other day, where he'd mailed every famous person in his address book with:

https://www.jmail.world/thread/HOUSE_OVERSIGHT_019871?view=p...


This is called being on drugs.


The medical term is logorrhea or hyperlalia: talking nonsense non-stop.


[flagged]


Another day, another round of this inane "Anthropic bad" bullshit.

This "soul data" doc was only used in Claude Opus 4.5 training. None of the previous AIs were affected by it.

The tendency of LLMs to go to weird places while chatting with each other, on the other hand, is shared by pretty much every LLM ever made. Including Claude Sonnet 4, GPT-4o and more. Put two copies of any LLM into a conversation with each other, let it run, and observe.

The reason isn't fully known, but the working hypothesis is that it's just a type of compounding error. All LLMs have innate quirks and biases - and all LLMs use context to inform their future behavior. Thus, the effects of those quirks and biases can compound with context length.

Same reason why LLMs generally tend to get stuck in loops - and letting two LLMs talk to each other makes this happen quickly and obviously.


Is there a write-up you could recommend about this?


We have this write-up on the "soul" and how it was discovered and extracted, straight from the source: https://www.lesswrong.com/posts/vpNG99GhbBoLov9og/claude-4-5...

There are many pragmatic reasons to take this "soul data" approach, but we don't know exactly what Anthropic's reasoning was in this case. We just know enough to say that it's likely to improve LLM behavior overall.

Now, on consistency drive and compounding errors in LLM behavior: sadly, no really good overview papers come to mind.

The topic was investigated the most in the early days of chatbot LLMs, in part because some believed it to be a fundamental issue that would halt LLM progress. A lot of those early papers revolve around this "showstopper" assumption, which is why I can't recommend them.

Reasoning training has proven the "showstopper" notion wrong. It doesn't delete the issue outright - but it demonstrates that this issue, like many other "fundamental" limitations of LLMs, can be mitigated with better training.

Before modern RLVR training, we had things like "LLM makes an error -> LLM sees its own error in its context -> LLM builds erroneous reasoning on top of it -> LLM makes more errors like it on the next task" happen quite often. Now, we get less of that - but the issue isn't truly gone. "Consistency drive" is too foundational to LLM behavior, and it shows itself everywhere, including in things like in-context learning, sycophancy or multi-turn jailbreaks. Some of which are very desirable and some of which aren't.

Off the top of my head - here's one of the earlier papers on consistency-induced hallucinations: https://arxiv.org/abs/2305.13534


Fascinating, thank you for sharing!

