Telling people to write in kernel mode if they care about performance isn't realistic. For most people that would mean completely rewriting their code from scratch, forgoing high-level software stacks and languages, giving up on most databases, giving up on all manner of tools and techniques for high-velocity software development, giving up fault tolerance, dealing directly with fiddly hardware issues (when do I need a TLB shootdown?), etc.
Whereas disabling Spectre mitigations is a one-line config change.
For use cases where local system security really doesn't matter (of which there are a lot, let's be honest), a one-line config change for a 25% (or whatever it is now) performance boost is a pretty damned good deal.
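For concreteness, on Linux that one line is the upstream `mitigations=off` kernel command-line parameter; a sketch of where it goes, assuming a Debian-style GRUB setup (the file path and `update-grub` step vary by distro):

```shell
# /etc/default/grub -- append mitigations=off to the kernel command line.
# This disables the Spectre/Meltdown/MDS class of mitigations wholesale;
# only do this on machines that never run untrusted code.
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# Then regenerate the GRUB config and reboot:
#   sudo update-grub && sudo reboot
```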
I'm not sure I agree that cases where local system security really doesn't matter and performance matters are that plentiful, but I am happy to be convinced otherwise. In particular, just about any personal computing context doesn't count - you'd have to not run mutually-untrusted third-party code. That rules out web browsers with JavaScript, that rules out Android/iOS-style independent apps, etc. Sure, if you use the web without dynamic content and you use local office suites you're fine, but on the other hand, you don't really care about performance - a 486 will deliver enough performance to read textual content and run a word processor and spreadsheet.
Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound. It seems like performance is likely to be I/O-bound (getting assets from disk into memory), CPU-bound, and GPU-bound, but are you really making large numbers of syscalls? (Maybe this matters in online gaming?)
So that leaves basically some specific server workloads, and at that point I think some of these techniques start to be realistic. Pinning your work onto a core and using kernel-bypass networking is a pretty straightforward technique these days. It's not quite as easy as using the kernel interfaces, but it's pretty close, and it's definitely worth investing some engineering effort into if you care about performance - you can get much more than 25% speedups.
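The pinning half of that really is nearly a one-liner on Linux; a minimal sketch using Python's `os.sched_setaffinity` (core 0 is just an example, and the kernel-bypass networking side needs a framework like DPDK, which isn't shown):

```python
import os

# Pin the current process to CPU core 0 so the hot loop never migrates
# between cores and keeps its caches warm. A real deployment would also
# reserve the core from the scheduler (e.g. with the isolcpus boot flag).
os.sched_setaffinity(0, {0})

# Verify the affinity mask actually changed.
assert os.sched_getaffinity(0) == {0}
print("pinned to cores:", os.sched_getaffinity(0))
```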
I agree that writing in kernel mode is generally unrealistic (although if you're writing a kernel module for Linux, you still don't need to care about fiddly hardware issues - you've got the rest of Linux still running). Mostly I'd like to see more work like the paper I linked - there should be a standard build of Linux which has hardware privilege separation turned off for use in the cases where you actually can avoid hardware privilege separation (single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers, etc.), or at least a flag to spawn a process and leave it in ring 0. If the use cases are plentiful, this seems like it would be valuable for lots of people - and it'd also make it clear that this generally isn't an option you want on personal computers. (But I think the reason this hasn't been done in the last several decades is that there aren't actually that many use cases that are both genuinely single-user and syscall-bound.)
If you think a 486 is sufficient for reading textual content and running a word processor and spreadsheet, you haven't been paying attention to software bloat. A 486 would have a hard time just booting a modern OS, never mind the application software.
> So that leaves basically some specific server workloads,
The vast majority of servers don't run any untrusted code. Servers tend to do lots of syscalls for network I/O.
> Gaming is a context where you care about performance and you aren't using multiple apps at once, but (and I admit this is a bit of a naive guess) I'd be surprised if it's syscall-bound.
I would expect that interfacing with the GPU involves a fair number of syscalls -- but admittedly I'm also guessing.
> single-user VMs on cloud hosts, single-user data crunching machines, dedicated single-tenant database servers, game consoles without web browsers, ebook readers without web browsers
This is a lot of cases. I'd love to get 25% perf back on postgres, or 25% back on my air-gapped DAW, etc. etc.
Benchmark it - your air-gapped DAW is almost certainly spending very little of its time making system calls, and depending on workload, your Postgres probably isn't either. You'll get 25% back on syscall-heavy workloads but your workloads probably aren't syscall-heavy.
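One quick way to check is to time a burst of real syscalls against an equal-length pure-userspace loop; a rough Python sketch (assumes a Unix-like system with /dev/zero; for a live application, `strace -c` or `perf stat` on the running process is the honest measurement):

```python
import os
import time

N = 100_000

# Syscall-heavy loop: one read(2) per iteration against /dev/zero.
fd = os.open("/dev/zero", os.O_RDONLY)
t0 = time.perf_counter()
for _ in range(N):
    os.read(fd, 1)
syscall_time = time.perf_counter() - t0
os.close(fd)

# Pure-userspace loop: arithmetic only, no kernel crossings.
t0 = time.perf_counter()
acc = 0
for i in range(N):
    acc += i * i
compute_time = time.perf_counter() - t0

print(f"{N} syscalls: {syscall_time:.3f}s, userspace loop: {compute_time:.3f}s")
```

If your real workload's profile looks like the second loop, turning off mitigations buys you very little.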
While we're being honest, how many programs that get written are so desperate for performance that the only thing left to do is turn off security? And are the people who are able to even make this determination the kind of people for whom making a kernel module is unrealistic?
I didn't ignore it. My point is, the kind of people who truly need this kind of performance are already translating hotspots to assembler and so on, a kernel module is plenty practical for them. For nearly all computer users there is no excuse for turning off these mitigations.
I don't know. If I'm running an application server with only my own code on a dedicated server, and I can flip some switches to make it go faster, then that's pretty nice, no? Might save me from upgrading to a bigger (pricier) server. What am I missing?
I mean, sure, that site is nuts, it sorely needs documentation. But not every scenario needs Spectre protection.
The problem with the scenario you describe is: how will you ensure that no one ever forgets that this server is vulnerable and can never be used for certain things? And everyone on here advocating turning off the mitigations is assuming the only exploits are the ones we know about. But when has that ever been the case? If more people turn off the mitigations, black hats will be invested in finding ways to exploit it that we haven't realised yet.