"Ubuntu users of the 64-bit x86 architecture (aka, amd64) can expect updated kernels by the original January 9, 2018 coordinated release date, and sooner if possible."
Right - for desktop use though, there are Firefox and Chrome updates with mitigation. JavaScript exploits were the most dangerous desktop scenario.
For servers running Ubuntu, what is the risk, as long as my services don't run arbitrary user-uploaded executables? As far as I can tell, the risk is that a separate remote code execution exploit could now read all of memory, possibly leaking secrets. Assuming we get a kernel update in the next few days, I would need to install it immediately and rotate passwords and keys. Should I also revoke TLS certs? Is that paranoid?
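For what it's worth, once a patched kernel lands you can verify it's actually mitigating Meltdown. Newer kernels expose a sysfs "vulnerabilities" interface; a rough sketch (the file simply won't exist on kernels that predate the fix, which this handles):

```shell
# Check whether the running kernel reports Meltdown mitigation status.
# The sysfs file below only exists on kernels new enough to carry the fix.
f=/sys/devices/system/cpu/vulnerabilities/meltdown
if [ -r "$f" ]; then
    status=$(cat "$f")          # e.g. "Mitigation: PTI" or "Vulnerable"
else
    status="unknown (kernel predates the vulnerabilities sysfs interface)"
fi
echo "meltdown: $status"
```

If it prints "Mitigation: PTI" you're running with the fix; "Vulnerable" or "unknown" means keep watching for the update.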
I think it's naive to assume you're completely protected just because untrusted code isn't supposed to ever run. The simplest and safest route to peace of mind is to add some extra layers of protection, à la SELinux.
This won't stop the memory from being accessed, but it has a better chance of stopping things that can exploit the bug(s) in the first place.
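One wrinkle: Ubuntu ships AppArmor by default rather than SELinux, and either one serves as the extra layer being suggested here. A small sketch to see which mandatory access control LSM (if any) is actually active on a box:

```shell
# Report which mandatory access control layer is active, if any.
# Ubuntu enables AppArmor by default; SELinux is the Red Hat-family default.
if [ -r /sys/module/apparmor/parameters/enabled ]; then
    mac="apparmor: $(cat /sys/module/apparmor/parameters/enabled)"  # Y or N
elif [ -d /sys/fs/selinux ]; then
    mac="selinux: present"
else
    mac="no MAC LSM detected"
fi
echo "$mac"
```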
Revoking TLS certs is probably a little on the paranoid side.
I think you're on the right track -- just watch for the kernel update, and rotate passwords plus keys if it's not a hassle.
It’s naive to assume your system is perfectly secure against unauthorized code ever running. But that’s just a general point.
He is right, though. It would take two vulnerabilities to pwn him: one allowing remote code execution and then another (Spectre/Meltdown) to gain access to privileged data that shouldn’t be available in that context.
Too many machines wear too many hats. A single (physical) “secure” server should do as little as possible and run as small a codebase as possible. And it should never run code that isn’t authorized, sandboxed or otherwise.
We seem to be forgetting that in all this. If you only run code you trust, you are safe, and that can only be true if nothing but trusted code runs on your machine. We’ve taken running untrustworthy code in a “sandboxed” environment to mean “not running untrusted code,” when that’s totally not the case.
Things are definitely getting lost in the panic here. It's going to take several weeks for everyone to get their heads on straight. But yes, PTI is only justified in certain situations: if you don't allow untrusted code to run on your system (true of most servers), you will probably be fine with just your host CPU patched, because no one ever gets the chance to run the exploit.
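For anyone making that call: PTI can be opted out of at boot with the `nopti` kernel parameter (newer kernels also accept `pti=off`; exact support depends on kernel version). A quick sketch to check whether a box has explicitly disabled it:

```shell
# Check the current boot command line for PTI-disabling parameters.
# "nopti" and "pti=off" are the documented x86 opt-outs.
if grep -qw -e nopti -e 'pti=off' /proc/cmdline; then
    pti_param="PTI explicitly disabled on the kernel command line"
else
    pti_param="no PTI-disabling boot parameter found"
fi
echo "$pti_param"
```

Note that the absence of the parameter doesn't prove PTI is active; the kernel must also be new enough to have it.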
Of course, if an attacker uses a remote execution vulnerability to get onto the box with constrained user permissions, Meltdown lets them read memory without regard to those limits. In that way it's a long-term, pernicious threat that makes local exploitation significantly easier, and security-conscious organizations will still opt to enable PTI even though they don't run untrusted code. And since this allows reading all memory in the guest, anything sensitive that passes through memory, such as database credentials, could be sniffed without requiring further exploitation.
Also, consider that at present, PTI is disabled by default for AMD chips, based on AMD's assurances that Meltdown does not affect them. If you're running in the cloud and your host is AMD-based, you don't need PTI in either the guest or the host.
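Since that decision hinges on the CPU vendor, here's a trivial sketch to check what you're actually running on (this reads `/proc/cpuinfo`, so it reports "unknown" on non-x86 hardware):

```shell
# Identify the CPU vendor: "AuthenticAMD" means the AMD guidance applies,
# "GenuineIntel" means plan for PTI.
vendor=$(awk -F': ' '/^vendor_id/ {print $2; exit}' /proc/cpuinfo)
echo "cpu vendor: ${vendor:-unknown}"
```

In a cloud guest this shows what the hypervisor exposes, which normally matches the physical host's vendor.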