Human changes to ecosystems have altered the ranges of many bird species. It's not just climate -- farming, ranching, housing, and recreational land uses also tend to cause dramatic changes to bird ranges.
It's the other way around. SLI falling out of fashion is why there are no consumer boards with multiple x16 slots. There's no longer any demand for it on the consumer side, so the CPU vendors only provide lots of PCIe lanes for expensive chips.
On the server side, seven x16 slot motherboards exist.
Which there are in some places. Where I grew up I'd watch the ships sail into and out of the oil and gas terminals, always accompanied by tugs. More than one in case there's a tug failure.
>Seems to me the only effective and enforceable redundancy that can easily be imposed by regulation would be mandatory tug boats.
The way it worked in Sydney harbour 20+ years ago, when I briefly worked on the wharves/tugs, was that the big ships had to have both local tugs and a local pilot who would come aboard and run the ship. It seemed to me quite an expensive operation, but I honestly can't recall any big nautical disasters in the harbour, so I guess it works.
But here's the gist: sometimes you have an object you want to copy, but then abandon the original. Maybe it's to return an object from a function. Maybe it's to insert the object into a larger structure. In these cases, copying can be expensive and it would be nice if you could just "raid" the original object to steal bits of it and construct the "copy" out of the raided bits. C++11 enabled this with rvalue references, std::move, and rvalue reference constructors.
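For illustration, here's a minimal sketch of that "raiding" (the Buffer type is made up for the example): the move constructor steals the source's heap allocation instead of copying it.

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Hypothetical buffer type: moving "raids" the source's allocation
    // instead of copying it element by element.
    struct Buffer {
        std::vector<char> data;

        explicit Buffer(std::size_t n) : data(n) {}

        Buffer(const Buffer& other) = default;   // deep copy
        Buffer(Buffer&& other) noexcept          // steal the guts
            : data(std::move(other.data)) {}     // other is left empty but valid
    };

    Buffer make_buffer() {
        Buffer b(1 << 20);
        return b;                    // moved (or elided), not copied
    }

    int main() {
        Buffer a = make_buffer();    // cheap: no 1 MiB copy
        Buffer b = std::move(a);     // explicit "raid": a.data is now empty
    }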
This added a lot of "what the hell is this" to C++ code and a lot of new mental-model stuff to track for programmers. I understand why it was all added, but I have deep misgivings about the added complexity.
I find that this can reduce overall complexity. It makes it possible to use objects that cannot be copied (such as a file descriptor wrapper), and moving in most cases cannot fail. Without move semantics you'd have to use smart pointers to get similar results, but with extra overhead.
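To make that concrete, here's a sketch of a move-only file descriptor wrapper (the class name and details are mine, not from any particular library): no heap allocation, no reference counting, and copying is simply forbidden.

    #include <fcntl.h>
    #include <unistd.h>
    #include <utility>

    // Move-only RAII wrapper around a POSIX file descriptor.
    class UniqueFd {
        int fd_ = -1;
    public:
        explicit UniqueFd(int fd = -1) : fd_(fd) {}
        ~UniqueFd() { if (fd_ >= 0) ::close(fd_); }

        UniqueFd(const UniqueFd&) = delete;             // copying a fd makes no sense
        UniqueFd& operator=(const UniqueFd&) = delete;

        UniqueFd(UniqueFd&& other) noexcept : fd_(std::exchange(other.fd_, -1)) {}
        UniqueFd& operator=(UniqueFd&& other) noexcept {
            if (this != &other) {
                if (fd_ >= 0) ::close(fd_);
                fd_ = std::exchange(other.fd_, -1);
            }
            return *this;
        }

        int get() const { return fd_; }
    };

    // Ownership transfers out by move; nothing to copy.
    UniqueFd open_readonly(const char* path) {
        return UniqueFd(::open(path, O_RDONLY));
    }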
As someone with CKD and scheduled for an MRI, this was anxiety-inducing.
The Cleveland Clinic has a good overview[1]. Since there have been no reports of NSF (nephrogenic systemic fibrosis) in 15 years, I don't think it's rational to avoid MRIs based on gadolinium retention concerns.
This is asking too much. The management of trim, reallocation, wear leveling, and so much more is very complex. It's a full software stack hiding behind the abstraction of NVMe. Every manufacturer is running a different stack with different features and tradeoffs. The "stats" the author is asking for would be entirely different between manufacturers, and I doubt there is that much to be gained from peering behind the curtain.
It has been done previously for CPUs, which are much more complex than SSDs. Why couldn't each manufacturer expose whatever performance metrics there are, in whichever way they want (as the post argues, e.g., through SMART), and then let system engineers exploit this information to optimize their use cases?
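As a rough sketch of what I mean (assuming smartmontools and a Linux NVMe device at /dev/nvme0; the generic health fields are standard, while anything vendor-specific would need its own names and parsing), tooling can already scrape whatever counters a drive exposes:

    #include <cstdio>
    #include <iostream>
    #include <memory>
    #include <string>

    // Rough sketch: pull a few standard NVMe SMART/health fields out of
    // smartctl output. Vendor-specific counters would need their own parsing.
    int main() {
        std::unique_ptr<FILE, int (*)(FILE*)> pipe(
            popen("smartctl -A /dev/nvme0", "r"), pclose);
        if (!pipe) return 1;

        char line[512];
        while (fgets(line, sizeof line, pipe.get())) {
            std::string s(line);
            if (s.find("Percentage Used") != std::string::npos ||
                s.find("Data Units Written") != std::string::npos ||
                s.find("Media and Data Integrity Errors") != std::string::npos) {
                std::cout << s;
            }
        }
        return 0;
    }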
Seems like a poor example since CPU performance metrics differ not only between ISAs, and between vendors of one ISA (AMD vs. Intel, for example) but also between items from a single vendor. There's a 1000-page PDF that tries to explain what all the Intel PMU counters mean on different CPUs and it's full of errors and omissions as well.
Yes, but these differences don't really matter. System engineers have multiple techniques for variable selection and regularization (to handle differences across architectures) that let them select the counters that matter for their specific use cases.
But then saying "it is too much to ask" is just another way to limit what users can do with the specific resources they paid for.
The abstraction is the problem. Get rid of the translation layer, manage flash directly in the operating system, and suddenly the ambiguity dissolves. You would get meaningful, uniform statistics with semantics necessarily matching those used by your operating system.
Do I really want my relatively expensive general-purpose CPU to be burdened with the task of managing flash using software, when a relatively inexpensive ASIC does that job very quickly and efficiently?
There's a lot of non-trivial stuff that goes on inside of a modern SSD. And to be sure, none of it is magic; all of it could certainly be implemented in software.
But is that kind of drastic move strictly necessary in order to get meaningful statistics?
You don't need me or anyone else to tell you that you're free to call it whatever you want.
I'm going to keep referring to the QuickSync video encoding block in my CPU as "hardware," though, because the tiny lump of transistors that is dedicated to performing this specialized task is something that I can kick.
Relatedly, the business of managing raw NAND storage on Apple devices and abstracting it to operating system software as NVMe: that translation happens in hardware. That hardware is also something that I can kick, so I'm going to keep calling it "hardware".
QuickSync isn't analogous to an SSD controller. One is a specialized IP block that handles video streams; the other is a generic ARM or RISC core running specific software to handle low-level NAND operations.
One of the consequences of being part of an administration that lies constantly is that it is very difficult to trust that it is telling the truth. Since this is based on the Interior Department saying something very different from the company, I'm disinclined to give the benefit of the doubt to the Interior.
(They have been renamed to "Canada Jay," but that's a hilarious story for another day)