I think the mini is just a better value, all things considered:
First, a 16GB RPi that is in stock and that you can actually buy seems to run about $220. Then you need a case, a power supply (they're picky about power; not just any USB brick will do), and an NVMe drive. By the time it's all said and done, you're looking at close to $400.
I know HN likes to quote the starting price for the 1GB model and assume that everyone has spare NVMe sticks and RPi cases lying around, but $400 is the realistic price for most users who want to run LLMs.
Second, most of the time you can find Minis on sale for $500 or less. So the price difference is less than $100 for something that comes working out of the box and you don't have to fuss with.
Then you have to consider the ecosystem:
* Accelerated PyTorch works out of the box by simply changing the device from 'cuda' to 'mps'. In the real world, an M5 mini will give you a decent fraction of V100 performance (For reference, M2 Max is about 1/3 the speed of a V100, real-world).
* For less technical users, Ollama just works. It exposes OpenAI- and Anthropic-compatible APIs out of the box, so you can point Claude Code or OpenCode at it. All of this can be set up from the GUI.
* Apple does a shockingly good job of reducing power consumption, especially idle power consumption. It wouldn't surprise me if a Pi5 has 2x the idle draw of a Mini M5. That matters for a computer running 24/7.
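For the curious, the device swap really is about that small. A minimal sketch, assuming a PyTorch build with the MPS backend (the tensor shapes here are arbitrary):

```python
import torch

# Use the Apple-GPU backend when available, otherwise fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
y = x @ x  # this matmul runs on the Apple GPU when device == "mps"
print(device, y.shape)
```

The same code runs unchanged on a CUDA box by swapping the device string back, which is the whole appeal.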
> Lastly, who creates the list of forbidden parts? How will it be curated? And most importantly, how will it be secured that it isn't a set of blueprints which are then used to make firearms?
My conspiracy theory is that these laws (there have been a rash of them lately, and that feels off) are being promoted by some of the cloud-based 3d printer manufacturers. In other words, an attempt at regulatory capture.
As you note, determining from gcode whether the print is a gun is effectively impossible, and hiding the blocklist is hard anyway. Thus, the only way that could possibly work technically is with those cloud printers that take a .STL as input, routed through the printer manufacturer's servers.
Discriminating between "gun" and "not gun" from the .STL is still hard, but vastly easier than inferring from gcode. The blocklist story becomes at least coherent, if still highly suspect, to anyone who knows anything about computer security.
We can create a balanced partitioning of the 300 turkeys with a 300-bit random number having an equal number of 1's and 0's.
Now suppose I randomly pick a 300-bit number, still with equal 0's and 1's, but this time the first 20 bits are always 0's and the last 20 bits are always 1's. In this scenario, only the middle 260 bits (turkeys) are randomly assigned, and the remaining 40 are deterministic.
We can quibble over what constitutes an "enormous" bias, but the scenario above feels like an inadequate experiment design to me.
As it happens, log2(260 choose 130) ~= 256.
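That figure is easy to verify with Python's exact binomial (a throwaway sketch):

```python
from math import comb, log2

# Bits of entropy in a balanced split of the 260 free turkeys:
bits = log2(comb(260, 130))
print(round(bits, 1))  # ~255.7, just under 256
```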
> Are there any non-cryptographic examples in which a well-designed PRNG with 256 bits of well-seeded random state produces results different enough from a TRNG to be visible to a user?
One example that comes to mind is shuffling a deck of playing cards. You need about 226 bits of entropy to ensure that every possible 52-card ordering can be represented (log2(52!) ~= 225.6). Suppose you wanted to simulate a game of blackjack with more than one deck, or some other card game with 58 or more cards: 256 bits is not enough there, since log2(58!) ~= 260.
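A quick sanity check of those entropy counts (`shuffle_bits` is just a name I made up; the cutoff turns out to be 57 cards):

```python
from math import factorial, log2

def shuffle_bits(n):
    """Bits needed to index every ordering of an n-card deck."""
    return log2(factorial(n))

print(round(shuffle_bits(52), 1))  # ~225.6 for a standard deck
print(shuffle_bits(57) < 256)      # True: 57 cards still fits
print(shuffle_bits(58) < 256)      # False: 58 cards needs ~260 bits
```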
It's an interesting observation and that's a nice example you provided but does it actually matter? Just because certain sequences can't occur doesn't necessarily mean the bias has any practical impact. It's bias in the theoretical sense but not, I would argue, in the practical sense that is actually relevant. At least it seems to me at first glance, but I would be interested to learn more if anyone thinks otherwise.
For example: suppose I have 2^128 unique playing cards. I randomly select 2^64 of them and place them in a deck. Someone then draws 2^8 cards from that deck, replacing and reshuffling between each draw. Does it really matter that those draws weren't technically independent with respect to the larger set? In a sense they are independent, so long as you view what happened as a single instance of a multi-phase procedure rather than as multiple independent instances. And in practice, with a state space so much larger than the sample set, the theoretical shortfall doesn't matter one way or the other.
We can take this even further. Don't replace and reshuffle after each card is drawn. Since we are only drawing 2^8 of 2^64 total cards, this lack of independence won't actually matter in practice. You would need to replicate the experiment a truly absurd number of times in order to notice the issue.
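To put a number on "won't actually matter": drawing without replacement can only differ from fully independent draws when a repeat would have occurred, and a back-of-the-envelope birthday bound shows how unlikely that is here:

```python
from math import log2

draws = 2**8   # cards drawn
deck = 2**64   # cards in the deck

# Birthday bound: probability any two draws would hit the same card.
p_repeat = draws * (draws - 1) / (2 * deck)
print(round(log2(p_repeat)))  # -49: about a one-in-2^49 event
```

So you'd expect on the order of 2^49 replications of the whole experiment before the two procedures could even disagree once.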
If it had a practical impact, then it would imply that such statistical tests could be used as a distinguisher to attack the RNG. They fail as distinguishers, even with absolutely enormous amounts of data, so the bias is too small to have any influence in any practical experiment. You'd expect to need to observe about 2^128 states to detect bias in a 256-bit CSPRNG, which means storing 2^128 observed 256-bit states: about 2^133 bytes, or roughly 10^22 EiB. Good luck affording that with drive prices these days!
At a certain point a bias in the prng just has to become significant? This will be a function of the experiment. So I don’t think it’s possible to talk about a general lack of “practical impact” without specifying a particular experiment. Thinking abstractly - where an “experiment” is a deterministic function that takes the output of a prng and returns a result - an experiment that can be represented by a constant function will be immune to bias, while one which returns the nth bit of the random number will be susceptible to bias.
> At a certain point a bias in the prng just has to become significant?
Sure, at a point. I'm not disputing that. I'm asking for a concrete bound. When the state space is >= 2^64 (you're extremely unlikely to inadvertently stumble into a modern PRNG with a seed smaller than that) how large does the sample set need to be and how many experiment replications are required to reach that point?
Essentially what I'm asking is, how many independent sets of N numbers must I draw from a biased deck, where the bias takes the form of a uniformly random subset of the whole, before the bias is detectable to some threshold? I think that when N is "human" sized and the deck is 2^64 or larger that the number of required replications will be unrealistically large.
Suppose you and I are both simulating card shuffling. We have the exact same setup, and use a 256-bit well-behaved PRNG for randomness. We both re-seed every game from a TRNG. The difference is that you use all 256 bits for your seed, while I use just 128 and zero-pad the rest. The set of all shuffles that can be generated by your method is obviously much larger than the set that can be generated by mine.
But again: who cares? What observable effect could there possibly be for anybody to take action if they know they're in a 128-bit world vs a 256-bit one?
The analogy obviously doesn't generalize downward; I'd be singing a different tune if it were, say, 32 bits instead of 128.
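For concreteness, a quick sketch of the arithmetic behind that intuition:

```python
from math import factorial, log2

shuffle_entropy = log2(factorial(52))  # ~225.6 bits to index every shuffle

# A k-bit seed reaches at most 2^k shuffles; the reachable fraction is 2^-gap.
for seed_bits in (128, 32):
    gap = shuffle_entropy - seed_bits
    print(seed_bits, round(gap, 1))  # 128 -> 97.6, 32 -> 193.6
```

Either way the reachable set is a vanishing sliver of all 52! shuffles; the question is only whether any experiment could ever tell.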
Rent isn't high because of collusion. It's simple supply and demand.
There may be fewer people in Manhattan, but that's mostly because fewer people live in each housing unit. The market demands the same number of units because living preferences have evolved.
If you allow enough units to be built, it doesn't matter how much landlords try to collude; they won't be able to keep rents high. Someone will break ranks once the vacancy rate reaches 15%.
Rent is high due to supply and demand, but collusion lowers supply. Ironically enough, "affordable housing" arrangements and rent-control, which is common in NYC, are examples of such collusion and end up raising rents over time compared to the alternative where the collusion isn't there.
First, adjusted for inflation, new car prices really aren't that different from what they were 10, 30, 50, or 70 years ago. You have to compare like for like; no cheating by comparing a modern luxury car to a Ford Pinto. For example, the cheapest car in 1970 cost about $2,000, with no frills like a radio, a passenger-side mirror, or floor mats. That's equivalent to about $17,000 today. A base Nissan Versa today starts around $18,000, yet includes power windows and A/C.
Second, the maintenance requirements today are much, much lower than in the past. There's a whole list of expensive stuff you just don't have to think about with modern cars until long after those old cars would be at the junk yard (chassis lube, spark plugs, spark plug wires, carb and distributor, wheel bearings etc). That's a lot of labor you don't pay for, to say nothing of the parts!
Third, despite being heavier, more convenient and safer, modern cars have lower fuel consumption. Coming back to our Pinto vs Versa example, the Versa gets at least 50% better fuel economy.
Fourth, cars today just last longer. It used to be a minor miracle when a car wasn't rusted out after 10 years or the engine still ran after 100k miles. Today, your car might still be under warranty at that point.
> Why do people try to deny this obvious reality?
Because it is not at all obvious that that is, in fact, reality. It doesn't help to complain about easily-disprovable things like the affordability of cars.
>Because it is not at all obvious that that is, in fact, reality. It doesn't help to complain about easily-disprovable things like the affordability of cars.
Well you can just search "why are cars so expensive" and then you will find dozens of articles like the one below. I'm not American but I have the impression that cars were a kind of milestone in the life of young people in the past and this disappeared due to affordability. How much does it cost to live in a van nowadays? Can a part time fast food worker afford it?
I don't like this hedonic-adjustment argument you used; it feels like cheating. You risk sounding like the GP, claiming that houses nobody can afford today are in fact cheaper because they are less likely to catch fire.
If you compare similar widely sold cars across decades prices are fairly level in constant dollars in the US, at least in the low to maybe mid range. For example when I was buying a new car a little under a year ago I looked at 2025 models of some of my earlier cars.
A 2025 Nissan Sentra was pretty similar in constant dollars to my 1982 Datsun Sentra. A 2025 Honda Civic was pretty close to my 1989 Civic. A 2025 Honda CR-V was pretty close to my 2006 CR-V.
The average new car price now is quite a bit higher in constant dollars than the average new car price decades ago, but that is because preferences have shifted to cars that are at more expensive places in the lineup.
My 2006 CR-V, for example, cost more than my 1989 Civic in constant dollars, but CR-Vs sit at a higher price point than Civics. If I had gotten another Civic in 2006, it would have cost about the same as my 1989 Civic.
The American media writes articles about what gets clicks not what is true.
If you don't believe me, there's an enormous amount of freely available data on the internet. I am American, and I had grandparents who were American. Poverty was a whole different beast in the 1930s compared to today.
Yes, it does sound like typical German bureaucracy to make events like death outside the jurisdiction impossible unless the deceased has obtained prior approval to kick the bucket. :)
Well, I do enjoy the layers of protection implemented here. It sounds like you wouldn't?
The record from the land registry includes things like rights of way for third parties, known ground contaminations, utility/water/power lines, etc. -- all very relevant to me as a potential buyer. I did appreciate the notary's explanations of various aspects, which went beyond reading the contract out loud and making sure we verbally confirmed that we understood what we were about to sign. The process also forces both parties to have written copies of everything prior to the final meeting, which provides another chance to let it sink in and potentially reconsider -- which, in our case, we did. Also, notaries are actually trained to verify IDs, unlike a random clerk in some liquor store.
I understand one can experience it as "bureaucracy" and "annoyance" in an individual case, but then I wonder how much such people consider the bigger picture: what the benefit of all of it really is, for their own sake and for society's, and what kind of shitshow it would turn into if we got rid of all the "bureaucracy" -- such as the one described in the very blog post here.
Even if I (wrongly?) assume I am always on top of things and will never get ripped off, and only so-called stupid people will, I really don't need more angry people around me, on public streets, or in bars who fell for scams, made quick decisions they regret, or had their identity 'stolen'. If it were up to me, we could add even more such layers of protection, which you seem to see only as "(unnecessary) bureaucracy".
Is that actually true for SSDs? I was under the impression that manufacturers have a speed-capacity tradeoff "knob" they can adjust.
Specifically, each cell's floating gate can store one bit (SLC), two bits (MLC), three bits (TLC), or even more.
Obviously more bits means closer thresholds, making the gate more susceptible to electrical noise when reading and writing (and process variation in the dopant loading).
It's pretty easy to think up ways to pack in more bits that would slow down the read rate... such as applying multi-level ECC or just waiting longer for the read ADCs to settle.
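The squeeze is easy to see in a toy calculation (voltage window normalized to 1.0; the margins are illustrative, not datasheet values):

```python
# More bits per cell means exponentially more charge levels crammed into
# the same voltage window, so the spacing between thresholds shrinks fast.
for bits in (1, 2, 3, 4):          # SLC, MLC, TLC, QLC
    levels = 2 ** bits
    margin = 1.0 / (levels - 1)    # relative gap between adjacent levels
    print(bits, levels, round(margin, 3))
```

Going from SLC to QLC doubles capacity twice over while cutting the read margin by roughly 15x, which is why the slower, denser modes need longer sensing times and heavier ECC.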
> Meanwhile it should have less inflationary pressure on domestically produced stuff like housing.
This is pure fantasy. A weak dollar makes it more affordable for foreign capital to buy US assets, yes, including housing. The president himself recently admitted on video that he plans to make house prices rise.