
To be more accurate: the same business anyone who isn't a Native American has in the US.

It has a decent amount of thermal mass, so it takes quite a long time for it to reach air temperature during a cold snap or heat wave. This makes it a decent heat source during the winter and cold source during the summer - especially for short-term peaks.

You could get an even better result using the earth itself, but that is way harder to scale.


> The pilots of both aircraft are supposed to be keeping a constant visual watch for traffic.

How's that supposed to work with Instrument Flight Rules, for which you literally train by wearing glasses which block your view outside the window [0]? And how are you supposed to spot an airplane coming at you with a closing speed of 1000 mph (1,600 km/h)? It'll go from impossible-to-see to collision in a few seconds - which is why you won't find any "they didn't look outside the window enough" in the reports of accidents like Gol Transportes Aéreos Flight 1907.
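To put a rough number on that (my own illustrative figures, not from any accident report): even if you could spot a head-on target a couple of miles out, a 1000 mph closing speed leaves only seconds to react.

```python
# Rough time-to-collision for a head-on encounter.
# The 2-mile detection range is an assumption for illustration;
# a small head-on aircraft is often invisible much closer than that.
def seconds_to_collision(detection_miles: float, closing_mph: float) -> float:
    return detection_miles / closing_mph * 3600

print(round(seconds_to_collision(2, 1000), 1))  # 7.2 seconds
```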

The whole point of Air Traffic Control is to control air traffic. Sure, there's plenty of uncontrolled airspace where you do indeed have to look out for traffic, but it's uncontrolled precisely because it rarely if ever sees commercial traffic.

[0]: https://www.sportys.com/jeppshades-ifr-training-glasses.html


> How's that supposed to work with Instrument Flight Rules, for which you literally train by wearing glasses which block your view outside the window [0]?

If you're wearing "foggles" (the technical term is a "view limiting device"), you're legally required to have a safety pilot who is responsible for maintaining visual watch.

You never, ever wear those while flying solo.

> And how are you supposed to spot an airplane coming at you with a closing speed of 1000 mph (1,600 km/h)?

First, this near-miss was with a refueling tanker, which only travels at normal large-jet speed and is quite large.

If it was a fighter jet, you're right, it would be very hard to see. But frankly, compared to a fighter jet, everyone else might as well be a stationary object in the sky in terms of speed and maneuverability - so you're just relying on the fighter jet not to hit you. (They also have onboard primary radar and other fancy toys - so you hope they have more situational awareness of non-participating aircraft.)

> The whole point of Air Traffic Control is to control air traffic. Sure, there's plenty of uncontrolled airspace where you do indeed have to look out for traffic, but it's uncontrolled precisely because it rarely if ever sees commercial traffic.

Most airspace below 18,000 feet is still "controlled airspace", even though you have to look out for traffic - including commercial traffic. The big jets don't like to stay down there any longer than they have to, but that doesn't mean they're not there.

Being on an IFR clearance only guarantees that you're deconflicted with other IFR traffic. There's always the risk that there's non-participating traffic, especially in visual conditions (VMC). Class A airspace and transponder-required airspace help reduce this risk, but it's never completely eliminated.

Also, more importantly: The military largely plays by their own rules, entirely outside of the FAA.


That's a workstation board, not a regular consumer board, and it is over 5 years old by now - it has even been discontinued by Supermicro.

Building a new system with that in 2025 would be a bit silly.


The thing is, what's the market for them?

If you care even remotely about speed, you'll get an NVMe drive. If you're a data hoarder who wants to connect 50 drives, you'll go for spinning rust. Enterprise will go for U.3.

So what's left? An upgrade for grandma's 15-year-old desktop? A borderline-scammy pre-built machine where the listed spec is "1TB SSD" and they used the absolute cheapest drive they can find? Maybe a boot drive for some VM host?


Cheaper, sturdier, and more easily swappable than NVMe while still being far faster than spinning discs. I use them basically as independent cartridges: this one's work, that one's a couple TB of raw video files plus the associated editor project, that one has games and movies. I can confidently travel with 3-4 unprotected in my bag.

There's probably a similar-cost USB-C solution these days, and I use a USB adapter if I'm not at my desktop, but in general I like the format.


Did that for a while until I invested in a NAS... at that point those early SSDs became drives for my RPi projects, which worked well enough until I gave all my RPi hardware away earlier this year... those 12+ year-old SSDs are still running without issue.

Where do you add more storage after you've used your 1-2 NVMe slots and the M.2?

I would think an SSD is going to be better than a spinning disc, even with the limits of SATA, if you want to archive things or work with larger data or whatever.


Counterpoint: who needs that much fast storage?

4 M.2 NVMe drives is quite doable, and you can put 8TB drives in each. There are very few people who need more than 32TB of fast data access and aren't going to invest in enterprise hardware instead.

Pre-hype, for bulk storage SSDs are around $70/TB, whereas spinning drives are around $17/TB. Are you really willing to pay that much more for slightly higher speeds on that once-per-month access to archived data?
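Worked out for the 32TB example above (using the comment's own ballpark pre-hype prices):

```python
# Price-per-TB comparison for a 32 TB bulk-storage build.
# The $/TB figures come from the comment above and are ballpark
# street prices, not quotes for any specific drive.
capacity_tb = 32
ssd_per_tb, hdd_per_tb = 70, 17

ssd_cost = capacity_tb * ssd_per_tb
hdd_cost = capacity_tb * hdd_per_tb
print(ssd_cost, hdd_cost, ssd_cost - hdd_cost)  # 2240 544 1696
```

Roughly a $1,700 premium for archival data you touch once a month.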

In reality you're probably going to end up with a 4TB NVMe drive or two for working data, and a bunch of 20TB+ spinning drives for your data archive.


You can actually get a decent 4TB USB-C drive from Samsung. For most home users those are fast and big enough. If you get a Mac, the SSD is typically soldered to the main board. And you can get up to 8TB now. That's a trend that some other laptop builders are probably following. There's no need for separate SATA drives anymore except for a shrinking group of enthusiast home builders.

I have a couple of 2TB USB-C SSDs. I haven't bought a separate SATA drive in well over a decade. My last home built PC broke around 2013.


Only SATA made it common for motherboards or adapters to support more than 2-4 hard drives. We're back to what we used to do before SATA: when you're out of space you replace the smallest drive with something larger.

There are SATA SSD enclosures for M.2 drives. Those are cheap enough now that granny can still upgrade her old PC on the cheap.

Link? An adapter allowing a M.2 SATA SSD to be used in a 2.5" SATA enclosure is cheap and dead simple: just needs a 5V to 3.3V regulator. But that doesn't help. Connecting a M.2 NVMe SSD to a SATA host port would be much more exotic, and I don't recall ever hearing about someone producing the silicon necessary to make that work.

PCIe expansion cards? SATA isn't free and takes away from having potentially more PCIe lanes, so the only real difference here is the connector.

PCIe expansion card with M.2 slots?

(SSDs are "fine", just playing devil's advocate.)


> Maybe a boot drive for some VM host?

Actually that's a really common use - I've bought a half dozen or so Dell rack mount servers in the last 5 years or so, and work with folks who buy orders of magnitude more, and we all spec RAID0 SATA boot drives. If SATA goes away, I think you'll find low-capacity SAS drives filling that niche.

I highly doubt you'll find M.2 drives filling that niche, either. 2.5" drives can be replaced without opening the machine, too, which is a major win - every time you pull the machine out on its rails and pop the top is another opportunity for cables to come out or other things to go wrong.


M.2 boot drives for servers have been popular for years. There's a whole product segment of server boot drives that are relatively low capacity, sometimes even using the consumer form factor (80mm long instead of 110mm) but still including power loss protection. Marvell even made a hardware RAID0/1 controller for NVMe specifically to handle this use case. Nobody's adding a SAS HBA to a server that didn't already need one, and nobody's making any cheap low-port-count SAS HBAs.

Anything from the x4x generation onward has M.2 BOSS support, and in 2026 you shouldn't buy anything older than 14th gen. But yes, cheap SSDs serve well as ESXi boot drives.

I bought 2 of the 870 QVOs a few years ago and put them in software RAID 0 for my steam library. They cost significantly less per TB than the M.2 drives at the time.

I don't think there are any consumer boards which support this?

In practice you can put 4 drives in the x16 slot intended for a GPU, 1 drive each in any remaining PCIe slots, plus whatever is available onboard. 8 should be doable, but I doubt you can go beyond 12.

I know there are some $2000 PCIe cards with onboard switches so you can stick 8 NVMe drives on there - even with an x1 upstream connection - but at that point you're better off going for a Threadripper board.
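As a back-of-the-envelope check on those drive counts (slot counts here are assumptions for a typical consumer board, not any specific model):

```python
# How many x4 NVMe drives fit in a typical consumer PCIe layout?
# Assumed layout: one x16 slot bifurcated 4x4 (4 drives), a couple of
# spare PCIe slots at 1 drive each, and two onboard M.2 sockets.
def max_nvme_drives(x16_slots: int, spare_slots: int, onboard_m2: int) -> int:
    return x16_slots * 4 + spare_slots + onboard_m2

print(max_nvme_drives(x16_slots=1, spare_slots=2, onboard_m2=2))  # 8
```

That matches the "8 should be doable" estimate; going past ~12 needs switch cards or a HEDT platform.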


The path is obvious, is it not?

Having two independent cellular modems in a car is obviously silly, so it only makes sense to use the same module both for the mandatory emergency calling and for the telemetry.

Because the emergency calling is mandatory, it'll of course be made impossible to disable the modem - and by extension the telemetry. Oh, you disabled the telemetry? I bet that'll be called "tampering with safety equipment", and your insurance is now void, and your car is no longer road legal.

If the law doesn't mandate that eCall has to be fully independent, it'll 100% be used to spy on you.


> Having two independent cellular modems in a car is obviously silly

They should put mandating exactly that into the law.


But the EU can just (and maybe already does) mandate that such telemetry must be opt-in for the user, and on top of that the data collected that way must be handled in accordance with the GDPR anyway.

The RP2xxx also comes with excellent documentation and libraries. If anything, with the drag-n-drop flashing it is even easier to work with than an Arduino.

>The RP2xxx also comes with excellent documentation and libraries

Are they more in number and easier to use than the Arduino libraries?

>If anything, with the drag-n-drop flashing it is even easier to work with than an Arduino.

Why do you think the Arduino is more difficult than "drag-n-drop flashing" by comparison? Do you think one click is more difficult?


From a practical end-user perspective, being able to buy a device and make it perform a specific purpose by plugging it in and dragging a binary over is considerably easier than installing an IDE and then downloading, compiling, and installing from source.

Look at how Ben Eater built and set up the SIDKPico to serve as a SID audio chip in his 8 bit breadboard computer here: https://www.youtube.com/watch?v=nooPmXxO6K0


> Are they more in number and easier to use than the Arduino libraries?

It's not either/or, beyond what's in the native SDK RP2 boards also benefit from the Arduino ecosystem via the excellent and well maintained https://github.com/earlephilhower/arduino-pico


> Are they more in number and easier to use than the Arduino libraries?

I haven't done a direct comparison, but considering that the hobbyist ecosystem (which is the main source of those libs) is shifting over, it is just a matter of time.

> Why do you think the Arduino is more difficult than "drag-n-drop flashing" by comparison?

Because you need to install an IDE and mess around with things like serial drivers - and it gets a lot more complicated if you ever have to flash a bootloader. It's not hard, but it's definitely not as trivial as the RP2xxx's drag-n-drop.


The flipside of this is that the RP2xxx has rather poor hard IP, and the PIO is not quite powerful enough to make up for it.

They are great for basic hobbyist projects, but they just can't compare to something like an STM32 for more complicated applications.

They are a pleasure to work with and I think that they are great MCUs, but every time I try to use them for nontrivial applications I end up being disappointed.


STM32 is great!

> nontrivial applications

Out of curiosity, where do you find that you’re hitting the limits of what it can handle?


To give a very basic example: its timers can't do input capture. This means you have no easy way to do high-accuracy pulse time measurement. Compare the two datasheets, and the STM32's timers literally have orders of magnitude more features.

Only having two UARTs can be limiting - and PIO is a no-go if you want offloaded parity checking and flow control. The PIO doesn't have an easy external clock input. No CAN or Ethernet makes usage in larger systems tricky. There's no USB Type-C comms support. Its ADC is anemic (only 4 channels, with 36 IO pins?). There are no analog comparators. It doesn't have capacitive touch sensing. There's no EEPROM.

None of them are direct dealbreakers and you can work around most of them using external hardware - but why would you want to do so if you could also grab a MCU which has it fully integrated already?


Thank you for the really detailed reply.

>This means you have no easy way to do high-accuracy pulse time measurement

is 2.5ns (https://github.com/gusmanb/logicanalyzer) to 3.3ns (https://github.com/schlae/pico-dram-tester) resolution not enough for you?
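Those resolution figures fall straight out of the PIO's one-sample-per-system-clock limit. Assuming those projects run the RP2040 at roughly 400 MHz and 300 MHz respectively (my inference from the quoted numbers, not something stated in either repo):

```python
# A PIO state machine can sample a pin at most once per system clock
# cycle, so the best-case timing resolution is simply 1 / f_sys.
# The clock frequencies below are assumptions chosen to match the
# quoted 2.5 ns and ~3.3 ns figures.
def pio_resolution_ns(sysclk_hz: float) -> float:
    return 1e9 / sysclk_hz

print(pio_resolution_ns(400e6))            # 2.5 ns (heavily overclocked)
print(round(pio_resolution_ns(300e6), 1))  # 3.3 ns
```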


That is exactly the problem: you need to use PIO to constantly read the pins, and analyze the bitstream in software. At high speeds this takes up a substantial fraction of your compute resources, and it makes any kind of sleep impossible.

On a STM32 you can just set up the timer and forget about it until you get a "hey, we saw a pulse at cycle 1234" interrupt. The two are not the same.

My argument wasn't "this is completely impossible", but "this is needlessly complicated".


You can buy custom RP2040 boards and attach GPS. My projects are paired with an Si5351A and a 0.5 ppm TCXO. GPS gets you 1PPS.

Yes, but the goal was "accurate capture of timer count on input pulse", not "get a 1PPS pulse somewhere on your board".

Agreed; RP2040 doesn’t have true timer input-capture like STM32 (no CNT->CCR latch on edge). That criticism is fair.

What Pico/RP2040 projects do instead is use a PIO state machine clocked from the system clock to deterministically timestamp edges (often DMA’d out). It avoids ISR latency and gives cycle-accurate edge timing relative to the MCU clock. It’s not a built-in capture peripheral, but it achieves the same practical result.

If you want a drop-in hardware capture block with filtering and prescalers, STM32 is the better choice. RP2040 trades fixed peripherals for a programmable timing fabric.


They're also very poor value for money if you need millions of them.

There are similar chips at a quarter of the price.

Obviously for hobbyist stuff, $1 doesn't really matter.


Can you give an example of a chip with software-defined IO coprocessors that is 1/4 the price? The pricing I'm getting on the RP2350 is €0.60 per chip.

When I’ve compared to other dual-core SoCs with programmable IO, like NXP with FlexIO (~€11) or ESP32 chips with RMT (~€1) they are much more expensive than the RP2350.. is there a selection of programmable IO chips I’m missing?


That's the thing: with proper dedicated peripherals you don't need the software-defined coprocessors.

Sure, they are great if you want to implement some obscure-yet-simple protocol, but in practice everyone is using the same handful of protocols everywhere.

Considering its limitations, betting on the PIO for crucial functionality is a huge risk for a company. If Raspberry Pi doesn't provide a well-tested library implementing the protocol I want (and I don't think they do this yet), I wouldn't want to bet on it.

I think they are an absolutely amazing concept in theory, but in practice it is mostly a disappointment for anything other than high-speed data output.


In Cortex M33 land $15 will get you an entire NXP (or STM) dev board. An MCX-A156 will set you back about $5 which is about on par with an STM32H5. You can go cheaper than that in the MCX-A lineup if you need to. For what I'm working on the H5 is more than enough so I've not dug too deep into what NXP's FlexIO gives you in comparison. Plus STM's documentation is far more accessible than NXP's.

Now the old SAM3 chip in the Arduino Due is a different beast. Atmel restarted production and priced it at $9/ea. For 9k. Ouch. You can get knockoff Dues on Aliexpress for $10.

Edit: I'm only looking at single core MCUs here. The MCX-A and H5 lineups are single-core Cortex M33 MCUs. The SAM3 is a single core Cortex M3. The RP units are dual core M33. If the RP peripherals meet your needs I agree that's a great value (I'm seeing pricing of $1+ here).

Edit2: For dual core NXP is showing the i.MX RT700 at around $7.


People are discussing Arduino alternatives, so yes, we are firmly within hobbyist territory.

That's true in general, but people do use these hobbyist boards as an alternative to a manufacturer dev board when prototyping an actual product.

It's reasonably common in the home automation space. A fair few low volume (but still commercial nevertheless) products are built around ESP32 chips now because they started with ESPHome or NodeMCU. The biggest energy provider in the UK (Octopus) even have a smart meter interface built on the ESP32.


If I understand correctly, a big problem is that the calculation isn't embarrassingly parallel: the various chunks are not independent, so you need to do a lot of IO to get the results from step N from your neighbours to calculate step N+1.

Using more smaller nodes means your cross-node IO is going to explode. You might save money on your compute hardware, but I wouldn't be surprised if you'd end up with an even greater cost increase on the network hardware side.
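The blow-up can be sketched with a simple surface-to-volume argument for a 3D domain split into cubic chunks (a generic halo-exchange model, not tied to any particular solver):

```python
# Halo exchange: each chunk must send its boundary faces to its
# neighbours every step. Compute scales with chunk volume while
# communication scales with chunk surface area, so smaller chunks
# mean a worse communication-to-compute ratio (it grows as 6/side).
def comm_to_compute_ratio(n_total: int, n_chunks: int) -> float:
    side = (n_total**3 / n_chunks) ** (1 / 3)  # cells per chunk edge
    volume = side**3                           # compute per chunk
    surface = 6 * side**2                      # cells exchanged per step
    return surface / volume

for chunks in (8, 64, 512):
    print(chunks, round(comm_to_compute_ratio(1024, chunks), 4))
```

Splitting a 1024³ domain into 512 chunks instead of 8 quadruples the relative communication cost per node, and all of it crosses the network.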

