
I actually did already know that factoid but was struggling (and still am) to see how it relates to a wooden trough that merely holds cables.

Another interesting factoid about the catenary: Robert Hooke proved that it takes on the shape (though inverted) of the ideal arch, in terms of supporting loads above it. La Sagrada Familia in Barcelona is filled with them.


> but was struggling (am still) to see how it relates to a wooden trough that merely holds cables.

Overhead Catenary [1] is a standard term for a system that has two wires overhead: one suspended from the posts (forming a series of catenary curves), the other suspended from that cable at regular intervals (and held level relative to the track). The wood in Boston's system seems to replace the catenary cable.

[1] https://en.wikipedia.org/wiki/Overhead_line#Overhead_catenar...


The Gateway Arch is an inverted catenary structure

https://www.nps.gov/jeff/planyourvisit/materials-and-techniq...


In a nutshell, the overhead power lines hang from their support points as catenary curves.

This is important to the design of trains, because you have to calculate the variance in height over the catenary length (highest at the attachment points; lowest somewhere near the middle, depending on the incline).
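
As a rough sketch of that height calculation (my numbers, not from any real railway; it assumes level supports and ignores incline): the wire hangs as y = a*cosh(x/a), so the sag between an attachment point and the lowest point follows directly.

    import math

    def catenary_sag(span_m, a_m):
        # Sag of a level catenary y = a*cosh(x/a) with supports at x = +/- span/2:
        # height at the supports minus height at the lowest point (x = 0).
        return a_m * (math.cosh((span_m / 2) / a_m) - 1)

    # Hypothetical values: 60 m between masts, catenary parameter a = 600 m.
    print(f"{catenary_sag(60.0, 600.0):.2f} m of sag")  # ~0.75 m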


Sounds like the answer is no.

"For trivial relocatability, we found a showstopper bug that the group decided could not be fixed in time for C++26, so the strong consensus was to remove this feature from C++26."

[https://herbsutter.com/2025/11/10/trip-report-november-2025-...]


I’m curious what that showstopper bug actually was.

I was really looking forward to this feature, as it would've helped improve Rust <-> C++ interoperability.



Thanks for the link!


MDL is also from MIT and supposedly stood for More Datatypes than Lisp. According to Wikipedia, "MDL provides several enhancements to classic Lisp. It supports several built-in data types, including lists, strings and arrays, and user-defined data types. It offers multithreaded expression evaluation and coroutines."

Seems that most of its novelties were eventually added into Lisp proper.


The "conflict-free" part of the name is misleading. The conflict "resolution" means having some deterministic algorithm such that all nodes eventually converge to the same state, but it won't necessarily mean that the end state looks like it's conflict-free to a human. The algorithm you choose to implement will determine what happens in the editing case imagined; various answers are possible, perhaps most of which would be classified as conflicting changes by a human who looked at the final result. The pitch for CRDTs is "we won't trouble you with the replication details and will eventually converge all the changes. The tradeoff is that sometimes we'll do the wrong thing."

That tradeoff is fine for some things but not others. There's a reason why git et al require human intervention for merge conflicts.

The article is doing a classic bait-and-switch: it starts with a motivating example, then dodges the original question without pointing out that CRDTs may be a very bad choice for collaborative editing. E.g. maybe they're bad for code and legalese but fine for company-issued blog posts.
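
To make "sometimes we'll do the wrong thing" concrete, here's a toy last-writer-wins register (my sketch, not from the article): both replicas converge deterministically, yet one author's edit silently disappears.

    class LWWRegister:
        # Toy last-writer-wins register; ties broken by node id.
        def __init__(self):
            self.value, self.stamp = None, (0, "")

        def set(self, value, clock, node_id):
            self.value, self.stamp = value, (clock, node_id)

        def merge(self, other):
            # Deterministic: every replica keeps the write with the larger stamp.
            if other.stamp > self.stamp:
                self.value, self.stamp = other.value, other.stamp

    a, b = LWWRegister(), LWWRegister()
    a.set("fix the typo in clause 4", 1, "alice")
    b.set("rewrite clause 4 entirely", 1, "bob")
    a.merge(b); b.merge(a)
    assert a.value == b.value        # converged...
    print(a.value)                   # ...but Alice's edit is simply gone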


I think people who haven't worked on problems like this have much higher expectations than people who have.

If you have worked on problems like this, you're very happy to converge on the same state and have no expectation that multiple concurrent editors will be happy with the result. Or even that one of them will be happy with the result.

You wouldn't use this in a situation like version control where you have to certify a state as being acceptable to one or multiple users.


To add on to that: the resolution is the same regardless of the order in which the nodes receive the information that led to the conflict, so there is no "out of sync." Your resolution strategy could involve treating the potential conflict as unresolved until a resolution element is created (but then you have to figure out what to do if you get more than one of those... it's conflicts all the way down!)
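
A trivial illustration of that order-independence (my example, not the parent's): a grow-only set's merge is plain set union, so every delivery order lands on the same state.

    from functools import reduce

    def merge(a, b):
        # G-Set CRDT merge: set union is commutative, associative, idempotent.
        return a | b

    updates = [{"x"}, {"y"}, {"x", "z"}]
    forward = reduce(merge, updates, set())
    backward = reduce(merge, list(reversed(updates)), set())
    assert forward == backward == {"x", "y", "z"}   # same state, any order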


Totally rad.

Now you just need an Oculus and you can turn yourself into Johnny Mnemonic.

https://www.youtube.com/watch?v=UzRjtvMQds4&t=63s


"Driving" via the starter motor turns it into an electric car!


"Difficult to tell from this vantage point if they will consume the captive Earth-men or merely enslave them...one thing is for certain: there is no stopping them; the ants will soon be here! And I for one welcome our new insect overlords. Like to remind them that as a trusted TV personality I could be helpful in rounding up others to toil in their underground sugar-caves."

https://www.youtube.com/watch?v=W4jWAwUb63c


It's interesting to note how the West discovered there had been a nuclear accident in the USSR when Chernobyl exploded. The Forsmark reactor in Sweden detected the fallout on the clothing of workers returning to the plant after lunch, IIRC.

I'm surprised this station seems to post-date that; it seems like it would have been handy to have during the Cold War. Then again, Russia has long had a mining presence on Svalbard, so maybe that has something to do with it.


Similarly, in 1984, a truck carrying unexpectedly radioactive steel rebar took a wrong turn into the entrance of Los Alamos National Laboratory and set off radiation detectors at the gate intended to detect radioactive contamination on workers leaving the site.

This triggered an investigation which traced the contamination to the improper disposal of the active element of a retired medical radiotherapy machine that used cobalt-60 - the radioactive cobalt ended up mixed with a large batch of other scrap metal, contaminating (among other things) ~6,600 tons of rebar, much of which had already been shipped at the time this was discovered...

https://en.wikipedia.org/wiki/Ciudad_Ju%C3%A1rez_cobalt-60_c...


Reminded me of this unfortunate incident. The answer to "How much contamination can a radiotherapy machine cause?" turns out to be: quite a bit. https://en.m.wikipedia.org/wiki/Goi%C3%A2nia_accident


> Russia has long had a mining presence on Svalbard

Not only a mining presence [1]: "After the war, the Soviet Union proposed common Norwegian and Soviet administration and military defence of Svalbard. This was rejected in 1947 by Norway, which two years later joined NATO. The Soviet Union retained high civilian activity on Svalbard, in part to ensure that the archipelago was not used by NATO."

[1] https://en.wikipedia.org/wiki/Svalbard#Second_World_War


Wouldn't "high civilian activity" refer primarily to miners?


Probably. Barentsburg, west from Longyearbyen, is predominantly a Russian mining village. Svalbard is interesting in that it is part of Norway but citizens of some other countries are granted more rights than they’d have in the rest of Norway, and Norway also is not allowed to operate its military from Svalbard.


I believe it's basically all countries. There was a deal with other countries through the UN.



In practice it's a visa-free zone. Anyone from anywhere can settle in Svalbard as long as they can get there. Of course, not many want to, so it's a bit academic.


There was/is also Pyramiden to the northeast, also a mining town, but it closed down in '98.


The first anomaly had actually been detected by the Finnish defence forces in Kajaani the previous day, Sunday evening at 8:40pm local time, but they didn't realize the small deviation was important and probably suspected a minor error in the measurements. Only on Monday at 10am did the Nuclear Safety Authority start to investigate properly, and it published information at 4pm - not early enough to let Forsmark know they wouldn't have needed to evacuate the plant as a precaution.


There were ways to detect nuclear incidents before then - the Vela satellites, for example - but they seem to have been tuned more for detecting nuclear bombs than for generalized fallout. Maybe others can speak more to this.


Yeah, the Vela sats could spot nuclear detonations, but didn't sniff for trace isotopes or anything like that. They were in way too high an orbit for any traces to make it to them anyhow.


The last of the Vela satellites were shut down in 1984 or 1985 (I've found conflicting sources on this) but in any case, before Chernobyl (April, 1986). They were replaced by other systems of course but as others have pointed out, those were never designed to detect fallout. Bhangmeters look for a characteristic double-flash of light from atmospheric nuclear detonations.


The bhangmeters on Vela satellites were to detect atmospheric detonations: they exploit some odd optical characteristics of the fireball.

They did have gamma, neutron, and X-ray detectors too but I’d guess those were also tuned to detect detonations rather than small background changes. That might not be feasible from so high up and it would square with the Velas’ role in discovering gamma-ray bursts.


Maybe they’re expecting more nuclear testing. Or responding to the recent news of the Russian drone attack that hit the Chernobyl containment shield.


It's cool that SSH is getting some love but I'm a little sad they're not being a little more ambitious with regard to new features, considering it seems like they're more or less creating a new thing. Looks like they're going to support connection migration but it would be cool (to me anyway) if they supported some of the roaming/intermittent connectivity of Mosh[1].

1: https://mosh.org/


One of the things I really like about Mosh is the responsiveness - there's no lag when typing text; it feels like you're really working in a local shell.

I'm guessing SSH3 doesn't do anything to improve that aspect? (although I guess QUIC will help a bit, but isn't quite the same as Mosh is it?)


AIUI connection migration (as well as multipath handling) is a QUIC feature. And how would that roaming feature differ from "built-in tmux"? I'm not sure the built-in part there would really be an advantage…


Mosh connections don't drop merely from Wi-Fi flipping around; the server sends its replies back to the address and port the last uplink packet came from. You can just keep typing, and a switch between Wi-Fi and mobile data (for example on a phone while sitting on public transit) shows up as merely a lag spike, during which typed characters are predictively echoed (and underlined) after an initial delay that avoids flicker from rapidly retracted or changed predictions during low-latency steady state.
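
The core roaming trick is simple enough to sketch (a toy illustration of the idea, not mosh's actual code - mosh authenticates each datagram with the session key before trusting a new source address):

    import socket

    # Toy "roaming" echo server: replies always go to whatever address/port
    # the most recent datagram arrived from, so a client that hops from
    # Wi-Fi to mobile data keeps working without re-establishing anything.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 60001))

    while True:
        data, client_addr = sock.recvfrom(2048)   # latest source wins
        sock.sendto(b"ack: " + data, client_addr)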

Mosh is like VNC or RDP for terminal contents: a natively variable frame rate plus somewhat adaptive predictive local echo to reduce perceived latency. Think client-side cursor handling in VNC; with RDP I'd even assume there might be capability for client-side text echo rendering.

If you haven't tried mosh in situations with a mobile device that have you experience connection changes during usage, you don't know just how much better it is than "mere tmux over ssh".

I honestly don't know of a more resilient protocol than mosh that's in regular usage, other than possibly link-layer 802.11n, aka "the Wi-Fi that got those 150/300/450 Mbit speed claims advertised onto the market", where link-layer retransmissions, adaptive negotiation of coding parameters, and actively multipath-exploiting MIMO-OFDM (plus AES crypto from WPA2) combine into a setup that hides radio interference from higher-level protocols, leaving only the unavoidable jitter of retransmissions and throughput that varies with radio conditions.

Oh, and if we're talking about finding computers rather than congestion-control schemes adjusting individual connection speeds, there's also BitTorrent with DHT and PEX, which only needs an infohash: with 160 bits of hash, a client seeded into the (mainline) DHT can go and retrieve a (folder of) files from an infohash-specific swarm that's at least partially connected to the DHT (PEX takes care of broadening connectivity among the peers that care about that specific infohash).

In the realm of digital coding schemes that are widely used but aren't of the "transmission" variety, there's also Red Book CD audio, which starts off with lossless error correction and then falls back to perceptually effective lossy interpolation to cover severe scratches on the disc's surface.


I'm not sure why you're explaining mosh (I know what it is and have used it before); I was asking what there is other than migration (= handled by QUIC) and resumption (= tmux).

Local line editing, I guess. Forgot about that.


I guess my question is: why bother with a benchmark if the pick is pre-ordained? Is it the case that at some point the results would be so lopsided that you would pick the faster solution? If so, what is that threshold? I.e. when does performance trump system simplicity? To me those are the interesting questions.


> why bother with a benchmark if the pick is pre-ordained

Validating assumptions

Curiosity/learning

Enraging a bunch of HN readers who were apparently born with deep knowledge of PG and Redis tuning

