That website seems to have no useful information, only marketing speak about how great it is... Do you know of a good source on how toroidal propellers work and the engineering behind them?

I didn't even see a picture of the propeller, if there was one. There was a giant, white, blank space.


Uhm... The article lacks quite a few citations.

On iOS Safari the videos are fullscreening themselves as I scroll. I've seen this on other blogs before, but I don't know what causes it. Super annoying.

Ugh, yeah, I had some super weird bugs like this in Safari; still haven't found the source :(

Don't quote me on this, but I think there is a "playsinline" / "webkit-playsinline" attribute for the video element that you need to add to avoid that, and if it autoplays you need to set "muted" too. I've also had this happen, and I think both/either of those solved it last time.
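Something like this, roughly (I'm going from memory on the exact attribute set, and the filename is just a placeholder):

  <!-- "playsinline" (plus the legacy "webkit-playsinline" for older iOS)
       keeps iOS Safari from forcing fullscreen playback, and autoplay is
       only allowed at all when the video is muted -->
  <video autoplay muted playsinline webkit-playsinline src="demo.mp4"></video>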

Remember: Advertisers cry with money.

They’ve proven themselves to be bad actors with no moral compass. No different from street drug dealers, casinos, traffickers, or any other predatory industry. They should’ve been regulated as such.

I don’t have any problem with old-timey “Dishsoap Brand Dishsoap sponsored this content. They want you to know that a dish isn’t clean unless it’s Dishsoap clean!” type ads. Much beyond that should no longer be tolerated.


> I don’t have any problem with old-timey “Dishsoap Brand Dishsoap sponsored this content. They want you to know that a dish isn’t clean unless it’s Dishsoap clean!” type ads. Much beyond that should no longer be tolerated.

I think the only advertising I've knowingly listened to was a Privacy.com sponsorship on The Modern Rogue. I've now been a paying customer for years, and they have been mostly great. I think that sponsorship was back in, like, 2015 or 2016. Oh, how times have changed.

(I'm sure there are thousands of subconscious influences that I have no idea about, though. Maybe a few radio ads put a brand in my mind for something so I didn't search for alternatives. I don't listen to broadcast radio anymore though.)


I think they had to revert to libc on macOS/iOS because those have syscall interfaces that truly are not stable (and golang found that out the hard way). I wonder if they had to do the same on BSDs because of syscall filtering.
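To make that concrete, here's a rough sketch (Linux-only) of the kind of raw, by-number syscall Go used to make everywhere, which is exactly the thing that isn't stable on macOS/iOS:

  package main

  import (
      "fmt"
      "syscall"
      "unsafe"
  )

  func main() {
      msg := []byte("written via raw syscall number\n")
      // Trap into the kernel by syscall number, bypassing libc entirely.
      // Linux keeps these numbers stable as ABI; macOS/iOS do not, which
      // is why Go now routes through libSystem there instead.
      _, _, errno := syscall.Syscall(syscall.SYS_WRITE, uintptr(1),
          uintptr(unsafe.Pointer(&msg[0])), uintptr(len(msg)))
      if errno != 0 {
          fmt.Println("write failed:", errno)
      }
  }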

Indeed, OpenBSD recently added hardening measures and started restricting the generic syscall interface to libc.

> Morg doesn't seem to be a word in English (though it is in Irish!), but it sounds like it should be.

Maybe because English also has 'morgue'.


And started over five hours ago.

> so developers will have to ditch memory-sucking frameworks

Since when have developers ever lowered hardware requirements? Prosumers will just cough up the extra money while casual users will continue to be left in the dust, like they have been for practically the last decade (or longer).


Just blow the right hardware fuses and secure boot will be forced with a key that doesn't (or can't) exist.

There is a subsection of that page that is more relevant: https://en.wikipedia.org/wiki/Tetrachromacy#Tetrachromacy_in...

I also wonder.


Wait, aren't multi-chip modules assembled out of chiplets?

Sort of - there's some distinction between "multi-chip" and "multi-chiplet". To some degree, though, it's true in the sense of: "A rose by any other name would still smell as sweet".

The explanation by 'kurthr provides some insight into why we don't use the term MCM for them today: the term would imply that they're not integrated as tightly as they actually are.

The best examples of MCMs are probably Intel’s Pentium D and Core 2 Quad. In these older "MCM" designs, the multiple chips were generally fully working chips on their own: they each had their own last-level cache (LLC), and they happened to be manufactured on the same lithography node. When a core on Die A needed data that a core on Die B was working on, Die B had to send the data off the CPU entirely, down the motherboard's Front Side Bus (FSB), into system RAM, and then Die A had to retrieve it from RAM.

IBM POWER4 and POWER5 MCMs did share L3 cache, though.

So parent was 'wrong' that "chiplets" were ever called MCMs. But right that "chips designed with multiple chiplet-looking-things" used to be called MCMs.

Today's 'chiplets' term implies that the pieces aren't fully functioning by themselves; they're more like individual "organs". Functionality like I/O, memory controllers, and LLC is split off and manufactured on separate wafers/nodes. In the case of memory controllers that might be a bit confusing, because back in the days of MCMs those weren't on the same silicon either, but rather a separate chip entirely on the motherboard, but I digress.

Also, MCMs lacked the kind of high-bandwidth, low-latency fabric for CPUs to communicate more directly with each other. For the Pentiums, the interconnect was an organic substrate (the usual green PCB material) with copper traces routed between the dies. For the IBMs, it was an advanced ceramic-glass substrate, which had much higher bandwidth than PCB traces but still required a lot of space to route all the copper traces (latency taking a hit) and generated a lot of heat. Today we use silicon for those interconnects, which gives exemplary bandwidth, latency, and heat performance.


> So parent was 'wrong' that "chiplets" were ever called MCMs. But right that "chips designed with multiple chiplet-looking-things" used to be called MCMs.

No, chiplets were called MCMs. IBM and others, as you noted, had chip(lets) in MCMs that were not "fully-functioning" by themselves.

> Also, MCMs lacked the kind of high-bandwidth, low-latency fabric for CPUs to communicate more directly with each other. For the Pentiums, the interconnect was an organic substrate (the usual green PCB material) with copper traces routed between the dies. For the IBMs, it was an advanced ceramic-glass substrate, which had much higher bandwidth than PCB traces but still required a lot of space to route all the copper traces (latency taking a hit) and generated a lot of heat. Today we use silicon for those interconnects, which gives exemplary bandwidth, latency, and heat performance.

This all just smells like revisionist history to make the name be consistent with previous naming.

IBM's MCMs had incredibly high-bandwidth, low-latency interconnects. Core <-> L3 is much more important and latency-critical than core+cache cluster <-> memory or <-> another core+cache cluster, for example. And IBM and others had silicon interposers, TSVs, and other very advanced packaging and interconnection technology decades ago too, e.g.,

https://indico.cern.ch/event/209454/contributions/415011/att...

The real story is much simpler. MCM did not have a great name, particularly in the consumer space, as CPUs, memory controllers, and other things consolidated onto one die, which was (at the time) the superior solution. Then reticle limits, yield equations, etc., conspired to turn the tables, and it has more recently come to be that multi-chip is superior (for some things), so some bright spark, probably from a marketing department, decided to call them chiplets instead of MCMs. That's about it.

As an aside, funnily enough, IBM used to (and may still), and quite possibly others, call various cookie-cutter blocks in a chip (e.g., a cluster of cores and caches, or a memory controller block, or a PCIe block) chiplets. From https://www.redbooks.ibm.com/redpapers/pdfs/redp5102.pdf: "The most amount of energy can be saved when a whole POWER8 chiplet enters the winkle mode. In this mode, the entire chiplet is turned off, including the L3".


> No, chiplets were called MCMs. IBM and others, as you noted, had chip(lets) in MCMs that were not "fully-functioning" by themselves.

I don't follow; you seem to be using "chiplet" to directly mean a multi-chip module, whereas I consider "chiplet" to be a component of a multi-chip module. An assembly of multiple chiplets would not itself be "a chiplet", but a multi-chip module. This is also why I don't follow why the term "chiplet" would replace the term "multi-chip module": to me, a multi-chip module is not even a chiplet; it's only built with chiplets.

Are chiplets ever more than a single die? Conversely, are there multi-chip modules of only a single die? At least one of these must be true for "chiplet" and "multi-chip module" even to overlap.


Sorry, I flubbed that -- I meant that what are now called chiplets, interconnected and packaged together, used to be called MCMs. A chip was always a single piece of silicon (aka a die), so chiplets used to just be called chips. There was never any rule that chips in an MCM were "standalone" or functional by themselves, like some seem to be saying; in fact, earlier computers used multiple chips for subsystems of a single CPU (https://en.wikipedia.org/wiki/POWER2 -- this thing had individual chips for IFU, LSU, ALU, FPU, and D$). There was never any rule that MCMs were not low-latency or high-bandwidth, or that they must have a particular type of interconnect or packaging substrate.

Advances in technology and changing economics always shift things around, so maybe chiplets are viable for different things now or will make sense for smaller production runs, etc., but that doesn't make them fundamentally different in a way that would keep them from being classified as MCMs, as the article seems to suggest. It literally is just the same thing it always was: multiple chips packaged up together with something that is not a standard PCB but is generally more specialized and higher-performing.


Putting aside the terminology, a lot of people would have you believe that chiplets started in 2017, but they existed for 20-30 years before that.
