
When I think about how long chips like the 6502 have still been in active use (almost 50 years now), it is hard to conceive of a world where there isn't a significant presence of x86 activity for the rest of my life.

The majority of 'the market' may go elsewhere, but for a gazillion reasons, x86 will not be disappearing for quite a while. At this point it would honestly surprise me if we didn't at least have high quality emulation available until the end of the human race as we know it.

Sure, we've probably lost most of the software ever written on it, but a whole lot of interesting artifacts from a key transition point for our species still remain locked up in this architecture.



Given that the new 128-core AMD server parts are on par with ARM in power efficiency and capable of more raw compute, it may even grow a bit.

I think there's lots of room for ARM, RISC-V, and x86_64 in the future. There are reasons to support any of them over the others. And given how well developer tools are gaining support across all of them, it may actually grow a lot. I think the downside is a lot of the secondary compute accelerators, such as what Intel is pushing and what the various ARM and RISC-V implementations include in practice.

The further you get from a common core, the more complex porting and cross-platform tooling become, even where the payoff in some areas is big. For example, working on personal/hobby projects on ARM boards that aren't a RPi is sometimes an exercise in frustration, with no mainline support at all.


> I think the downside is a lot of the secondary compute accelerators, such as what Intel is pushing and what the various ARM and RISC-V implementations include in practice.

I’m curious why this is a downside. The current trend in computing is that we’re long past the point of single-threaded compute. The first step was multi-processor and multi-core, and that will continue with more and more dedicated, specialized computing sub-processors. Energy prices are increasingly a major determining factor, as is the area needed for cooling. By having more separate sub-processors, you get both efficiency and an easier time cooling the parts.


The specialized sub-processors are implemented differently, aren't always available, and aren't available 1:1 with compute nodes. If you're offering, for example, cloud compute, you can offer 4 cores pretty easily... but if there are 2 specialized sub-processors, do you offer them, does access to them queue across all users/clients on that system, or do you just block them off and pretend they don't exist? With Zen 4c, by contrast, everything is general compute.

This means that, practically speaking, only one host on a server is really going to want/need these specialized sub-processors, which means more space/heat/power for a single user/service. That's probably fine for some things, but far from ideal. And this doesn't even get into software optimization and the alternative code paths needed where the hardware isn't available.

This gets far worse in the ARM space, where it seems every SoC does something different, which means it's often broken or unusable if you're running a mainline OS/kernel, and even then most software won't be optimized for it. At best, you can maybe play back 4K compressed video; at worst, you can't at all. I'm just speaking to the most common instance in that space, video compression, which is often built around closed drivers that mainline OSes (Ubuntu Server, Debian, etc.) don't have in the box, while the vendor only supports a single version of a distro fork with no upgrade path.


I'm not a hardware designer, so I'm quite possibly wrong.

My understanding is that the push for energy efficiency is not for cost reasons, but for performance and stability: past a certain power level, the chips just can't dissipate enough heat, especially on smaller nodes.

If AMD could double the performance at double the power, they would.

Cost is obviously a marketing/client-value thing too.


Plausibly we're headed for a world where feature-size decreases stall out but manufacturing improvements continue to lower the price of transistors over time. In a world like that, throwing in a few x86 cores might be worth it from a backwards-compatibility standpoint even if another ISA becomes dominant.

There are lots of complications to address there (strict x86 memory ordering versus loose ARM ordering, for instance), but I expect they're solvable.


IIRC, AMD's chiplet model could already combine x86 and ARM, for example. Not sure if any such beasts are in existence, but it should be possible.


Yeah, I'm actually thinking of something like a socket integrator buying an 8-core chiplet from AMD or Intel and connecting it via CXL or whatever to the ARM/RISC-V/GPU/tensor cores in other chiplets from different manufacturers.


A thing's future longevity can sometimes be predicted by how long it's already been around.



But not humans, regrettably.


It used to somewhat work for humans, until about the mid-19th century (in Europe, other places vary). A newborn was unlikely to reach an age of 5, a 5 year old had better chances but might still die from illness in childhood, but someone who reached an age of 20 could be expected to reach an age of 40, and if you reached 40 you might well make it to 80.


99 year old humans are considerably more likely to live to be 100 than 50 year old humans.


"future longevity" in this case means how many years are left, not how many years total.


As we learned from Terminator 2, the machines that eventually rise up to eradicate humanity will still be running some kind of 6502 derivative.


>chips like the 6502 have still been in active use (almost 50 years now)

Also 8051 cores can still be found in modern products


8051s are just now getting phased out as control cores of IP blocks, which is pretty wild.

The 32-bit ARM and RISC-V cores are small enough and easier to program.


ARM vs. 8051 is usually a licensing-fee thing. 8051 license terms are extremely generous compared to ARM's, as far as I know. RISC-V is being explored more readily, but its toolchains are not nearly as robust as ARM's or the 8051's.


Just a few years back, I checked out the datasheet for an IC I encountered in a (newly bought) USB card reader.

Turns out an 8051 core was included (IIRC clocked at ~30 MHz, to handle jobs like lighting the busy LED on card read/write ops, some bus arbitration/priority settings, power management, and the like).

Made total sense to encounter an ancient, 'fast', tiny 8-bit core there, even if it was unexpected.

There must be (and will be) an endless list of products including tiny CPU cores like that (eg., RFID tags come to mind).


Ah yes, the three essential building blocks of electronics: NAND, NOR, 8051.


> ... until the end of the human race as we know it.

I think this is the critical part. If humanity (as we know it) only lasts 10 more years, then sure x86 will still be around somewhere.

If we last a million years, it will probably be gone long before that. Even a thousand years from now, it will probably have been gone for a long time.


I'm reminded of the Vernor Vinge novel in which a character, hacking some fleet's automation hundreds of thousands of years in the future, casually mentions that the tech stack is so old that the system timestamp is still the Unix epoch.

“And via a million million circuitous threads of inheritance, many of the oldest programs still ran in the bowels of the Qeng Ho system. Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely…the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.”


Part of Vernor Vinge's "Zones of Thought" series, which began with "A Fire Upon the Deep" (that particular reference is from "A Deepness in the Sky").


I mean, a thousand years is very hard to imagine, as is how much will change in that time.

But the cynical operator in my head can only laugh. As a tech community, we are still running MS-DOS productively. Just wait: someone will run the door controls of our first spaceships on some x86 chip, or on some similar system you just need but that never gets time to be updated properly. Just wait: the new cruise-liner spaceship of the Milky Way Republic is going to run some x86 emulator for its window controls.


Maybe this is because we're mostly an Apple household, but a few months ago I realized the only x86 device my household owns is our NAS (and frankly it's the worst device we own). I was pretty weirded out when I figured that out.


Are 6502 chips still used? What's the application?


Skynet is due to start producing their T-800 line of Terminators in 2026, which will use a 6502-derived CPU.


Tamagotchis run on 6502-compatible chips, which makes me think it's used in other toys too.


What are the top embedded usage chips and what is their typical niche?


I wouldn't know. I just watched a talk a couple of years back about reverse engineering Tamagotchis, and they turned out to run on some 6502-compatible chip made by a company that, IIRC, sells mostly to toy makers.


I was surprised to learn a few months ago that 6502s are still in production, so there must be some use for them. Perhaps replacement parts for industrial equipment from the '70s through '90s?


> Are 6502 chips still used?

Largely, no. I'm sure there are a few out there, but it's unusual.

Embedded 8051 cores, on the other hand... we're probably never going to fully escape those.


8051 was once big in industrial applications like PLCs.

Somehow I doubt much has changed. In such applications, reliability and maturity of the hw/sw ecosystem matter much more than raw speed or the design innovations a competing architecture might bring to the table.

So 8051-based parts may see the occasional process shrink, the addition of new peripherals, or new IC packages, I/O pin counts, operating voltages, etc.

But I'd doubt any designer worth their salt would dare touch that core architecture unless their life depended on doing so. :-)


8051 is quite popular in new designs as a low-cost embedded controller. You don't see it as often as a discrete component [1]; it's more frequently embedded in more complex devices, e.g. as a controller for USB peripherals or even for startup sequencing in larger parts.

[1]: Although that is a thing too; there are a number of manufacturers, like Silicon Labs, with extensive lines of modern 8051-based microcontrollers.


I don't know anything about physical 6502s, but they've been embedded in FPGAs where you need a small MCU in a larger design. Same with Intel 8051s.


x86 is now permanently a part of humanity. 1000 years from now, when we've transcended our physical bodies and exist only as streams of sentient data and energy traveling between the stars, I 100% guarantee x86 will be detectable somewhere.


Stuff like the 6502 and Z80 is a bit like little single-cylinder engines: the world will move on to all sorts of interesting new places, but something somewhere will always be powered by a wee Briggs & Stratton that starts on the first pull of the string, and we'll be glad of it.


We'll always need tiny cores, but it's worth noting that RISC-V can squeeze down to pretty small sizes and is so much nicer to use. Notably you can go smaller than a 68k or 8086.


I suspect 32-bit RISC-V cores will become the minimum unit of processor for new designs. Not a meaningful cost increase over say a 6502 (or ARM) to build, but the convenience of having mainstream compiler support and that kind of thing does make a difference in the cost of building a product.


Fewer transistors also means less power. I could see applications where that matters.



