Hacker News | bjackman's comments

The title is written specifically for UK readers; the actual subject is the point of alignment leaving GB.

(Also for anyone who was confused by this, it's not about the poles, it's about the point where the bearings for all three norths are equivalent. So a "compass" would point in the same direction regardless of what kind of north it reported. Took me a moment to understand!)


To clarify, it was an alignment of True, Magnetic, and Grid North. It occurred at a point that happened to fall in England, had been travelling slowly north for the past few years, and now sits over water.

Magnetic North is the direction a compass points in a particular location, and it moves with shifts in Earth's magnetic field as well as local anomalies.

True North is parallel to the axis of Earth's rotation and moves as the Earth wobbles and sways like a slightly unbalanced spinning top.

Grid North follows the north-south grid lines of a map projection, which is "fixed" to a given geodetic reference frame. For the UK that would be OSGB36; GPS uses WGS84; other countries may adopt different systems.

All this means that an alignment of all three norths can occur at multiple places on Earth, or none at all.


> The title is written specifically for UK readers

Is it? The `.au` TLD might suggest otherwise.


Not having thought about it too hard, if the three norths all align at 2 degrees west longitude, wouldn't they also be aligned at 178 degrees east longitude, somewhere near the east coast of New Zealand?

Do the analogous "three souths" also have an alignment, and is it precisely opposite the north poles?


> Not having thought about it too hard, if the three norths all align at 2 degrees west longitude, wouldn't they also be aligned at 178 degrees east longitude, somewhere near the east coast of New Zealand?

There is no required equivalent 'three souths' alignment because the Earth's magnetic field is not a pure dipole. Higher-moment variations can cause essentially arbitrary (but small) deviations of magnetic north/south from its dipole approximation.


True. There may be Australians interested in otherwise-obscure islands.

Shouldn't this go without saying though? At some point someone has to review the code and they see a human name as the sender of the PR. If that person sees the work is bad, isn't it just completely unambiguous that the person whose name is on the PR is responsible for that? If someone responded "but this is AI generated" I would feel justified just responding "it doesn't matter" and passing the review back again.

And the rest (what's in the LLVM policy) should also fall out pretty naturally from this? If someone sends me code for review, and I have the feeling they haven't read it themselves, I'll say "I'm not reviewing this and I won't review any more of your PRs unless you promise you reviewed them yourself first".

The fact that people seem to need to establish these things as an explicit policy is a little concerning to me. (Not that it's a bad idea at all. Just worried that there was a need).


You would think it's common sense, but I've received PRs whose author didn't understand them and, when questioned, told me that the AI knows more about X than they do, so they trust its judgement.

A terrifying number of people seem to think that the damn thing is magic and infallible.


I opened the review and immediately ctrl-F'd "kernel". It said no upstream support so I closed the article.

I would never buy one of these things without upstream kernel support for the SoC and a sane bootloader. Even the Raspberry Pi is not great on this front TBH (kernel is mostly OK but the fucked up boot chain is a PITA, requires special distro support).


So what would you recommend for Arm that has good, proper support?

I feel like the Raspberry Pi has the most community support for everything, so I had the intuition that most things would just work out of the box on it, or that it would have the best Arm support (I assumed the boot chain to be part of that as well).

What do you mean by the boot chain being painful to work with, and can you provide some examples perhaps?


I would recommend x86.

Ok that's mostly a joke, I'm just not up to date on what platforms exist these days that are done properly. Back in my day the Texas Instruments platforms (BeagleBoard) were decent. I think there are probably Rockchip-based SBCs today (Pine64 maybe?) that add up to something sensible but I dunno.

The thing with the boot chain is that e.g. the Pi has a proprietary bootloader that runs via the GPU. You cannot just load a normal distro onto the storage; it needs to be a special build that matches the requirements of this proprietary bootloader. If your distro doesn't provide a build like that, well, hopefully you're OK with changing distro or ready to invest many hours getting your preferred distro working.

(Why only "mostly" a joke? I recently repurposed an old ThinkPad to use as a home server and it's fucking great. Idles under 4W, dramatically more powerful than a Pi 5, has proper UEFI and proper ACPI, and all the drivers work properly, including the GPU. Would cost about the same on eBay as a Pi. The only remaining reason I can see for an Arm board is if you're specifically interested in Arm or have very specific space constraints.)


At my last job, I found Toradex boards well-supported by Yocto. YMMV

Hm, if I may ask, what were they used for at your last job? To me they seem more enterprise-focused than indie-focused from a quick glance at their website.

I've had a similar case before but for a much more boring reason: a certain YouTube video somehow triggered a spike in power draw and caused my Google Pixel to reset.

Google's response after looking at the crash dumps: "WAI, your battery is degraded" (IIRC my phone was less than 3 years old).


Could be; after ~3 years, my Samsung Galaxy S7 would reset if I tried to make a call with battery below ~20%. I immediately knew it was the battery, because I still remember noticing it as a kid on Nokia 3410 - calling would sometimes drop the battery indicator by one bar, which would come back moments after call ended. That's how I learned about internal resistance and how battery capacity is measured :).

As for fixes in software, it's either treating it as WAI, or secretly throttling down the phone, like Apple did, for which they got accused of planned obsolescence. Neither choice is good (though actually informing the users would go a long way).


> main devs have moved on

How confident are you in this statement? I have no particular knowledge of Asahi. But I do know this narrative emerged about Rust-for-Linux after a couple of high-profile individuals quit.

In that case it was plainly bogus but this was only obvious if you were somewhat adjacent to the relevant community. So now I'm curious if it could be the same thing.

(Hopefully by now it's clear to everyone that R4L is a healthy project, since the official announcement that Rust is no longer "experimental" in the kernel tree).

I know Asahi is a much smaller project than R4L so it's naturally at higher risk of losing momentum.

I would really love Asahi to succeed. I recently bought a Framework and, while I am pretty happy with it in isolation... when I use my partner's M4 Macbook Air I just think... damn. The quality of this thing is head and shoulders above the rest of the field. And it doesn't even cost more than the competition. If you could run Linux on it, it would be completely insane to use anything else.


It's similarly bogus here. Early Asahi development tried to upstream as much as possible but ultimately still maintained a gigantic pile of downstream patches, which wasn't a sustainable model.

Most of current development is focused on reducing that pile to zero to get things into a tractable state again. So things continue to be active, but the progress has become much less visible.


M2 to M3 was a complete architectural change that will require a lot of reverse engineering. As far as I know no one is working on this. The M1/M2 work was largely the labor of love of one dev who has since moved on.

The project is still active and working to upstream the work of these devs. But as far as I know, no NEW reverse engineering is being done. Ergo, it’s a dead end.

Would be happy to be proven wrong.


Someone should create a minimal, nearly-headless macOS distribution (similar to the old hackintosh distros) that bootstraps just enough to manage the machine's hardware, with no UI, and fires up the Apple virtualization framework and a Linux VM, which would own the whole display.

For optimal battery life you need to tweak the whole OS stack for the hardware. You need to make sure all the peripherals are set up right to go into the right idle states without causing user-visible latency on wake-up. (Note that often just one peripheral being out of tune here can mess up the whole system's power performance. Also the correct settings here depend on your software stack). You need to make sure that cpufreq and cpuidle governors work nicely with the particular foibles of your platform's CPUs. Ditto for the task scheduler. Then, ditto for a bunch of random userspace code (audio + rendering pipeline for example). The list goes on and on. This work gets done in Android and ChromeOS.
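
On the Linux side, many of these knobs are at least inspectable from sysfs. A read-only sketch (paths are the standard cpufreq/cpuidle locations, but some VMs and platforms don't expose them, hence the fallbacks):

```shell
# Read-only peek at the cpufreq/cpuidle state of CPU0 on Linux.
# Standard sysfs paths; some VMs/platforms omit them entirely.
cpu=/sys/devices/system/cpu/cpu0
cat "$cpu/cpufreq/scaling_governor" 2>/dev/null || echo "no cpufreq exposed"
ls "$cpu/cpuidle" 2>/dev/null || echo "no cpuidle exposed"
```

Each `state*` directory under cpuidle has `name`, `latency`, and `usage` files if you want to dig into which idle states are actually being entered.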

It's a crazy drug because there are a lot of significant downsides to Nix[OS]. E.g. it took me a solid half an hour of focus to upgrade my config to 25.11[1]. Also, like, no Secure Boot. And I've had to reverse engineer a lot of stuff.

But like you said, I can't ever imagine going back. Once you're over the learning curve (and... yeah, that learning curve) the upsides are just so huge. Nothing compares, at all.

Part of me wonders if maybe one day a bootc-based framework will offer something like 20% of the benefits of NixOS with only 10% of the downsides. But other than that, we're totally stuck with Nix forever. (And once I had switched to bootc, I bet my next thought would be "I should find a way to generate this config from Nix"...).

[1] I have a very complex config so this may be an extreme case. On the other hand, everything about Nix is basically designed as an invitation to create an extremely complex config.


Gemini CLI has a specific "model is overloaded" error message which is distinct from "you're out of quota", so I suspect whatever tools they're using for this probably have something similar, and they're referring to that.

What, um... Are... Are people using samba to sync model weights between cluster nodes...?

Why not? SMB is no slouch. Microsoft has taken network storage performance very seriously for a long time now. Back in the day, Microsoft and others (NetApp, for instance) worked hard to extend and optimize SMB and deliver efficient, high-throughput file servers. I haven't kept up with the state of the art recently, but I know there have been long stretches where SMB consistently led the field in benchmark testing. It also doesn't hurt that Microsoft has a lot of pull with hardware manufacturers to see their native protocols remain tier 1 concerns at all times.

I think a lot of people have a hard time differentiating the underlying systems from what they _see_ and use it to bash MS products.

I heard that it was perhaps recently fixed, but copying many small files used to be multiple times faster via something like Total Commander vs the built-in File Explorer (large files go equally fast).

People seeing how slow Explorer was to copy would probably presume that it was a lower level Windows issue if they had a predisposed bias against Microsoft/Windows.

My theory about Explorer's sluggishness is that they added visual feedback to the copying process at some point, and for whatever reason that visual feedback is synchronous/slow (perhaps capped at the framerate, thus 60 files a second), whilst TC does the updating in the background and just renders status periodically while the copying thread(s) run at the full speed of whatever the OS is capable of under the hood.


I dunno about Windows Explorer, but macOS’ finder seems to hash completed transfers over SMB (this must be something it can trigger the receiver to do in SMB itself, it doesn’t seem slow enough for the sender to be doing it on a remote file) and remove transferred files that don’t pass the check.

I could see that or other safety checks making one program slower than another that doesn’t bother. Or that sort of thing being an opportunity for a poor implementation that slows everything down a bunch.


A problem with Explorer, which it also shares with macOS Finder[1], is that they are very much legacy applications with features piled on top. Explorer was never expected to be used for heavy I/O work and tends to do things the slowest way possible, including doing things in ways that are optimized for "random first-time user of Windows 95 who will have maybe 50 files in a folder".

[1] Finder has parts that show continued use of code written for MacOS 9 :V


This blows my mind. $400B in annual revenue and they can't spare the few parts per million it would take to spruce up the foundation of their user experience.

This is speculation based on external observation, nothing internal other than rumours:

A big chunk of that (increasing over the last decade) is fear that they will break compatibility, or an otherwise shrinking pool of shared knowledge. To the point that the more critical the part, the less anyone wants to touch it (I heard that ntfs.sys is essentially untouchable these days, for example).

And various rules that used to be sacrosanct are no longer followed, like the "main" branch of the Windows source repository having to build cleanly every night (fun fact: Microsoft is one of the origins of nightly builds as a practice).


It's probably a vicious cycle.

Fewer people are trusted to touch ntfs.sys due to lack of experience, so they never gain it, which means less work gets done, which in turn means even fewer people have proved themselves trustworthy enough to work on it.

Until nobody remains in the company that is trusted enough.


> to bash MS products.

Microsoft gives them a lot of ammo. While, as I said, Microsoft et al. have made sure that SMB is indeed efficient, at the same time security has been neglected to the point of being farcical. You can see this in headlines as recent as last week: Microsoft is only now, in 2025, deprecating RC4 authentication, and this includes SMB.

So while one might leverage SMB for high throughput file service, it has always been the case that you can't take any exposure for granted: if it's not locked down by network policies and you don't regularly ensure all the knobs and switches are tweaked just so, it's an open wound, vulnerable to anything that can touch an endpoint or sniff a packet.


Agreed, but that used to be the difference between MS and Google.

MS would bend over backwards to make sure those enterprise Windows 0.24 boxes can still connect to networks because they run some 16-bit drivers for CNC machines.

Meanwhile Google decided to kill a product the second whoever introduced it on stage walked off it.

Azure is a money-maker for MS, and wouldn't be so without those weird legacy enterprise deployments. The big question is whether continuing to increase their posture about security together with a "cloud" focus is actually in their best interest, or if retaining those legacy enterprises would have been smarter.


Plenty of other workloads benefit from high-performance file access, and with network speeds and disk speeds climbing while single-core perf has more or less plateaued in comparison, it's more and more important to support data paths where kernel switching won't become a bottleneck.

Dunno but I have used samba to load model weights from my NAS

I work in CPU security and it's the same with microarchitecture. You wanna know if a machine is vulnerable to a certain issue?

- The technical experts (including Intel engineers) will say something like "it affects Blizzard Creek and Windy Bluff models"

- Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this you can only find it out by actually booting one up).

- The spec sheet for the hardware calls it a "Xeon Osmiridium X36667-IA"

Absolutely none of these forms of naming have any way to correlate between them. They also have different names for the same shit depending on whether it's a consumer or server chip.

Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.

Usually I just ask the LLM and accept that it's wrong 20% of the time.
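
For what it's worth, the one identity that does survive across errata sheets is the raw family/model/stepping tuple. A sketch of pulling it out of /proc/cpuinfo-formatted text (field names are the ones Linux prints on x86; `model` intentionally also matches `model name`):

```shell
# Print the identifying fields from the first processor block of
# cpuinfo-formatted text on stdin.
# Live use: cpu_identity < /proc/cpuinfo
cpu_identity() {
  # keep only the first processor block, then filter identity fields
  sed -n '1,/^$/p' | grep -E '^(vendor_id|cpu family|model|stepping)'
}
```

Then it's on you to match that tuple against whatever errata list or microcode release notes you're reading, because nothing else will do it for you.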


> - Intel's technical docs will say "if CPUID leaf 0x3aa asserts bit 63 then the CPU is affected". (There is no database for this you can only find it out by actually booting one up).

I'm doing some OS work at the moment and running into this. I'm really surprised there's no caniuse.com for CPU features. I'm planning on requiring support for all the features that have been in every CPU that shipped in the last 10+ years. But it's basically impossible to figure that out. Especially across Intel and AMD. Can I assume APIC? IOMMU stuff? Is ACPI 2 actually available on all CPUs, or do I need to support the old version as well? It's very annoying.
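
At least for the machine in front of you, the kernel's decoded view of CPUID is easy to query. A tiny sketch (flag names are the kernel's, e.g. `apic`, `acpi`, `la57`, not Intel's leaf/bit numbering):

```shell
# True if the given kernel flag appears in cpuinfo-formatted text on
# stdin ("flags" on x86, "Features" on Arm).
# Live use: has_cpu_flag apic < /proc/cpuinfo
has_cpu_flag() {
  grep -E '^(flags|Features)' | grep -qw "$1"
}
```

It doesn't solve the "every CPU from the last 10 years" problem, but it's handy for spot-checking the boxes you can actually reach.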


Even more fun is that some of those (IOMMU and ACPI version) depend on motherboard/firmware support. Inevitably there is some bargain-bin board for each processor generation that doesn’t support anything that isn’t literally required for the CPU/chipset to POST. For userspace CPU features the new x86_64-v3/v4 profiles that Clang/LLVM support are good Schelling points, but they don’t cover e.g. page table features.

Windows has specific platform requirements they spell out for each version - those are generally your best bet on x86. ARM devs have it way worse so I guess we shouldn’t complain.


At least on Arm you can get TRMs (technical reference manuals) or data sheets that cover all of the features of a specific processor, and also the markings on the chip that differentiate it from other models within the same family.

I'm pretty sure the number of people at Intel who can tell you offhand the answer to your questions about only Intel processors is approximately zero, give or take a couple. Digging would be required.

If you were willing to accept only the relatively high power variants it’d be easier.


I'd be happy to support the low power variants as well, but without spending a bunch of money, I have no idea what features they have and what they're missing. It's very annoying.

For anyone not familiar with caniuse, it's indispensable for modern web development. Say you want to put images on a web page. You've heard of WebP. Can you use it?

https://caniuse.com/webp

At a glance you see the answer: 95% of global web users use a web browser with WebP support. It's available in all the major browsers, and has been for several years. You can query basically any browser feature like this to see its support status.


That initial percentage is a little misleading. It includes everything that caniuse isn't sure about. Really it should be something like 97.5±2.5 but the issue's been stalled for years.

Even the absolute most basic features that have been well supported for 30 years, like the HTML "div" element, cap out at 96%. Change the drop-down from "all users" to "all tracked" and you'll get a more representative answer.


> I’m planning on requiring support for all the features that have been in every cpu that shipped in the last 10+ years. But it’s basically impossible to figure that out.

The easiest thing would probably be to specify the need for "x86-64-v3":

* https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

RHEL9 mandated "x86-64-v2", and v3 is being considered for RHEL10:

> The x86-64-v3 level has been implemented first in Intel’s Haswell CPU generation (2013). AMD implemented x86-64-v3 support with the Excavator microarchitecture (2015). Intel’s Atom product line added x86-64-v3 support with the Gracemont microarchitecture (2021), but Intel has continued to release Atom CPUs without AVX support after that (Parker Ridge in 2022, and an Elkhart Lake variant in 2023).

* https://developers.redhat.com/articles/2024/01/02/exploring-...
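
A rough runtime check is possible from the kernel's flag list. This sketch uses the v3 feature set from the psABI document, spelled the way Linux spells the flags (`abm` covers LZCNT, and `xsave` stands in for OSXSAVE here, which is an approximation):

```shell
# Report whether every x86-64-v3 flag is present in cpuinfo-formatted
# text on stdin. Live use: check_v3 < /proc/cpuinfo
check_v3() {
  flags=$(grep -m1 '^flags' || true)
  missing=""
  for f in avx avx2 bmi1 bmi2 f16c fma abm movbe xsave; do
    case " $flags " in
      *" $f "*) ;;                      # flag present
      *) missing="$missing $f" ;;       # flag absent
    esac
  done
  if [ -z "$missing" ]; then
    echo "x86-64-v3: OK"
  else
    echo "x86-64-v3: missing$missing"
  fi
}
```

Recent glibc will also print which level the machine reaches if you run the dynamic loader with `--help`, but parsing /proc/cpuinfo works on anything.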


> The easiest thing would probably to specify the need for "x86-64-v3"

AFAIK, that only specifies the user-space-visible instruction set extensions, not the presence and version of operating-system-level features like APIC or IOMMU.


RHEL10 has been released and does require x86-64-v3.

https://access.redhat.com/solutions/7066628


Even defining "shipped in the last 10 years" is tricky - because does that mean released, or final shipment from the factory, or something else?

You're often better off picking a subset of CPU features you want to use and then sampling to see if it excludes something important.


> then sampling to see if it excludes something important.

But how? That’s the question.


This is unfortunately the same for GPUs. The graphics APIs expose capability bits or extensions indicating what features the hardware and driver supports, but the graphics vendors don't always publish documentation on what generations of their hardware support various features, so your program is expected to dynamically adapt to arbitrary combinations of features. This is no longer as bad as it used to be due to consolidation in the graphics market, but people still have to build ad-hoc crowd sourced databases of GPU caps bits.

It's also not monotonic; on both CPU and GPU sides, features can go away later, either because of a hardware bug or because the vendor lost interest in supporting them.


CPU Monkey had some neat info like whether a CPU had AV1 hwdec/hwenc, then they redesigned their site and that info is gone for some reason. I think it was a year or less between finding their site and them ruining it.

https://web.archive.org/web/20250616224354/https://www.cpu-m...

https://www.cpu-monkey.com/en/cpu-amd_ryzen_7_pro_8840u

A nice reminder to stick any page you find useful in the wayback machine and/or save a local copy.


I feel like it's a cultural thing with the designers. Ceragon were the exact same when I used to do microwave links. Happy to provide demo kit, happy to provide sales support, happy to actually come up and go through their product range.

But if you want any deep and complex technical info out of them, like oh maybe how to configure it to fit UK/EU regulatory domain RF rules? Haha no chance.

We ended up hiring a guy fluent in Hebrew just to talk to their support guys.

Super nice kit, but I guess no-one was prepared to pay for an interface layer between the developers and the outside world.


I have three Ubuntu servers and the naming pisses me off so much. Why can't they just stick with their YY.MM naming scheme everywhere? Instead, they mostly use code names, and I never know what codename I am currently using or what the latest codename is. When I have to upgrade or find a specific Python PPA for whatever OS I am running, I need to research for 30 minutes to correlate all these dumb codenames to the actual version numbers.

Same with Intel.

STOP USING CODENAMES. USE NUMBERS!


As an Apple user, the macOS code names stopped being cute once they ran out of felines, and now I can't remember which of Sonoma or Sequoia was first.

Android have done this right: when they used codenames they did them in alphabetical order, and at version 10 they just stopped being clever and went to numbers.


Ubuntu has alphabetical order too, but that's only useful if you want to know if "noble" is newer than "jammy", and useless if you know you have 24.04 but have no idea what its codename is.

Android also sucks for developers because they have the public-facing numbers and then API levels, which are different and don't always scale linearly (sometimes there is something like "Android 8.1" or "Android 12L" with a newer API). As a developer you always deal with the API numbers (you specify the minimum API version in your code, not the minimum "OS version"), and then have to map that back to the version numbers the users and managers know, to present it to them when you're upping the minimum requirements...
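
For the Ubuntu half of this, I ended up keeping a lookup by hand (the `distro-info` package automates this properly; this sketch covers recent LTS releases only):

```shell
# Hand-maintained number -> codename map for recent Ubuntu LTS releases.
ubuntu_codename() {
  case "$1" in
    18.04) echo bionic ;;
    20.04) echo focal ;;
    22.04) echo jammy ;;
    24.04) echo noble ;;
    *) echo unknown ;;
  esac
}
```

Which is exactly the kind of table nobody should have to maintain by hand, but here we are.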


> Ubuntu has alphabetical order too, but that's only useful if you want to know if "noble" is newer than "jammy"

Well, it was until they looped.

Xenial Xerus is older than Questing Quokka. As someone out of the Ubuntu loop for a very long time, I wouldn't know what either of those mean anyway and would have guessed the age wrong.


Yes, I agree, codenames are stupid, they are not funny or clever.

I want a version number that I can compare to other versions, to be able to easily see which one is newer or older, to know what I can or should install.

I don't want to figure out and remember your product's clever nicknames.


They can't. They used to, until they tried to patent 586...

Trademark.

Protip, if you have access to the computer: `lsb_release -a` should list both release and codename. This command is not specific to Ubuntu.

Finding the latest release and codename is indeed a research task. I use Wikipedia[1] for that, but I feel like this should be more readily available from the system itself. Perhaps it is, and I just don't know how?

[1] https://en.wikipedia.org/wiki/Ubuntu#Releases


> Protip, if you have access to the computer: `lsb_release -a` should list both release and codename. This command is not specific to Ubuntu.

I typically prefer

  cat /etc/os-release
which seems to be a little more portable / likely to work out of the box on many distros.

That's only if the distro is recent enough; sooner or later, you'll encounter a box running a distro version from before /etc/os-release became the standard, and you'll have to look for the older distro-specific files like /etc/debian_version.

> you'll encounter a box running a distro version from before /etc/os-release became the standard

Do those boxes really still exist? Debian, which isn't really known as the pinnacle of bleeding edge, has had /etc/os-release since Debian 7, released in May 2013. RHEL 7, the oldest Red Hat still in extended support, also has it.


> the oldest Red Hat still in extended support, also has it.

You would be alarmed to know how long the long tail is. Are you going to run into many pre-RHEL 7 boxes? No. Depending on where you are in the industry, are you likely to run into some ancient RHEL boxes, perhaps even actual Red Hat (not Enterprise) Linux? Yeah, it happens.


> Do those boxes really still exist?

Yes, they do. You'll be surprised by how many places use out-of-support operating systems and software (which were well within their support windows when installed, they have just never been upgraded). After all, if it's working, why change it? (We have a saying here in Brazil "em time que está ganhando não se mexe", which can be loosely translated as "don't change a (soccer) team which is winning".)


Try cat /etc/os-release. The codenames are probably there. I know they are for Debian.

Thank you! I was just about to kvetch about how difficult it was to map (e.g.) "Trixie" == "13" because /etc/debian_version didn't have it... I always ended up having to search the internet for it, which seemed especially dumb for Debian!
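
Turns out the mapping is sitting right in that file on modern Debian: the VERSION field carries both halves, e.g. VERSION="13 (trixie)". A tiny extractor sketch:

```shell
# Print the VERSION field ("13 (trixie)" style) from os-release
# formatted text on stdin. Live use: version_field < /etc/os-release
version_field() {
  sed -n 's/^VERSION=//p' | tr -d '"'
}
```

No internet search required, at least for anything recent enough to ship os-release.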

Same problem I have with Debian.

At least Fedora just uses a version number!


I like to think that Buster, Bullseye, and Bookworm were a ploy to make people more dependent on the version number.

I work with Debian daily and I still couldn't tell you what order those go in. But Debian 12, Debian 13, etc. is perfectly easy to remember and search for.

Debian is trying hard to switch to numbers. It's the user base that is resisting the change.

Maybe they should stop symlinking the new versions after 14, because AFAIK they already tried everything else.


Yeah if they just stopped using a release name that'd probably do it, although communities can be surprisingly stubborn on some things.

> AMD's part numbers contain a digit that increments with each year

Aha, but which digit? Sure, that's easy for server, HEDT and desktop (it's the first one) but if you look at their line of laptop chips then it all breaks down.


Lol no, for servers (Epyc) it is the last digit. Why? Who knows, to make it more confusing I guess.

I admit it took me until the 4th gen Epyc to realize this. I laughed out loud at myself/the numbering scheme.

Well yeah Epyc is little endian

Oh, the Xeons have the vX vs vY nonsense, where the same number with a different version is an entirely different CPU (the 2620 v1 and v2 are different microarchitecture generations and core counts). But, not to leave AMD out, they do things like the Ryzen 7000 series, which are Zen 4 except for the models that are Zen 2 (!). (Yes, if you read the middle digits there's some indication, but that's not that helpful for normal customers.)

That's been the case with hardware at several companies I was at.

I was convinced that the process was encouraged as a sort of weird gatekeeping by folks who only used the magic code names.

Even better, I worked at a place where they swapped code names between two products at one point... it wasn't without any reason, but it meant that a lot of product documentation suddenly conflicted.

I eventually only referred to exact part numbers and model numbers and refused to play the code name game. This turned into an amusing situation where some managers who only used code names were suddenly silent, as they clearly didn't know the product/part to code name mapping.


You can correlate microarchitecture to product SKUs using the Intel site that the article links. AMD has a similar site with similar functionality (except that AFAIK it won't let you easily get a list of products with a given uarch). These both have their faults, but I'd certainly pick them over an LLM.

But you're correct that for anything buried in the guts of CPUID, your life is pain. And Intel's product branding has been a disaster for years.


> You can correlate microarchitecture to product SKUs using the Intel site that the article links.

Intel removed most things older than Sandy Bridge in late 2024 (a few Xeons remain, but AFAIK anything consumer was wiped with no warning). It's virtually guaranteed that Intel will remove more stuff in the future.


Also, technically the code names are only for unreleased products, so on ARK it'll say "products formerly Ice Lake", but Intel will continue to call them Ice Lake.

These have been my go-to for a while now:

https://en.wikipedia.org/wiki/List_of_Intel_Core_processors

https://en.wikipedia.org/wiki/List_of_Intel_Xeon_processors

It doesn't have the CPUID but it's a pretty good mapping of model numbers to code names and on top of that has the rest of the specs.


I hear you.

Coincidentally, if anyone knows how to figure out which Intel CPUs actually support 5-level paging / the CPUID flag known as la57, please tell me.
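
So far the only check I trust is asking the CPU itself; on Linux the kernel decodes the CPUID bit into a flag name:

```shell
# la57 = 57-bit linear addresses (5-level paging). The kernel only sets
# the flag when the CPU advertises it (and it can be masked by boot
# options like no5lvl), so this answers for the machine you're actually
# on -- there's no offline SKU database I know of.
if grep -qw la57 /proc/cpuinfo; then
  echo "la57: yes"
else
  echo "la57: no"
fi
```

If anyone does know of an authoritative SKU-level table, that would still be much better than booting hardware to find out.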


> Absolutely none of these forms of naming have any way to correlate between them.

I've found that, as of a ~decade ago at least, ark.intel.com had a really good way to cross-reference among codenames / SKUs / part numbers / feature set/specs. I've never seen errata there, but it might be. Also, I haven't used it in a long time so it could've gotten worse.


Intel do have a website where you can look up SKUs. If you wait long enough and exploit certain bugs in the JS you can get it to give you a bunch of CSV files.

Now the only issue you have is that there is no consistent schema between those files so it's not really any use.


> Meanwhile, AMD's part numbers contain a digit that increments with each year but is off-by-one with regard to the "Zen" brand version.

Under the Ryzen 7000 series (https://en.wikipedia.org/wiki/Ryzen#Mobile_6) you could get Zen 2, Zen 3, Zen 3+, or Zen 4.


I also found the same thing a decade ago: apparently lots of features (e.g. specific instructions, the iGPU) are broadly advertised as belonging to a specific arch, but Pentium/Celeron models (or, for the premium stuff, non-Xeon models) often lack them entirely, and the only way to detect that is lscpu / feature bits / digging in UEFI settings.

> "it affects Blizzard Creek and Windy Bluff models"

"Products formerly Blizzard Creek"

WTF does that even mean?


Intel doesn't like to officially use codenames for products once they have shipped, but those codenames are used widely to delineate different families (even by them!), so they compromise with the awkward "products formerly x" wording. Have done for a long time.

I wouldn't mind them coming up with better codenames anyway. "Some lower-end SKUs branded as Raptor Lake are based on Alder Lake, with Golden Cove P-cores and Alder Lake-equivalent cache and memory configurations." How can anyone memorize this endless churn of lakes, coves and monts? They could've at least named them in alphabetical order.

AMD does this subterfuge as well. Put Zen 2 cores from 2019 (!) in some new chip packaging and sell it as Ryzen 10 / 100. Suddenly these chips seem as fresh as Zen 5.

It's fraud, plain and simple.


The entire point of code names is that you can delay coming up with a marketing name. If the end user sees the code name then what is even the point? Using the code name in external communication is really really dumb. They need to decide if it should be printed on the box or if it's only for internal use, and don't do anything in between.

The problem, especially at Intel, but also at AMD, is that they sell very different CPUs under approximately identical names.

In the very distant past, AMD published what the CPUID instruction would return for each CPU model that they were selling. This is no longer true, so you have to either buy a CPU to discover what it really is, or hope that a charitable soul who has bought one has published the result on the Internet.
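And even when you do get the CPUID signature, decoding it is its own small adventure: the family and model fields in leaf 1's EAX need the extended-family/extended-model fixups described in the Intel SDM. A sketch of that decode (0x906EA is one published Coffee Lake signature):

```python
def decode_signature(eax: int):
    """Decode (family, model, stepping) from CPUID leaf 1 EAX, applying
    the extended-family/extended-model rules from the Intel SDM."""
    stepping = eax & 0xF
    model = (eax >> 4) & 0xF
    family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF
    if family == 0xF:
        family += ext_family          # extended family only applies to 0xF
    if family in (0x6, 0xF):
        model |= ext_model << 4       # extended model applies to 6 and 0xF
    return family, model, stepping

# Coffee Lake: family 6, model 0x9E (158), stepping 10
print(decode_signature(0x906EA))  # → (6, 158, 10)
```

Which gets you family/model/stepping, but as noted below, even that doesn't pin down packaging variants or fused-off features.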

Without access to the CPUID information, the next best thing is to find, on the Intel Ark site, whether the CPU model you see listed by some shop is described, for instance, as belonging to "Products formerly Arrow Lake S", as that will at least identify the product microarchitecture.

This is still not foolproof, because the products listed as "formerly ..." may still be packaged in several variants and they may have various features disabled during production, so you can still have surprises when you test them for the first time.


So they should put it on the box. In small font on the back if necessary, but make it an official part of the spec sheet - don't pretend it's irrelevant.

Product lines are in design and development for years (two years is lightning fast), and code names can be found for things five or more years before release, so everyone who works with them knows the code names better (much better) than the retail names.

It means Intel M14 and M15 base designs. Except they don't use numbers.

AMD distributed a decoding wheel for their laptop chip naming this year.
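In the spirit of that wheel, the scheme is simple enough to put in code: for the 2023-era mobile parts the first digit is the model year and the third digit is the Zen generation. A rough decoder (digit assignments recalled from AMD's published wheel, so treat as approximate):

```python
# Decode AMD's 2023 mobile model-number scheme, e.g. "7840" or "7320".
YEAR = {"7": "2023", "8": "2024"}
ARCH = {"1": "Zen/Zen+", "2": "Zen 2", "3": "Zen 3/Zen 3+", "4": "Zen 4"}

def decode_mobile(model: str):
    """Map a 4-digit AMD mobile model number to (year, Zen generation)."""
    return YEAR.get(model[0], "?"), ARCH.get(model[2], "?")

print(decode_mobile("7840"))  # → ('2023', 'Zen 4')
print(decode_mobile("7320"))  # → ('2023', 'Zen 2')
```

Note the second example: a "7000-series" part that's actually Zen 2 silicon from 2019, which is exactly the complaint upthread.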

Do you just have banks of old CPUs from every generation to test against?

Nope. Recently I had to use my company card to buy an ancient mini-PC from eBay just so I could get access to a certain Skylake model

  - sSpec S0ABC                   = "Blizzard Creek" Xeon type 8 version 5 grade 6 getConfig(HT=off, NX=off, ECC=on, VT-x=off, VT-d=on)=4X Stepping B0  
  - "Blizzard Creek" Xeon type 8 -> V3 of Socket FCBGA12345 -> chipset "Pleiades Mounds"   
  - CPUID leaf 0x3aa              = Model specific feature set checks for "Blizzard Creek" and "Windy Bluff(aka Blizzard Creek V2)"  
  - asserts bit 63                = that buggy VT-d circuit is not off  
  - "Xeon Osmiridium X36667-IA"   = marketing name to confuse specifically you(but also IA-36-667 = (S0ABC|S9DFG|S9QWE|QA45P))  
disclaimer: the above is all made up and I don't work at any of the relevant companies
