I think there were a good number of us who recognized that Intel was a bad fit for Apple. The lackadaisical progress of PowerPC simply forced the issue, and Intel was the only real option at the time.
Intel is like Mike Brady with his architectural designs that all look suspiciously like his own house.
They refused to compete with themselves and kept x86 32-bit so they could promote Itanic, and therefore lost the lead to AMD for years (it wasn't just 64-bit - actively REDUCING instructions per clock with NetBurst was... well, legendary - just in a bad way).
It just so happened that the kick in the ass Intel got from AMD came a few years before Apple needed Intel, so Intel had finally started trying enough that they had a product line that would work for Apple.
But really, is Intel suitable for low power? Could anyone seriously imagine an x86-based phone? Their one-hit wonder is only barely keeping up with AMD and ARM when Intel throws hundreds of watts at their chips and turbo-clocks the heck out of them. Even though Ryzen has been showing up Intel for years, they've floundered so long that there was practically zero chance Apple could stay with them long term.
But even in 2005, Intel wasn't necessarily good - they just happened to be the least bad right then.
I think the move to Intel was critical for the revival of the Mac because it gave Windows users an off-ramp if they wanted to try out the Mac hardware without going all-in with OSX. Boot Camp made it safe for Windows users to switch, and many did.
It got me to buy my first Mac, at least (iMac). I figured if OSX didn’t work out I could just run Windows on it.
Maybe ironically, the new ARM Macs make me a little hesitant to buy a new MacBook because I’d be pretty much locked in to OSX (with all due respect to the Asahi folks who are doing great work - I fear Apple is going to pull the rug out from under them though.)
The people on the Asahi team have explained this in great depth, but basically everything before macOS (the bootloader, etc.) is capable of handling a different OS. However, I would dispute their and OP's interpretation that Apple "supports" other OSes. It's more of a "fitted for but not with": Apple leaves the technical possibility open but does nothing actively to help anyone trying. The Asahi team has had to reverse engineer pretty much everything from scratch with almost no documentation, which is an amazing feat - hats off to them. But claiming Apple supports what they're doing is a stretch, and there's no reason Apple wouldn't just pull the rug out from under them - they've never said it's OK (as with Boot Camp) to run Linux on Macs.
> "Okay, it's been over a year, and it's time to end the nonsense speculation."
> "I have heard from several Apple employees that:"
> "1. The boot method we use is for 3rd-party OSes, and Apple only use it to test that it works, because"
> "2. It is policy that it works."
> "Hacker News peanut gallery, you can drop the BS now. It's not an "assumption" that this stuff exists for 3rd-party OSes. It couldn't "be something internal Apple uses that could go away any minute". That is not how it works, it never was, and now I'm telling you it's official."
> "And this isn't even news because @XenoKovah (who invented and designed this entire Boot Policy 3rd party OS mechanism) already tweeted about this whole thing a long time ago, but apparently it needs to be restated."
This is conjecture with some wishful thinking. Apple providing the possibility of other OSes, and "inviting" Microsoft to port Windows, does not mean they want people running whatever OS they like on MacBooks. And they do nothing to help anyone write drivers for the Mac's numerous proprietary devices; everything has to be reverse engineered.
If Microsoft doesn't port Windows on ARM to Macs, Apple might decommission those "core technologies". Even if it does, there's nothing stopping Apple from changing its mind down the line, as it already has on other topics, whatever the intentions of the developers who built them.
Apple has implemented per-OS security, meaning you can have a complete chain of trust for one OS alongside an untrusted second install of another OS. No PC has that, and it's the core idea that makes porting Linux possible. I don't see how such a useful technology for testing insecure versions of macOS would be removed. Apple backs few technologies, but when it does, it doesn't often change its mind. The fact of the matter is that Apple benefits from the access it gives Linux; it won't remove it thoughtlessly.
Ultimately it leads to a discussion about the competition. When you buy an ARM based Surface, can you put Linux on it? Is Microsoft clear you can? Are they providing drivers?
The horse would be Apple putting out a public statement that they want Linux on Macs. A few comments from developers saying they built this on purpose so that other OSes could boot on Macs points the same way, but it's not even close to that.
Marcan, in case you are not aware, is one of the people behind Asahi. The only speculation here is yours. Parties directly involved have said otherwise. Hector Martin appears to have given up posting here exactly because of this kind of bullshit. Besides, even if Apple did come out and say it, you'd still trot this nonsense out.
Marcan is directly involved in the reverse engineering. He is not involved in Apple's decision making on what they want to allow on their precious platform, which they tend to really lock down in every possible way.
> Besides, even if Apple did come out and say it, you'd still trot this nonsense out.
No. Apple saying, unofficially, that they welcome Microsoft is one thing; Apple coming out and saying they love Linux and want it on Macs would be another. Even that would be no guarantee that they won't change their mind, or that they aren't just hypocrites, but it would still be more meaningful than "technically it's possible, but there is nothing official" (the only official welcome went to MS Windows - but sure, Apple absolutely wants Linux on the Mac to be a thing).
Nothing. It's just bullshit naivety: apparently they haven't locked down some aspect of the Mac yet, like they did with the iDevices, so we're supposed to believe this magnanimity from Apple is in "support" of other OSes.
I owned the first gen white MacBook and the presence of that off ramp was a critical selling point for me. It turned out I never ended up using Boot Camp, and it wasn't until years later that I even installed Parallels. But knowing that I could if I needed to was key.
Between my big Linux desktop and M1 MacBook Air I'm not even sure what to do in Windows anymore. I don't need to run any Windows applications per se, and everything I do really need runs on macOS. I still have a VirtualBox Win10 install on the Linux box, but honestly I'm not sure why, besides habit.
It also created the "Hackintosh" problem, where people could take normal Intel PC hardware and put OSX on it - although Apple didn't do much against that, beyond suing a couple of companies selling dongles that supposedly made building Hackintoshes even easier.
I think geeks overestimate how much Mac users actually cared about Windows compatibility. Even then, as long as users had a browser, MS Office, and Adobe products, Windows didn't matter.
> But even in 2005, Intel wasn't necessarily good - they just happened to be the least bad right then.
I don't think it's a coincidence that Intel Macs coincided with the release of the Core/Core 2 Duo CPUs. At the time there was nothing close to them by any metric. Remember, Intel enjoyed a generational lead in foundry tech for decades.
I only remembered it because recently I pulled an old android tablet out of a junk drawer, installed CPU-Z on it, and scratched my head for a moment about why it said “x86” as the architecture…
I was actually pleasantly surprised with the performance of a $100 x86 HP tablet (running Windows 8.1 of all things) I got back in 2014 or so. It booted and launched apps really fast thanks to the SSD, faster than my regular computer that had an HDD at the time. The Atom processor and single gig of RAM didn’t hamstring it too much for basic web browsing.
The strangest part of that setup was that it used a 32 bit UEFI and a 64 bit operating system, so I couldn’t use regular Linux images. But hey, you could configure the UEFI settings using only the touchscreen!
Naturally the battery life was terrible and it lasted fewer than 12 hours in sleep mode, so it spent little time outside the junk drawer too.
I had a similar experience with a Tesco (U.K. supermarket) Hudl (their short-lived tablet brand) bought in 2015. Not a bad experience at an attractive price, but awful battery life, and it got quite hot too!
I didn't forget about Atom. They give shit performance per watt, they have all sorts of issues (remember the Atom C2000 debacle?), and they're still quite expensive, relative to ARM.
Atom-based phones exist, but they're not good at anything in particular.
The last few percentage points of performance take an insane amount of power. If you gave up 10% perf you'd probably halve power consumption.
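To put rough numbers on that: dynamic power scales roughly as P ∝ C·V²·f, and voltage has to rise steeply to reach the last few hundred MHz. Here's a back-of-the-envelope sketch - the voltage/frequency points are invented for illustration, not measured from any real chip:

```python
# Back-of-the-envelope check of the "give up 10% perf, save a lot of power"
# claim, using the standard dynamic-power model P ~ C * V^2 * f.
# The V(f) points below are made up, but the shape -- voltage rising
# steeply near the top of the turbo range -- is typical.

def relative_power(v: float, f: float) -> float:
    """Dynamic power relative to an arbitrary baseline, P ~ V^2 * f."""
    return v * v * f

# Hypothetical operating points: (frequency in GHz, core voltage in V).
top_bin    = (5.0, 1.40)  # chasing the last few percent of clock
backed_off = (4.5, 1.10)  # ~10% lower clock, much lower voltage

p_top  = relative_power(top_bin[1], top_bin[0])
p_back = relative_power(backed_off[1], backed_off[0])

print(f"power at backed-off point: {p_back / p_top:.0%} of top bin")
# With these illustrative numbers: ~56% -- i.e. giving up 10% of the clock
# roughly halves dynamic power, because P scales with V^2 * f and the top
# of the V/f curve is its steepest part.
```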
I don't think there's any reason x86 has to use more power than ARM - it's simply not the focus of most implementations. As I understand it, most processors at this point are essentially an interpreter on top of a bespoke core. Intel used to get quite a lot of praise for low power consumption back in 2012-2015, with Ivy Bridge and so on - rather coincidentally, that was also when they had a process advantage (rather like the one AMD and Apple enjoy today).
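To illustrate that "interpreter on top of a bespoke core" idea, here's a toy sketch of a decoder cracking a CISC-style read-modify-write instruction into RISC-like micro-ops. The instruction syntax and micro-op names are made up; real x86 decoders are enormously more complex:

```python
# Toy illustration of a decoder cracking a CISC-style instruction into
# RISC-like micro-ops. The pseudo-assembly syntax and micro-op names are
# invented for illustration only.

def crack(insn: str) -> list[str]:
    """Split one pseudo-x86 instruction into load/execute/store micro-ops."""
    op, dst, src = insn.replace(",", "").split()
    uops = []
    if dst.startswith("["):              # memory destination: read-modify-write
        addr = dst.strip("[]")
        uops.append(f"LOAD  tmp0, [{addr}]")              # read the memory operand
        uops.append(f"{op.upper():5} tmp0, tmp0, {src}")  # do the ALU work
        uops.append(f"STORE tmp0, [{addr}]")              # write the result back
    else:                                # register destination: single ALU op
        uops.append(f"{op.upper():5} {dst}, {dst}, {src}")
    return uops

# 'add [rbx], rax' becomes three internal ops the core can schedule freely:
for uop in crack("add [rbx], rax"):
    print(uop)
```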
Yes and no. After the CISC vs. RISC war was over, I also thought ISAs were implementation details.
But from what I've read, having variable-length instructions makes extracting parallelism way harder. That's why Apple can make such crazy-wide machines.
Oh yeah, doesn't ARM use fixed-size instructions while x86_64 uses variable-size ones? So decoding x86_64 requires clever pipelining, whereas with ARM it's just "every X bytes is an instruction" and you can parallelize decode easily.
I wonder if we'll see Intel or AMD try to make another somewhat-backwards-compatible ISA jump to keep up with ARM.
If I'm not mistaken, based on similar threads on HN, decoding is never the bottleneck, so I would be hesitant to write x86 off for mobile devices. It probably does make the transition to smaller scales harder, though, and that is where most efficiency wins happen.
We should never write x86 off when there are billions behind it and variable length instructions have their advantages as well, such as code density, which may come to play an important role again in the future.
But it is much easier to simply chop a stream of instructions at every X bytes than to evaluate a portion and decide what to do later, and that difference gets larger the wider you go.
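A toy sketch of the difference, with a completely made-up encoding (fixed instructions are 4 bytes; variable-length ones carry their length in the low 2 bits of the first byte):

```python
# Why fixed-width decode parallelizes trivially and variable-length doesn't.
# The 'ISA' here is invented: fixed instructions are 4 bytes; variable-length
# instructions encode their own length in the low 2 bits of the first byte.

code = bytes([0x05, 0x11, 0x22, 0x33, 0x44, 0x02, 0xAA, 0x03, 0xBB, 0xCC])

def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
    # Every boundary is known immediately -- all decoders can start at once.
    return list(range(0, len(code), width))

def variable_boundaries(code: bytes) -> list[int]:
    # Each boundary depends on the length of the previous instruction, so
    # naive decode is a serial chain (real decoders speculate on lengths).
    boundaries, pc = [], 0
    while pc < len(code):
        boundaries.append(pc)
        length = (code[pc] & 0b11) + 1   # low 2 bits encode length 1..4
        pc += length
    return boundaries

print(fixed_boundaries(code))     # [0, 4, 8] -- computable in parallel
print(variable_boundaries(code))  # found one at a time, in order
```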
> variable length instructions have their advantages as well, such as code density
Variable length instructions in general do have a code density advantage, but x86 is a particularly poor example. For historical reasons, it wastes short encodings with rarely used things like BCD adjustment instructions, and on 64 bits often requires an extra prefix byte. The RISC-V developers did a size comparison when designing their own compressed ISA, and the variable-length x86-64 used more space than the fixed-length 64-bit ARM; for 32 bits, ARM's variable-length Thumb2 was the winner (see page 14 of https://riscv.org/wp-content/uploads/2015/06/riscv-compresse...).
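For a concrete feel, here's the encoded size of a simple "add 1 to a register" across a few ISAs - byte counts from memory, so worth double-checking against an assembler, but the pattern matches the study above:

```python
# Encoded size of "add 1 to a register" across a few ISAs, from memory --
# verify with an assembler before relying on these, but the pattern matches
# the RISC-V compressed-ISA comparison cited above.
sizes = {
    "x86-64:  add rax, 1  (48 83 C0 01)": 4,  # REX.W prefix costs a byte
    "x86-32:  add eax, 1  (83 C0 01)":    3,  # no prefix needed
    "AArch64: add x0, x0, #1":            4,  # always fixed at 4 bytes
    "Thumb-2: adds r0, #1":               2,  # 16-bit encoding available
    "RISC-V:  c.addi a0, 1":              2,  # compressed encoding
}
for insn, nbytes in sizes.items():
    print(f"{nbytes} bytes  {insn}")
```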
For many years, Intel had quite a process advantage over the competition. That of course helped them a lot in making low-power processors vs. what AMD could achieve, and the non-x86 competition had basically stopped making processors in this domain. However, there was a reason RISC designs were used in most low-power applications, like embedded and of course smartphones.
Yes, with today's complexity and transistor budgets the disadvantages of x86 can be somewhat glossed over - otherwise it would have vanished from the market long ago - but they add a certain overhead which cannot be ignored in low-power applications. The effort the CPU has to spend before it can execute instructions is higher, and x86 requires more optimization work inside the CPU than RISC designs do. Those also contain a translation layer today, but a much simpler one than x86's, since their assembly instructions map onto modern CPU structures better.
It is probably no coincidence that Intel, which had to work around the issues of executing CISC code on a modern CPU, chose the EPIC design for the Itanium, which goes beyond RISC in pushing complexity towards code generation rather than on-CPU optimization. Too bad it didn't work out - it might have, if AMD had not added 64-bit extensions to x86. While there were certainly a lot of technical challenges that were never completely solved, the processors seemed to perform quite well when running well-optimized code. Perhaps they were just one or two process generations too early. While considered large for their time, their transistor count was small compared to a modern iPhone processor's. I wonder how they would perform if simply ported to 7nm (the last CPUs were 32nm).
Even though Intel is known for putting tremendous work and effort into their compilers, and therefore has compilers that produce excellent results (even on AMD), those compilers never delivered on the promises made for Itanic.
If you'd like to see some first-hand observations about modern-ish compilers on Itanic, check out this person on Twitter who does lots of development on Itanic: