Because for the target audience the Intel iGPU is "good enough".
For now: and for the cases where it isn't, Intel wants you to go for a dedicated graphics card. (There is a cost to stuffing more onto the same CPU package; chiplets reduce that cost by quite a bit, but they are a new technology you have to integrate into your design flow, tooling, etc.)
In the future: similar, but due to some shifts in their approach Intel might reach for chiplets. Likewise, there are dedicated Intel graphics cards they want to sell you.
The M-series GPUs are pretty great for what they are. I haven't tried a ton of games on it, but my 16" M1 Pro handles WoW about as well as a desktop with an RTX 2070 or so, and does it without the machine turning into a jet turbine. It's not destroying discrete video cards or anything, but it's universes beyond your typical integrated offering.
This just means the M1 has a fixed-function block for your codec (or whatever else you're doing) and the 3090 doesn't. Or that your video software is badly broken.
It obviously doesn't matter for your workload; ultimately the M1 is faster, and that's all that matters for you. Just highlighting that this isn't magic, just hardware matched to the use case.
It also means that your hardware has a tendency to "strangely become super slow" when video codecs change over time.
But then, codecs don't change that often.
You probably change your Apple laptop more often.
And given that codecs are pretty much the only "high performance" use case Apple's target audience has, putting special hardware in to make them go fast is the right way to go.
Because of memory bandwidth limitations. There are EIGHT memory channels in the M1 Max. To pull off that amount of bandwidth you would need to do a lot of crazy engineering and get not that much in return, because this approach doesn't scale well beyond M1 Max performance. You can't just keep adding memory channels. Even if they slap two chips together and use the memory controllers of both, total bandwidth would still be less than in one 3090. And it would, most likely, lose to the 3090 in all tasks other than video encoding. And you wouldn't be able to put a lot of these chips together either; too much overhead. But on the other hand, you can put 4x 3090s into one PC. In the end it is just cheaper to use a dedicated GPU past a certain level of performance, especially if you don't make your own hardware AND software.
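The arithmetic behind that claim can be sketched out. This is a rough sketch using public spec-sheet numbers (512-bit LPDDR5-6400 for the M1 Max, 384-bit GDDR6X at 19.5 Gbps for the 3090), giving theoretical peak figures, not measured throughput:

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * transfer_rate_mts / 1000

# M1 Max: 512-bit LPDDR5 interface at 6400 MT/s
m1_max = peak_bandwidth_gbs(512, 6400)      # ~410 GB/s

# RTX 3090: 384-bit GDDR6X interface at 19500 MT/s
rtx_3090 = peak_bandwidth_gbs(384, 19500)   # 936 GB/s

print(f"M1 Max:          {m1_max:.0f} GB/s")
print(f"RTX 3090:        {rtx_3090:.0f} GB/s")
print(f"Two M1 Max dies: {2 * m1_max:.0f} GB/s (still below one 3090)")
```

So even a hypothetical perfectly-scaling two-die M1 Max lands around 820 GB/s of peak bandwidth, short of a single 3090's 936 GB/s.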
The 10th and 11th gen Intel CPUs each brought a fairly significant boost to integrated graphics performance. I played Halo Reach and a bunch of other games on a 10th gen iGPU.
But it had stagnated since the 4th gen. Intel's 10th gen graphics were meant to go out with their first 10 nanometer chips, but those took them an extra few years.
All that said, yeah, I'd love to see even better integrated graphics. As others have mentioned, the RAM is a significant limitation.
Your understanding is correct, for the most part. Supposedly the new Xe graphics aren't all that bad, but I haven't used them enough to know for sure. AMD's still the king for iGPU performance, and Nvidia (as usual) takes the dGPU performance crown.