
Seriously, why can’t Intel make a decent iGPU? My understanding is that the M1 GPU is just “OK” and isn’t competing with AMD or Nvidia?


Because for the target audience, the Intel iGPU is "good enough".

For now: for the cases where it isn't, Intel wants you to go for a dedicated graphics card. (There is a cost to stuffing more onto the same CPU package; chiplets reduce that cost by quite a bit, but they are a new technology you have to integrate into your design flow, tooling, etc.)

In the future: similar, but due to some shifts in their approach, Intel might reach for chiplets. Likewise, there are dedicated Intel graphics cards they want to sell you.


The M-series GPUs are pretty great for what they are. I haven't tried a ton of games on it, but my 16" M1 Pro handles WoW about as well as a desktop with an RTX 2070 or so, and does it without the machine turning into a jet turbine. It's not destroying discrete video cards or anything, but it's universes beyond your typical integrated offering.


The M-series chips feature a lot of fixed-function ASIC blocks, including media encoders/decoders.

My M1 Air absolutely DESTROYS my RTX 3090 in H.265 4:2:2 video editing, which is what my camera (Sony A7S III) shoots.

My 3090 would stutter and play my 60fps footage at 11fps while drawing 350W, while my passively cooled M1 doesn't miss a frame.


This just means the M1 has a fixed-function block for your codec or whatever else you're doing and the 3090 doesn't. Or that your video software is crazy broken.

It obviously doesn't matter for your workload; ultimately the M1 is faster, and that's all that matters for you. Just highlighting that this isn't magic, just hardware matched to the use case.
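
If you want to see that in action, ffmpeg is an easy way to check whether a clip is actually being decoded in hardware. Rough sketch (the filename is a placeholder, and as far as I know Ampere's NVDEC has no 4:2:2 HEVC support, so the CUDA path falls back to CPU decode while VideoToolbox on the M1 keeps it in the media engine):

    # Minimal sketch: time a decode-only pass of a clip with a given hwaccel backend.
    # "a7s3_422.mov" is a made-up filename; point it at your own footage.
    import subprocess, time

    def bench_decode(hwaccel, clip="a7s3_422.mov"):
        start = time.time()
        # "-f null -" discards the decoded frames, so this measures decode only;
        # loglevel "warning" keeps ffmpeg's hwaccel-fallback notice visible.
        subprocess.run(["ffmpeg", "-v", "warning", "-hwaccel", hwaccel,
                        "-i", clip, "-f", "null", "-"], check=True)
        print(f"{hwaccel}: {time.time() - start:.1f}s")

    bench_decode("videotoolbox")  # run this one on the M1
    # bench_decode("cuda")        # run this one on the 3090 box

Same clip, same tool; the only difference is whether a fixed-function decoder picks it up.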


It also means that your hardware has a tendency to "strangely become super slow" when video codecs change over time.

But then they don't change that often.

You probably change your Apple laptop more often.

And given that, to a large degree, codecs are pretty much the only "high-performance" use case Apple's target audience has, putting special hardware in to make them go fast is the right way to go.


That must be new with the M1s; on the exact same project, my 2018 Mac struggles (<5 FPS) while my 1080 rolls through ProRes 422 without any issue.


I believe they are somewhere between equal to and a quantum leap over what Apple was shipping before, depending on the task/machine.

But as we all know, what Apple is/was shipping and the best you can get in a PC laptop are very different things.


Because of memory bandwidth limitations. There are EIGHT memory channels in the M1 Max. To pull off that amount of bandwidth you would need to do a lot of crazy engineering and get not that much in return, because this approach doesn't scale well beyond M1 Max performance. You can't just add more and more memory channels. Even if they slap two chips together and use the memory controllers of both, total bandwidth would still be less than in one 3090. And it would, most likely, lose to a 3090 in all tasks other than video encoding. And you wouldn't be able to put a lot of these chips together, too much overhead. But on the other hand you can put 4x 3090 into one PC. In the end it is just cheaper to use a dedicated GPU past a certain level of performance, especially if you don't make your own hardware AND software.
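
The back-of-the-envelope math, using the commonly cited specs (so treat the exact figures as approximate):

    # Peak bandwidth ≈ bus_width_bits / 8 * transfer rate (GT/s), giving GB/s.
    # The specs below are the commonly quoted ones, not measurements.
    def peak_gb_per_s(bus_width_bits, gigatransfers_per_s):
        return bus_width_bits / 8 * gigatransfers_per_s

    print(peak_gb_per_s(512, 6.4))   # M1 Max, LPDDR5-6400 -> ~410 GB/s
    print(peak_gb_per_s(384, 19.5))  # RTX 3090, GDDR6X    -> ~936 GB/s

So even doubling the M1 Max interface only gets you to roughly 820 GB/s, still short of a single 3090, which is the scaling problem above.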


The 10th and 11th gen Intel CPUs each brought a fairly significant boost to integrated graphics performance. I played Halo Reach and a bunch of other games on a 10th-gen iGPU.

But it had stagnated since the 4th gen. Intel's 10th-gen graphics were meant to go out with their first 10 nm chips, but those took them an extra few years.

All that said, yeah, I'd love to see even better integrated graphics. As others have mentioned, the RAM is a significant limitation.


Your understanding is correct, for the most part. Supposedly the new Xe graphics aren't all that bad, but I haven't used them enough to know for sure. AMD's still the king for iGPU performance, and Nvidia (as usual) takes the dGPU performance crown.


Yeah, Xe isn’t bad but it’s not anything close to what Apple is doing on the M1 Max or what AMD and Nvidia are doing on the discrete side for laptops.


Intel refuses to go beyond a 128-bit memory bus, which starves the iGPU.
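
For a rough sense of scale (same peak-bandwidth arithmetic, commonly cited numbers):

    # 128-bit (dual-channel) DDR4-3200: 128/8 * 3.2 ≈  51 GB/s, shared with the CPU
    # 128-bit (dual-channel) DDR5-4800: 128/8 * 4.8 ≈  77 GB/s
    # M1 Max, 512-bit LPDDR5-6400:      512/8 * 6.4 ≈ 410 GB/s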


Don’t they have their own new GPU that people expected them to announce around now, but it got “delayed”?


I don't expect Arc Alchemist to be that different from a GeForce RTX 3070 or 3070 Ti. It's still going to be ~90W vs. a 60W M1 Max.



