
What’s baffling about all this is that both AMD and Intel have competing offerings, and those offerings see next to no traction despite being much more attractive from a cost standpoint. I understand why they aren’t taking off on the training side: fragmentation is very counterproductive there. But why not deploy them in large quantities for inference at least? The effort of porting transformer-based models is trivial for both vendors, and the performance is very competitive.


CUDA is their API and is proprietary. It's very fast and it's a significant competitive advantage.

https://analyticsindiamag.com/is-cuda-nvidias-competitive-mo...


It really doesn’t matter whether you have CUDA or not if you’re going to run inference at scale. As I said above (speaking from experience), porting models for inference is not a technically difficult problem. Indeed, with both Intel’s Gaudi and AMD’s MI series of accelerators, a lot of the popular architectures and their derivatives are supported either out of the box or with minimal tweaks.
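To illustrate why the porting effort is often near zero: PyTorch’s ROCm builds expose the same `torch.cuda` namespace as the CUDA builds, so typical inference code runs unmodified on either vendor’s hardware. A minimal sketch (assumes only a stock PyTorch install; the toy `nn.Linear` model stands in for a real transformer, and the code falls back to CPU when no accelerator is present):

```python
import torch

def pick_device() -> torch.device:
    # On ROCm builds, torch.cuda.is_available() reports AMD GPUs too,
    # so this one line covers NVIDIA, AMD, and CPU-only machines alike.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()

# Toy stand-in for a real model; a transformer checkpoint would be
# loaded and moved to the device the same way.
model = torch.nn.Linear(16, 4).to(device).eval()

with torch.no_grad():
    out = model(torch.randn(2, 16, device=device))
```

The vendor-specific work mostly lives below this API (kernels, graph compilers), which is why the common architectures tend to run with little or no model-side change.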


AMD's only offering that matters, MI300x, is literally a couple months old. Give it time.


MI250 is also quite potent. Certainly sufficient for a good chunk of inference use cases


You are totally right, but few are buying them up at this point. The future is all MI300x.


I think the problem the other players have is they don't actually want to compete head-on with Nvidia.

AMD actually commissioned a drop-in CUDA emulator, and we only found out because they stopped financing it and it was open-sourced as part of the contract.

I would speculate that no one actually wants a "clone wars" situation since it would commodify the GPU and reduce everyone's profit rate.


The companies buying billions of dollars of Nvidia GPUs definitely do want to commodify the GPU


The question is what the performance per watt of AMD’s and Intel’s products is. My guess is both are significantly worse. Energy and cooling are huge data center expenses, and paying less up front for a product that requires more energy and cooling is not a good idea if it costs more over its lifetime.
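The lifetime-cost reasoning above can be made concrete with a back-of-the-envelope sketch. All figures here are made-up assumptions for illustration (prices, wattages, electricity rate, PUE), not real vendor data; the point is only that a sufficiently large power gap can erase a purchase-price advantage:

```python
HOURS_PER_YEAR = 24 * 365
POWER_COST_PER_KWH = 0.10  # USD per kWh, assumed
PUE = 1.5                  # assumed cooling/overhead multiplier

def lifetime_cost(price_usd: float, watts: float, years: int = 4) -> float:
    # Total cost of ownership = purchase price + energy over the
    # service life, with cooling folded in via the PUE multiplier.
    kwh = watts / 1000 * HOURS_PER_YEAR * years * PUE
    return price_usd + kwh * POWER_COST_PER_KWH

# Hypothetical accelerators: A is pricier but efficient,
# B is cheaper but draws twice the power.
gpu_a = lifetime_cost(price_usd=25_000, watts=700)
gpu_b = lifetime_cost(price_usd=22_000, watts=1400)
```

Under these assumed numbers the cheaper card B ends up costing more over four years, which is the scenario the comment is warning about; with a smaller power gap or cheaper electricity the conclusion flips, so perf/watt really is the deciding variable.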


Nope. Not for LLM workloads at least. They’re competitive across the board.



