
Well, comparing MemoryX to H100 HBM3, the key details are that MemoryX has lower latency, but also far lower bandwidth. However, memory on Cerebras scales a lot further than on Nvidia. You need a cluster of H100s just to fit a large model, since that's the only way to scale the memory; Cerebras is better suited to that aspect. Nvidia does its scaling in tooling, while Cerebras does its scaling in design via its silicon approach (rough sizing sketch below).

That's my take on it all; there aren't many apples-to-oranges comparisons to work from on these two systems, even for rolling them down the same slope.
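A rough sizing sketch of the memory-scaling point, assuming ~80 GB of HBM3 per H100 and roughly 16 bytes per parameter for mixed-precision Adam training (both figures are my assumptions, not from this thread):

    # Back-of-the-envelope: how many H100s are needed just to hold the
    # training state for a given model size. Assumptions (not from the
    # thread): 80 GB HBM3 per H100, ~16 bytes/param for fp16 weights and
    # grads plus fp32 Adam optimizer states.
    import math

    HBM_PER_H100_GB = 80            # assumed HBM3 capacity per H100
    BYTES_PER_PARAM_TRAINING = 16   # assumed training-state footprint per param

    def h100s_needed(params_billions: float) -> int:
        """Minimum H100 count whose combined HBM can hold the training state."""
        total_gb = params_billions * BYTES_PER_PARAM_TRAINING  # 1e9 params * bytes / 1e9 B-per-GB
        return math.ceil(total_gb / HBM_PER_H100_GB)

    for size_b in (7, 70, 175, 1000):
        print(f"{size_b}B params -> ~{h100s_needed(size_b)} H100s just for memory")
    # e.g. ~35 H100s of HBM for a 175B-parameter model before any compute
    # parallelism is considered; that's the "cluster just to fit the model"
    # point, whereas MemoryX holds the weights in one external memory pool.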



No way an off-chip memory has the same or better bandwidth than on-chip HBM.


> MemoryX has lower latency, but also far lower bandwidth



