
They actually developed a discrete GPU starting around 2008, even demoed it a bit, but ended up canning the project

https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)



IMO this was plagued from the start, including the HPC accelerators (Xeon Phi or whatever those were called). Intel put too much emphasis on x86 compatibility and even then couldn't keep up with Nvidia in either performance or software support, except for a few edge cases. Even back then, if you wanted to build a compute coprocessor, it needed to be a thoroughbred optimized for throughput at the floating-point precisions your target market cared about. Nowadays the bar for entry is being able to reuse your ALUs at multiple bit widths, so the same silicon accelerates both NN workloads and classic HPC.
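The "reuse your ALUs at multiple bit widths" idea can be sketched in miniature as SIMD-within-a-register (SWAR): one wide operation performing several narrow ones on packed lanes, which is the software analogue of hardware that splits a wide datapath into multiple narrow ones. A toy Python illustration (not vendor code; `packed_add16`, `pack`, and `unpack` are names made up for this sketch):

```python
# SWAR sketch: one 32-bit addition performs two independent 16-bit
# additions, loosely analogous to an ALU reused at multiple bit widths.

def pack(hi: int, lo: int) -> int:
    """Pack two 16-bit lane values into one 32-bit word."""
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

def unpack(w: int) -> tuple:
    """Split a 32-bit word back into its (hi, lo) 16-bit lanes."""
    return (w >> 16) & 0xFFFF, w & 0xFFFF

def packed_add16(x: int, y: int) -> int:
    """Lane-wise 16-bit addition (mod 2**16 per lane) in one wide add.

    Mask off the top bit of each lane before adding so a carry out of
    the low lane cannot spill into the high lane, then restore the top
    bits with XOR (sum bit = x ^ y ^ carry).
    """
    H = 0x80008000                      # top bit of each 16-bit lane
    partial = (x & ~H) + (y & ~H)       # carry-safe add of low 15 bits
    return (partial ^ ((x ^ y) & H)) & 0xFFFFFFFF

# Two additions for the price of one wide one:
print(unpack(packed_add16(pack(3, 5), pack(10, 20))))      # (13, 25)
# Lane wraparound stays contained; the carry never crosses lanes:
print(unpack(packed_add16(pack(1, 0xFFFF), pack(0, 1))))   # (1, 0)
```

Real GPUs and vector ISAs do this in hardware (e.g. packed-math instructions that treat one 32-bit register as two FP16 values), which is what lets a single datapath serve both low-precision NN inference and wider-precision HPC work.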



