
AMD has CPUs with a built-in GPU that has hundreds of ALUs and unified memory (the GPU uses system memory). Couldn't such a CPU be used for ML tasks without purchasing expensive graphics cards? 64 GB of ordinary RAM is cheaper than 64 GB of VRAM.

Currently such built-in GPUs seem to be optimized for games only.



VRAM speed is a major factor. Even DDR5 with 4 channels would be slow compared to current GDDR memory.
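A rough back-of-the-envelope comparison makes the gap concrete. The configurations below are illustrative assumptions (4-channel DDR5-6000, and a 384-bit GDDR6X bus at 21 Gbit/s per pin, as found on high-end discrete GPUs), not measurements:

```python
# Rough peak-bandwidth comparison; the memory configurations are
# illustrative assumptions, not measurements of any specific product.

def ddr_bandwidth_gbs(mt_per_s, channels, bus_bits=64):
    """Peak DDR bandwidth in GB/s: transfers/s * bytes/transfer * channels."""
    return mt_per_s * (bus_bits / 8) * channels / 1000

def gddr_bandwidth_gbs(gbps_per_pin, bus_bits):
    """Peak GDDR bandwidth in GB/s: per-pin rate * bus width in bytes."""
    return gbps_per_pin * bus_bits / 8

# Assumed: 4-channel DDR5-6000 (64-bit channels)
ddr5 = ddr_bandwidth_gbs(6000, channels=4)   # ~192 GB/s
# Assumed: 384-bit GDDR6X at 21 Gbit/s per pin
gddr6x = gddr_bandwidth_gbs(21, 384)         # ~1008 GB/s

print(f"DDR5 x4 channels: {ddr5:.0f} GB/s")
print(f"GDDR6X 384-bit:   {gddr6x:.0f} GB/s ({gddr6x / ddr5:.1f}x faster)")
```

Even a generous quad-channel DDR5 setup lands around a fifth of the peak bandwidth of a high-end discrete card's GDDR, which is why bandwidth-bound ML workloads suffer on integrated GPUs.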


That's correct, but if you are, for example, multiplying two 1024x1024 matrices (a typical ML workload), you need roughly 10^9 multiply-add operations on only about 3 million numbers (the two inputs plus the output). There is no need to load a lot of data, so can't we do without fast memory in this case? Hopefully the GPU's cache and registers can handle it.
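The argument above is about arithmetic intensity: FLOPs performed per byte of memory traffic. A minimal sketch for an n x n fp32 matmul (assuming ideal data movement, i.e. each matrix crosses the memory bus once):

```python
# Arithmetic intensity of an n x n fp32 matrix multiply (idealized:
# assumes each matrix is read/written from memory exactly once).
n = 1024
flops = 2 * n**3                 # one multiply + one add per inner-loop step
bytes_moved = 3 * n**2 * 4       # read A and B, write C; 4 bytes per fp32
intensity = flops / bytes_moved  # FLOPs per byte of ideal memory traffic

print(f"{flops / 1e9:.2f} GFLOP over {bytes_moved / 1e6:.1f} MB "
      f"-> {intensity:.0f} FLOP/byte")
```

At roughly 170 FLOP/byte, a single large matmul is compute-bound on most hardware, which supports the point. The caveat is that real ML inference streams far more data than one pair of matrices: model weights are touched once per token, so end-to-end throughput is often bandwidth-bound even when each individual matmul is not.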



