
Given how every CPU vendor seems to be pushing some kind of NPU, locally run models will probably be far more common in the next 5 years. And convincing everyone to pay a subscription for very minimal improvements in functionality is going to be hard.


The NPUs integrated into CPU SoCs are very small compared to even integrated GPUs, much less discrete or datacenter GPUs.

NPUs seem to be targeted at running tiny ML models at very low power, not at running large AI models.
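To put the size gap in rough perspective, a quick back-of-the-envelope comparison of peak low-precision throughput (TOPS) makes the point. The figures below are approximate, publicly quoted ballpark numbers (not benchmarks), so treat the ratios as order-of-magnitude only:

```python
# Approximate peak INT8 TOPS, from vendor marketing figures (ballpark only).
npu_tops = {
    "Intel Meteor Lake NPU": 11,       # ~11 TOPS (approx.)
    "Qualcomm Hexagon (X Elite)": 45,  # ~45 TOPS (approx.)
}
gpu_tops = {
    "NVIDIA RTX 4090": 660,   # ~660 dense INT8 TOPS (approx.)
    "NVIDIA H100": 1979,      # ~1979 dense INT8 TOPS (approx.)
}

for gpu_name, g in gpu_tops.items():
    for npu_name, n in npu_tops.items():
        print(f"{gpu_name} has ~{g / n:.0f}x the peak TOPS of {npu_name}")
```

Even the fastest laptop NPUs land one to two orders of magnitude below a single datacenter GPU, which is why they target always-on background inference rather than large-model serving.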



