Hacker News

Well, not M-series chips specifically, but chips optimized for these kinds of workloads (like the Neural Engine in M-series chips is).


IIRC the M-series chips aren't specifically optimized for ML workloads; their biggest gain is unified GPU and CPU memory, since transferring layers between the two is a big bottleneck on non-Apple systems.
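A toy sketch of the point above, with made-up numbers (not real GPU code): on a system with discrete GPU memory, each layer's weights have to be copied from host RAM into VRAM before the GPU can touch them, while with unified memory the CPU and GPU share one pool and that copy disappears.

```python
LAYER_BYTES = 64 * 1024 * 1024  # pretend each layer is 64 MiB of weights
layer = bytearray(LAYER_BYTES)

def run_discrete(layers):
    """Simulate discrete memory: every layer is copied before 'compute'."""
    copied = 0
    for _ in range(layers):
        device_copy = bytes(layer)  # stand-in for the host-to-VRAM transfer
        copied += len(device_copy)
    return copied

def run_unified(layers):
    """Simulate unified memory: 'compute' reads the same buffer, no copy."""
    touched = 0
    for _ in range(layers):
        touched += len(layer)  # stand-in for in-place access by the GPU
    return touched

# Same bytes processed either way; only the discrete path pays for copies.
assert run_discrete(4) == run_unified(4) == 4 * LAYER_BYTES
```

The same shape shows up in real frameworks: with a discrete GPU you move weights explicitly (e.g. `.to(device)` in PyTorch), and for large models those transfers dominate unless the whole model fits in VRAM.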

Real ML hardware (like Nvidia's H100s) that can handle the kind of inference traffic you see in production gets hot and uses quite a bit of energy, especially when it runs at full blast 24/7.



