
Even with UMA architectures where you eliminate the memory transport costs, it still costs a ton of time to actually launch a GPU kernel from the CPU.


Yeah, that's why I qualified with 'could'. Really depends on what facilities the hardware and driver provide. If the GPU is on the same die, perhaps the launch latency isn't so bad, but I really don't have data on that. I'd really like to see something like voxel deformable/destructible environments leveraging UMA on the Apple M series. That seems like it could be groundbreaking, if only Apple really cared about gaming at all.
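The tradeoff being discussed can be sketched as a back-of-envelope cost model: offloading only pays off once the work per kernel is large enough to amortize the fixed launch overhead. All numbers below are made-up placeholders for illustration, not measurements of any real CPU or GPU.

```python
# Back-of-envelope model of when GPU offload beats the CPU despite
# per-kernel launch overhead. The constants are illustrative
# assumptions, not measured values for any real hardware.

LAUNCH_OVERHEAD_S = 10e-6   # assumed per-kernel CPU->GPU launch cost
GPU_RATE = 1e9              # assumed elements/second on the GPU
CPU_RATE = 1e8              # assumed elements/second on the CPU

def gpu_time(n_elements: int, n_launches: int = 1) -> float:
    """Total GPU time: fixed launch cost per kernel plus compute time."""
    return n_launches * LAUNCH_OVERHEAD_S + n_elements / GPU_RATE

def cpu_time(n_elements: int) -> float:
    """Total CPU time: compute only, no launch overhead."""
    return n_elements / CPU_RATE

def gpu_wins(n_elements: int, n_launches: int = 1) -> bool:
    """True when offloading is faster under this toy model."""
    return gpu_time(n_elements, n_launches) < cpu_time(n_elements)
```

Under these placeholder numbers, a tiny workload loses to the CPU because launch overhead dominates, while a large single-kernel workload wins; splitting the same work across many small launches can hand the win back to the CPU, which is the point about launch cost even on UMA.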




