Hacker News

Doesn't sharing a virtual memory context with the GPU increase the cost of context switching? Also, which CPU core shares context with the GPU? Or are we talking about a fixed mapping (like the kernel)?


I'd assume they'd handle sharing memory between a CPU and a part of the GPU the exact same way they'd handle sharing between two CPU cores.



