Hacker News

In this case GPUs are actually the perfect argument in favor of the article. GPUs only speak in objects, never linear addresses. You allocate an object of a given type, and that allocation is that type from then on; e.g., a texture is always a texture, never a vertex buffer, which is a different object altogether. And you never work with addresses. You can point objects at other objects, or at an offset within an object, but you can never do arbitrary pointer manipulation, and you never have any idea whether the address space is linear or not.


> GPUs only speak in objects, never linear addresses.

Shared Virtual Memory is literally "the GPU sees region X the same way the CPU sees it", and it is implemented on all desktop GPUs today: https://www.intel.com/content/www/us/en/developer/articles/t...

By treating pointers the same on the CPU and GPU (their 64-bit address spaces are made identical by sharing the same memory regions and using PCIe to keep those regions in sync), you can perform high-speed CPU/GPU communication of even linked data structures (linked lists, trees, graphs).

GPUs utilize many linked data structures: octrees accelerate bounds testing, BVH trees help raytracing, etc. The GPU addresses must be linear because they're synchronized with the CPU's memory space.




