I think compilers should be smart enough to substitute group-shared atomics with horizontal ops. If they aren't already doing it, they should be!
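To be concrete about what I mean by that substitution, here's a rough CUDA sketch (my own illustration, not anything taken from a real compiler's output): the per-thread atomicAdd on a group-shared counter collapses into one ballot + popcount per warp, so only one lane per warp ever touches the counter.

```cuda
#include <cuda_runtime.h>

// Sketch of the substitution (my example, with made-up names).
// Naive version: every active thread does atomicAdd(&s_count, 1u).
// Horizontal-op version below: one ballot + popcount per warp, one atomic per warp.
// Assumes blockDim.x is a multiple of 32.
__global__ void count_active_voxels(const unsigned char* occupancy,
                                    unsigned int* block_counts, int n)
{
    __shared__ unsigned int s_count;
    if (threadIdx.x == 0) s_count = 0;
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    bool active = (i < n) && occupancy[i] != 0;

    // Horizontal op: collapse 32 would-be atomics into a single one per warp.
    unsigned int mask = __ballot_sync(0xFFFFFFFFu, active);
    if ((threadIdx.x & 31) == 0)
        atomicAdd(&s_count, (unsigned int)__popc(mask));

    __syncthreads();
    if (threadIdx.x == 0) block_counts[blockIdx.x] = s_count;
}
```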
But anyway, Histogram Pyramids are a more efficient way to implement this kind of parallel scan. The idea is to build a series of 3D buffers, each level having half the side length of the one below it, with each value holding the sum of the counts in the cells underneath it, so the top cube is a single value: the total count.
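A minimal sketch of one build step, under my own assumptions about layout (dense uint buffers, cubic levels) and with made-up names: each coarse cell sums the 2x2x2 block of cells beneath it, and you run this repeatedly, halving the side length, until the top level is a single cell.

```cuda
// One reduction step of the pyramid (a sketch, not the canonical HistoPyramid code):
// each cell of the coarser level sums the 2x2x2 block of cells underneath it.
__global__ void build_pyramid_level(const unsigned int* src, unsigned int* dst,
                                    int src_dim, int dst_dim)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= dst_dim || y >= dst_dim || z >= dst_dim) return;

    unsigned int sum = 0;
    for (int dz = 0; dz < 2; ++dz)
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx) {
                int sx = 2 * x + dx, sy = 2 * y + dy, sz = 2 * z + dz;
                sum += src[(sz * src_dim + sy) * src_dim + sx];
            }
    dst[(z * dst_dim + y) * dst_dim + x] = sum;
}
```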
Then, instead of doing a second pass where you figure out which index each thread is supposed to write to and writing it out to a buffer, you simply drill down into those cubes at the start of the meshing pass: take your thread index (let's say 1616), look at the 8 smaller cubes (okay, cube 1 has 516 entries, so 1100 to go; cube 2 has 1031 entries, so 69 to go; cube 3 has 225 entries, so we descend into cube 3), and recursively repeat until you find the cell. Since all threads in a group tend to go into the same cubes, all threads tend to read the same bits of memory until they get down to the bottom levels, making it very GPU cache friendly (divergent reads kill GPGPU perf).
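Here is roughly what that drill-down looks like as a CUDA device function. It's a sketch under my own assumptions (pyramid stored as an array of level pointers, with levels[0] being the top 1x1x1 cube and level L being 2^L cells on a side), not the original paper's code.

```cuda
#include <cuda_runtime.h>

// Drill-down sketch: walk from the top of the pyramid, at each level scan the
// 8 children of the current cell, subtracting counts until the remaining index
// falls inside one of them, then descend into that child. The finest-level
// cell reached is the one this thread should mesh.
__device__ int3 locate_cell(const unsigned int* const* levels, int num_levels,
                            unsigned int thread_index)
{
    int x = 0, y = 0, z = 0;               // position at the current coarse level
    unsigned int remaining = thread_index;

    for (int level = 1; level < num_levels; ++level) {
        int dim = 1 << level;               // this level is dim^3 cells
        int cx = 2 * x, cy = 2 * y, cz = 2 * z;
        for (int child = 0; child < 8; ++child) {
            int sx = cx + (child & 1);
            int sy = cy + ((child >> 1) & 1);
            int sz = cz + ((child >> 2) & 1);
            unsigned int count = levels[level][(sz * dim + sy) * dim + sx];
            if (remaining < count) { x = sx; y = sy; z = sz; break; }
            remaining -= count;             // skip this child, keep going
        }
    }
    return make_int3(x, y, z);
}
```

Because neighbouring threads have neighbouring indices, they follow nearly identical paths through the upper levels, which is where the cache friendliness comes from.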
Forgive me if I got the technical terminology wrong, I haven't actually worked on GPGPU in more than a decade, but it's fun to note that something I did circa 2011 as an undergrad (implementing HistoPyramids from a ~2007 paper, plus Marching Cubes, a 1980s algorithm) is suddenly relevant again. Everything old is new again.