The OP's 180 lines of C++ could be rewritten in a similar number of lines of HLSL or GLSL. The resulting code would render in real time while also consuming less electricity.
Indeed, I don't see why people like this. In the modern world, doing graphics on the CPU is very inefficient.
You miss the point. The point is not doing it efficiently, but explaining the concepts without being distracted by getting it running on specific hardware and the like. Read the introduction to the overall series.
Have you read the linked article? It says "I want to have a simple stuff applicable to video games."
> without being distracted by getting it running on specific hardware
Just target Windows and use Direct3D; 99% of PC game developers do just that. The last GPU that didn't support D3D feature level 11.0 was Intel Sandy Bridge from 2011. Everything newer than that supports 11.0, and unlike OpenGL with its extensions, the majority of features are mandatory. I've rarely seen compatibility issues across GPUs in recent years, and when I did it was a driver bug.
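To give an idea of how little ceremony that involves, here is a minimal sketch of my own (error handling and swap-chain setup omitted) of requesting exactly feature level 11.0 through the standard D3D11 API:

    // Sketch: create a hardware D3D11 device and require feature level 11.0.
    // Real code would also create a swap chain and handle failure properly.
    #include <d3d11.h>
    #pragma comment(lib, "d3d11.lib")

    bool createDevice(ID3D11Device** device, ID3D11DeviceContext** context)
    {
        const D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_11_0;
        D3D_FEATURE_LEVEL obtained = {};

        // Ask for exactly 11.0; on hardware newer than Sandy Bridge this succeeds.
        HRESULT hr = D3D11CreateDevice(
            nullptr,                   // default adapter
            D3D_DRIVER_TYPE_HARDWARE,  // real GPU, not the WARP software rasterizer
            nullptr, 0,
            &requested, 1,
            D3D11_SDK_VERSION,
            device, &obtained, context);

        return SUCCEEDED(hr) && obtained == D3D_FEATURE_LEVEL_11_0;
    }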
The algorithm is applicable to video games. As you yourself point out, it would not take a lot of effort to rewrite. I suggest you read the very next paragraph, which ends:
> I do not pursue speed/optimization at all, my goal is to show the underlying principles.
...
> Just target Windows and use Direct3D, 99% of PC game developers do just that.
Misses the point of the series. From the introduction (linked at the top of the page):
> I do not want to show how to write applications for OpenGL. I want to show how OpenGL works. I am deeply convinced that it is impossible to write efficient applications using 3D libraries without understanding this.
The exact same could be said of Direct3D. For that article he gives his students a class that reads/writes TGA images and sets pixels, which should make it exceedingly clear that the point is to keep the focus on the algorithm and nothing else: teach the principles without people being sidetracked by libraries, differences between platforms, and the like.
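That class really is all the scaffolding needed. As a rough sketch of the idea (mine, not his actual code), an uncompressed 24-bit TGA writer plus a set-pixel method fits in a few dozen lines with zero dependencies:

    // Illustrative only -- not the author's class. A tiny framebuffer you can
    // set pixels on and dump as an uncompressed 24-bit TGA (BGR, bottom-up).
    #include <cstdint>
    #include <fstream>
    #include <vector>

    struct Image {
        int w, h;
        std::vector<uint8_t> px;
        Image(int w, int h) : w(w), h(h), px(size_t(w) * h * 3, 0) {}

        void set(int x, int y, uint8_t r, uint8_t g, uint8_t b) {
            if (x < 0 || y < 0 || x >= w || y >= h) return;
            uint8_t* p = &px[(size_t(y) * w + x) * 3];
            p[0] = b; p[1] = g; p[2] = r;          // TGA stores BGR
        }

        void write_tga(const char* path) const {
            uint8_t header[18] = {};
            header[2]  = 2;                        // uncompressed true-color
            header[12] = w & 0xFF;  header[13] = (w >> 8) & 0xFF;
            header[14] = h & 0xFF;  header[15] = (h >> 8) & 0xFF;
            header[16] = 24;                       // bits per pixel
            std::ofstream out(path, std::ios::binary);
            out.write(reinterpret_cast<const char*>(header), sizeof header);
            out.write(reinterpret_cast<const char*>(px.data()), px.size());
        }
    };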
And in any case, I explained what the appeal of this is to people here. That you think it could be done differently does not change the fact that the appeal is exactly that there are no dependencies like Direct3D or Windows or anything else (I don't have Windows anywhere, so for me that would have made it relatively uninteresting, as it would for a lot of other people here). I don't care about the performance; I care about the concepts.
> I am deeply convinced that it is impossible to write efficient applications using 3D libraries without understanding this.
What he explains is almost irrelevant to efficiency. Other things are what matter: early Z, tiled rendering, and other aspects of GPU architecture such as resource types, the cache hierarchy, fixed-function pipeline steps, warps, and many others.
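For example, early Z is something you exploit through pass ordering and pipeline state, not rasterizer internals. A rough D3D11 sketch of my own (assuming a depth-only pre-pass has already filled the depth buffer) of the state you would set for the main shading pass:

    // Sketch: set main-pass depth state after a depth-only pre-pass, so that
    // occluded pixels fail the (early) depth test before the pixel shader runs.
    #include <d3d11.h>

    void setMainPassDepthState(ID3D11Device* device, ID3D11DeviceContext* context)
    {
        D3D11_DEPTH_STENCIL_DESC desc = {};
        desc.DepthEnable    = TRUE;
        desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // Z already written by the pre-pass
        desc.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;  // keep only the front-most surfaces

        ID3D11DepthStencilState* state = nullptr;
        if (SUCCEEDED(device->CreateDepthStencilState(&desc, &state))) {
            context->OMSetDepthStencilState(state, 0);
            state->Release();  // the context keeps its own reference while bound
        }
    }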
> I care about the concepts.
I've been programming C++ for a living since 2000, about half of that time on something related to 3D graphics, both games and CAD. You no longer need a deep understanding of the rasterizer. A vague understanding of what the hardware does, and how to control it, is already enough. Only people working at companies like nVidia or Chaos Group need that info, IMO.
Yes, irrelevant, because the article is simply describing how to combine a sphere, bumps, and noise to look like an explosion.
Your project seems interesting in its own way, but I don't see why you'd juxtapose it with this tutorial, other than that they both involve pixels.
The project was just an example of the HN bias against GPUs, or at least against doing graphics on GPUs.
Try searching "GPU" on this site. 100% of the first page of results are about using GPGPU in the cloud. Do you think that matches what people buy GPUs for, or the amount of code developers write for them?
I searched "GPU" and almost all of the top results were people doing weird/unusual things with the GPU: a terminal emulator, Postgres query acceleration, a stripped-down HTML engine, an APL compiler, etc. A search for "Direct3D" suggests HN doesn't have much interest in Direct3D, though.
> weird/unusual things with the GPU: terminal emulator, Postgres query acceleration, stripped down HTML engine, APL compiler
Exactly. People here mostly do general-purpose computations on them, even though the “G” stands for graphics.
Search for “Graphics”, and the majority of the top results are for CPU-based stuff: pixel graphics in the terminal, Python, Skia. There are some results for GPU-based graphics, like the graphics studies of MGS and GTA, but they’re a minority.
I think graphics is what the majority of users use their hardware for, but it’s under-represented here.