Drawing lines in OpenGL, and on 3D hardware in general, has always had some of these issues. The best support for GL/OpenGL line drawing I knew of was in the more expensive SGI machines. (Capitalized on purpose to indicate before they rebranded lowercase -- but this is fuzzy memory and I'm no sgi history expert.) The cheaper ones' lines definitely weren't as good, but SGIs did have pretty nice line support back in the day, better than what you get in WebGL today.
One thing I didn't see mentioned here is that the line drawing API calls are typically not as well optimized as the triangle mesh calls, or so I've heard. Part of the reason good line drawing support was more expensive was (allegedly) that only a few customers truly need antialiased lines with performance as good as the mesh API, and it's extra silicon: lines have specific needs not shared with meshes.
FWIW, I've tried all these approaches in production and ended up just doing the meshing myself, and avoiding shader tricks. It's not that bad; it gives the most control, and once you have an abstraction for it, you don't need to think about it again.
For us screen space worked a bit better, but I'm sure it varies by use case. Getting the right fall-off on non-aligned pixels is always tricky (including the 0.5f offset in Direct3D).
It's pretty much impossible to draw pixel-perfect lines in OpenGL. Even SDL2's library code fails to do it in certain instances, when its 2D drawing functions use OpenGL - and SDL2 is super clean and solid as a rock. Sometimes you're even better off using images (!!) to draw lines. The sad thing is that Bresenham's line algorithm is so very simple. Somewhere between my program and the GPU, the communication of where exactly the line starts and ends is lost.
> The naïve algorithm in float averages 4.81 µs, Bresenham’s algorithm averages at 1.84 µs, my fixed point variation at 1.74 µs. The Fixed point implementation runs about 5% faster. Which isn’t all that much, but still better than Bresenham’s; and much better than the naïve version using mixed floats and integers.
> The situation is similar on the 64 bits machine, but the advantage of fixed point vanishes. Both methods takes very similar times: fixed point averages at 0.85 µs and Bresenham 0.84 µs, a difference of about 1%. However, the naïve implementation is still very far behind, at 2.96 µs.
When you said "easily beaten" I kind of expected more than just a 1-5% performance improvement.
There is also another (potential) problem with fixed point: it works by adding up rounded numbers:
> The only thing we need, is to compute the slope m as a fixed point number rather than a floating point number.
As a result, the fixed point might create rendering artefacts from adding up a rounded number repeatedly. This makes the whole thing an apples-to-oranges comparison:
- floating point method that does multiply/divide every iteration
- fixed point that adds a rounded fraction
- Bresenham that adds the fraction without rounding, by splitting it into an accumulator and divisor (Bresenham is actually really simple primary-school math with some geometry on top)
To make a truly fair comparison, we should add a version that precomputes the floating point fraction and adds that each iteration; I suspect the main slowdown of the naive floating point algorithm is the repeated multiplies/divides, not the casts to integer.
Now that you mention it, I know the basic "add/sub is faster than multiply, which is much faster than divide" ordering, but I have no idea how casts compare. Why isn't that ever mentioned anywhere?
I'm guessing it's for the same reason casts stay slow: they're not bottlenecks for the most common number-crunching workloads, so it doesn't occur to anyone that they might be bottlenecks for somewhat less common ones.
I was also thinking this might be about how one draws a straight line. If you found those links interesting, you might also enjoy How Round is Your Circle[0].
Fun fact: Bosch's Axial Glide miter saw uses a Sarrus linkage rather than slides to generate its straight-line motion. I thought it was kind of a neat application for the oldest straight-line linkage, especially since it didn't get much traction when it was first invented. It turns out that for most applications, a planar linkage is preferable.
I actually thought it was going to be about setting boundaries in the workplace or something similar and was amused that it was actually about drawing lines. I'm interested in both topics, so it was still a win!
It is a less refined version of the techniques in the article; the approach differs in that it uses post-processing to provide the line thickness.
Since the line drawing function in X doesn't do any anti-aliasing, the result would likely be very consistent across different environments, even for fat lines. The problems are, to some extent, differences of opinion about how to make lines look pretty.
Sorry, I meant that it is buggy - specifically the line\* drawing functions. I filled a 22x22 window with segmented lines and it blanked the window. A line across the leftmost pixels of the window (x=0) behaves differently than anywhere else. And such.
At least I think so, as there is no hard spec for X11.