[QUOTE=bsabiston;1288882]So – the lines I’m drawing are in 3D, or maybe you would call them 2.5D? …
As far as ordering, I am not quite sure how I will deal with that yet.
Right now I’m drawing all opaque objects/lines first and then drawing all the transparent ones with depth write turned off.[/QUOTE]
Ok. Sounds like 2.5D. 2D objects, but they have an implicit ordering or layering to them.
What I mean by translucency piling up is that if I draw a half-transparent line, and it crosses over itself, then the intersection is twice as opaque as the rest.
Ok. That’s true if you use additive blending and your incoming fragments aren’t opaque. However, you have control over both the blend functions and your fragment alphas.
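To make the pile-up concrete, here's a minimal CPU sketch (the `blend_over` name is mine, not GL's) of conventional alpha blending, i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), applied to a single channel:

```c
#include <math.h>

/* Standard "over" blending, i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA),
   applied on the CPU to one color channel. */
static float blend_over(float src, float src_alpha, float dst) {
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

Blending a white (1.0) line at 50% alpha over a black background once gives 0.5; blending it again where the line crosses itself gives 0.75, which is exactly the "twice as opaque" intersection described above.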
This is not a problem so much when the line just crosses itself, but rather when triangles in tight curves overlap and create unsightly spikes. (see first two images attached). I’d like to find a way to prevent that.
Even with that choice of blend function and fragment alphas, this is only going to happen when the triangles in the mesh used to rasterize your non-overlapping line feature overlap each other. One solution is to modify how you’re generating the vertices of this strip of triangles such that the triangles don’t overlap. That’s not really too hard. First, compute your displaced vertex positions in a 2D plane. Then connect them with triangles such that no two triangles overlap. No need for a stencil buffer to fix that part of your problem.
If I am not antialiasing, then I can use a stencil buffer and then each brushstroke can use the stencil to make sure it doesn’t overwrite pixels that have already been drawn into. But with antialiasing, it is a problem, because you DO want the feathered edges to overlap. Otherwise the areas that fade to nothing will still mark the stencil and you get black edges where there is any crossover (see image 3).
Right. I’m starting to see what you might want here. It sounds like what you really want is not additive blending (what it sounds like you’re doing now) but more like a “max” blending (which is supported). Just as a thought exercise, imagine this: start with a black scratch framebuffer off to the side which is the same size as your system framebuffer. Before rendering a line feature, forget what color you plan to draw that line feature. Instead, just render the line (using your trimesh) into the scratch framebuffer with an intensity value of 0…1 using MAX blending (0 = transparent, 1 = opaque). Then when you’re done, you’ve got 100% intensity in the core of the line (even where the line overlaps itself) and you’ve got nice fade-in partial intensity along the edges of the line (even where the line overlaps itself, with the fade-in region intensities combining using the MAX operation, avoiding the effect you describe above).

Now that you have this intensity buffer (the scratch framebuffer), you can go back and shade in all the pixels of your line into your system framebuffer. Example: for 0% intensity pixels, you get 0% * line_color + 100% * background, and for 100% intensity pixels you get 100% * line_color + 0% * background.
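A CPU sketch of that two-step idea, with names of my own choosing (the GL side would be glBlendEquation(GL_MAX) for step 1; step 2 is the lerp written out above):

```c
#include <math.h>

/* Step 1: accumulate line coverage into a scratch "intensity" buffer
   with MAX blending (glBlendEquation(GL_MAX) on the GPU). */
static float blend_max(float dst_intensity, float src_intensity) {
    return dst_intensity > src_intensity ? dst_intensity : src_intensity;
}

/* Step 2: shade the line into the system framebuffer using the
   accumulated intensity as the mix factor. */
static float shade(float intensity, float line_color, float background) {
    return intensity * line_color + (1.0f - intensity) * background;
}
```

Where two feathered edge fragments of 0.4 coverage overlap, additive blending would give 0.8 (the spike), while MAX keeps the intensity at 0.4, so the final shade pass produces a uniform edge.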
Leaving aside for a second that this suggests multipass, does this sound like it’d provide the look you want?
If so, then you can spend some cycles trying to think of how to make this single pass. For instance, one off-the-cuff idea is to do this “intensity buffer” generation in the alpha channel of the current framebuffer (with MAX blending). Then when blending on your line, you could potentially use the destination alpha as your source blend factor.
That’s just off-the-cuff. There are no doubt better options. I don’t like this much because there are render state changes and full-screen clears of the alpha channel between each line (though if you think about it, you really don’t need a full-screen clear). However, you could batch all of the lines that are rendered with the same color in the same layer into this same pass, which would minimize the number of state changes and clears to just the number of line color layers you have.
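A CPU sketch of that destination-alpha variant (the `Pixel` struct and function names are mine; on the GPU, step 1 would be alpha-only writes via glColorMask with glBlendEquation(GL_MAX), and step 2 would be glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA)):

```c
#include <math.h>

typedef struct { float r, a; } Pixel;  /* one color channel plus alpha, for brevity */

/* Step 1: MAX-blend line coverage into destination alpha only. */
static void write_coverage(Pixel *dst, float coverage) {
    if (coverage > dst->a) dst->a = coverage;
}

/* Step 2: blend the line color in, using destination alpha as the factor,
   i.e. glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA). */
static void shade_line(Pixel *dst, float line_color) {
    dst->r = line_color * dst->a + dst->r * (1.0f - dst->a);
}
```

Overlapping feathered fragments leave the stored alpha at the max of their coverages, and the shade pass then mixes the line color over the existing framebuffer color by exactly that amount.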
Another option is to use stencil to cut out the difficult areas and handle those specially. Another is to use different depths for opaque vs. translucent parts so that opaque parts always overwrite translucent parts (though you may still need something special where translucent parts overlap, such as an increasing depth slope on the translucent edges so that the right edge fragments “win” the depth test in the right areas, giving you a symmetric look where they meet).
So basically, I was thinking that if I could access an alpha buffer from the fragment shader, then as I draw lines I could write the current line alpha values to the buffer. And the shader could look at the existing values and decide whether to write or not.
Yeah, what you’re getting at is programmable blending. I believe you can do a limited form of this with texture barriers (e.g. NV_texture_barrier). Basically, within a single fragment shader invocation, read the current value, do your blending, and then write out the result. However, this doesn’t support multiple reads/writes of the same pixel/texel within a pass. For that kind of thing you need to get into writing shaders with side effects (possibly using Image Load Store).
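As a CPU analogy of the read-modify-write that a texture barrier permits (the `Texel` struct and blend rule here are mine, purely illustrative): sample the current pixel, apply an arbitrary rule, write the result back. The example rule keeps whichever fragment has the higher coverage, color and alpha together, which fixed-function blending can't express since GL_MAX would max each channel independently:

```c
#include <math.h>

typedef struct { float r, a; } Texel;  /* one color channel plus coverage */

/* Custom blend rule, evaluated per-fragment in the shader: the
   higher-coverage fragment wins outright, keeping its color and
   alpha together (not per-channel like fixed-function GL_MAX). */
static void programmable_blend(Texel *dst, Texel src) {
    if (src.a > dst->a) *dst = src;
}
```

The limitation mentioned above applies: with texture barriers this is only safe when each pixel is touched once between barriers; repeated hits on the same texel need image load/store with explicit synchronization.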
edit: I don’t know why the images are so small, does it resize them automatically? I uploaded much larger ones.
When you upload them to forums, yes it seems to (probably to save space). However, if you post an image link to a picture on another site, IIRC it doesn’t limit the size.