Anti-Aliased Lines

P.S. My code looks like this:

Vertex Program:


varying vec4 color;
varying vec4 posix;
uniform vec2 viewport;

void main(void)
{
   // Clip-space position, also passed on to the fragment shader.
   posix = ftransform();
   gl_Position = posix + vec4(0.0, 0.0, 0.0, 0.001);

   // Eye-space position for user clip planes.
   vec4 v4 = gl_ModelViewMatrix * gl_Vertex;
   gl_ClipVertex = v4;

   color = gl_Color;
}

Fragment Program:


varying vec4 color;
varying vec4 posix;
uniform vec2 viewport;

void main(void)
{
   // Reconstruct a window-space coordinate from the interpolated clip position.
   vec2 tmp_scr1 = vec2(viewport.x/2.0, viewport.y/2.0);
   vec2 tmp_scr2 = vec2(viewport.x/2.0 + 0.5, viewport.y/2.0 + 0.5);
   vec2 fake_FragCoord = (posix.xy/posix.w)*tmp_scr1 + tmp_scr2;

   vec4 lineColor = color;

   // Window-space position of the line, from the same interpolated posix.
   vec2 ScreenSizeHalf = vec2(viewport.x/2.0, viewport.y/2.0);
   vec2 posix1 = ((posix.xy/posix.w) + 1.0)*ScreenSizeHalf;
   vec2 fpos = fake_FragCoord.xy;

   // Fade alpha with the distance from the line's center.
   float dist = distance(fpos, posix1);
   lineColor.a = 1.0 - dist;
   gl_FragColor = lineColor;
}

Basic multiplicative and additive blending will ignore the alpha value you compute (the one that produces the anti-aliasing effect). So either multiply gl_FragColor.rgb by gl_FragColor.a, or set a blend func that scales the blending factors by the source alpha component.

Long story short, when doing additive blending,
gl_FragColor.rgb *= gl_FragColor.a; // at end of frag shader

when doing multiplicative blending:
gl_FragColor.rgb = gl_FragColor.rgb*gl_FragColor.a + vec3(1.0,1.0,1.0)*(1.0 - gl_FragColor.a); // again at end of frag shader. Can be optimized.

I just copied Ilian's sample code (adding .0 to the integer values) and it worked out of the box.
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=249059#Post249059

Hmm, you are right. I removed the fake fragcoord code that I added (to make things work on ATI) and the code works much better using gl_FragCoord.

So… can anyone tell me what is wrong with my fake fragCoord code? I got it from another thread here on the boards.

I have tried both of the following (and neither works correctly):


vec3 fake_FragCoord;
fake_FragCoord.x = ((posix.x/posix.w)*0.5+0.5)*viewport.x;
fake_FragCoord.y = ((posix.y/posix.w)*0.5+0.5)*viewport.y;
fake_FragCoord.z = 0.5*posix.z/posix.w+0.5;


vec2 tmp_scr1=vec2(viewport.x/2.0,viewport.y/2.0);
vec2 tmp_scr2=vec2(viewport.x/2.0+0.5,viewport.y/2.0+0.5);
vec2 fake_FragCoord = (posix.xy/posix.w)*tmp_scr1+tmp_scr2;

Well, when you use the fake frag-coord like this for lines, you will always get the same points, because ftransform's result is interpolated. You won't get any real anti-aliasing. Why? Because a line has only 2 endpoints, so the interpolation runs between those 2 ends, not between the 4 corners of an actual quad. A bit complicated; sorry for my poor English.
Make quads, not war! © I would recommend using simple textured quads for this; then you won't have problems with correct attribute interpolation.

Plus, the antialiasing quality will be perfect.

I understand why using a quad would work better for interpolation but I am confused as to how I get the vertices to send to the card?

All my lines are stored in large arrays and drawn using either glDrawElements() or glMultiDrawArrays() (I used both in different spots for different things). Now adding 2 more vertices per line to the array (to form a quad) is not a big deal… but which 2 vertices do I add?

Do the quads not need to be screen-aligned and 2 pixels wide? In that case, as you rotate the scene (or move the camera in any way), would you not need to constantly update the vertex array? Won't that completely KILL performance?

Am I missing something?

The approach is rather simple.
Imagine you have only 1 line segment (for simplicity). You want to draw it as a quad 2 pixels wide (independent of zooming).
You make a quad where the first 2 vertices hold the position of the 1st segment vertex and the last 2 hold the 2nd segment vertex. But how do you make the offsets?
With each of these 2 vertices you store a side direction (equal to the segment's normal, with opposite signs for left and right), and in the vertex shader you displace the position in that direction (kept in the plane perpendicular to the view direction) by a distance proportional to the W coordinate of the line's vertex (it is 1 for ortho and Z for perspective projection), also multiplied by some screen scale factor (how many units fit in one pixel on the near plane).
For your application, where all lines seem to be planar and you have an ortho projection, this complicated path can be simplified, though.
There are issues with texturing, but they can be handled via perspective projection, or via barycentric coordinates (if you don't want your texture to give a false distance appearance, which is caused by the perspective transform).

I don't suppose you have any example code? Assuming not, let me see if I have the right idea, as I am confused in some spots.

First, assume I have a line segment (0,1,0) to (0,-1,0) that I want to draw.

Ok, given that, I now have my two vertices, which I will duplicate to make my quad. However, what do you mean by the segment normal? It is underdefined, no?

If you think of the line segment as a vector, then you could consider it the normal of an infinite plane in space. In that case, the normal to the segment would be any vector that lies in that plane, correct? So which one do I choose?

Once I have my normal, I would draw a quad with the following vertices and normals (note: I would really use a VBO, but let's keep this simple for now)…


glBegin(GL_QUADS);
      // Note: glNormal3f() sets the current normal for the vertices that
      // follow it, so it must come before each glVertex3f() it applies to.
      glNormal3f( ??,   ??,  ?? );
      glVertex3f( 0.0, -1.0, 0.0 );
      glNormal3f( ??,   ??,  ?? );
      glVertex3f( 0.0, -1.0, 0.0 );

      glNormal3f( ??,   ??,  ?? );
      glVertex3f( 0.0,  1.0, 0.0 );
      glNormal3f( ??,   ??,  ?? );
      glVertex3f( 0.0,  1.0, 0.0 );
glEnd();

Then in my vertex shader, I use basically the same code I have been using, but I add some code to read the vertex and normal, project the normal into the plane perpendicular to the view direction, and offset the vertex along that normal by 2 pixels?

and everything else stays the same?

Hello.
Yes, if you're working in 3D, you have to make your segment normals also lie in a plane parallel to the projection plane. Doing so makes your quads project correctly.
If you have the general case (a moving camera and a perspective, non-centered projection), you may end up doing the homogeneous division in the vertex shader and offsetting in the direction perpendicular to the segment's screen-space direction. You can do it entirely in the VS.
Good reading is the nVidia example about volumetric lines. You don't need all of the tricky volumetric texturing, but you can see there how they build the quads in the VS.
http://developer.download.nvidia.com/SDK/9.5/Samples/3dgraphics_samples.html#cg_VolumeLine