At the beginning of the program I bind a fragment shader for all geometry in the scene. When I enable GL_BLEND and turn on GL_LINE_SMOOTH, nothing happens. Is OpenGL antialiasing somehow influenced by the shader? I can’t find any solution to this.
I’d be thankful for your help!
ATI chip? I think I’ve read somewhere on this forum that this combination didn’t work on an ATI implementation.
Yup, read here:
Yes, ATI indeed. Thanks for the link, Relic!
Actually, I don’t suffer from any performance hits when using GL_LINE_SMOOTH. It simply doesn’t work at all when I use it with my fragment shader!
I’ve got a question which might be important: when does GL’s antialiasing take place in the pipeline? Together with all the other per-pixel operations, or somewhere further down (where nasty little shaders can’t get at it)?
Fragment shaders just calculate colors and depth values per fragment. The coverage calculation and blending needed for final antialiased pixels follows later in the pipe.
So, if I understand correctly, shaders shouldn’t have any impact on how antialiasing works in OpenGL? If so, it has to be some ATI driver issue.
Anyway, just to be sure, here is how I use antialiasing (it works fine without the fragment program bound):
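A typical setup along these lines looks roughly like the following (a sketch, not necessarily the exact original snippet; it assumes standard source-alpha blending and a current GL context, with glLineWidth(3.0) as mentioned in the reply below):

```c
/* Sketch of a typical GL_LINE_SMOOTH setup (assumes a current GL context). */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* smooth lines need alpha blending */
glEnable(GL_LINE_SMOOTH);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glLineWidth(3.0f); /* width wider than 1.0 is not guaranteed by the spec */

/* ... draw the lines ... */

glDisable(GL_LINE_SMOOTH);
glDisable(GL_BLEND);
```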
Correct. If the shaders write the expected alpha values this should just work.
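As a minimal sketch of what “expected alpha values” means in practice (the lineColor uniform here is hypothetical): the shader just writes a normal color with a sensible alpha, and the coverage value computed by line smoothing is multiplied into that alpha after shading, before blending.

```glsl
// Minimal fragment shader sketch: output a color with alpha intact so
// line-smooth coverage can modulate it and blending can do its work.
uniform vec3 lineColor; // hypothetical uniform for illustration
void main()
{
    gl_FragColor = vec4(lineColor, 1.0); // coverage is applied after shading
}
```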
In the glLineWidth(3.0) case below you need to check whether the OpenGL implementation supports that line width by querying
GL_SMOOTH_LINE_WIDTH_RANGE (and GL_SMOOTH_LINE_WIDTH_GRANULARITY if you’re interested).
The spec only requires minimum and maximum smooth line widths of 1.0, and I’ve seen implementations that don’t support wider lines.