Modern OpenGL Line Stipple Issue

Hi,

I am trying to implement line stippling myself because the line stipple API (glLineStipple) is deprecated. It works fine for lines and polygons, but not for circles. Is there a problem with the shaders below? The pattern shows up correctly for lines and polygons, and when I zoom in on a circle I can see the pattern there too, but at normal zoom it does not appear. I don't know what the issue is. Please let me know; thanks in advance.

Below is my code for the vertex and fragment shaders.

Vertex shader:

const char* vertexShaderSource =
    "#version 420 core\n"
    "layout (location = 0) in vec4 aPos;\n"
    "flat out vec3 startPos;\n"
    "out vec3 vertPos;\n"
    "uniform mat4 mvp;\n"
    "void main()\n"
    "{\n"
    "    vec4 pos = mvp * vec4(aPos.x, aPos.y, 0.0, 1.0);\n"
    "    gl_Position = pos;\n"
    "    vertPos = pos.xyz / pos.w;\n"
    "    startPos = vertPos;\n"
    "}\n";

Fragment shader:

const char* fragmentShaderSource =
    "#version 420 core\n"
    "flat in vec3 startPos;\n"      // segment start position, in NDC
    "in vec3 vertPos;\n"            // interpolated fragment position, in NDC
    "uniform vec4 my_color;\n"
    "uniform vec2 u_resolution;\n"  // viewport size in pixels
    "uniform uint u_pattern;\n"     // 16-bit stipple pattern
    "uniform float u_factor;\n"     // stipple stretch factor
    "uniform int stipple;\n"        // 1 = stippling enabled
    "out vec4 color;\n"
    "void main()\n"
    "{\n"
    "    if (stipple == 1)\n"
    "    {\n"
    "        // offset from the segment's start vertex, converted from NDC to pixels\n"
    "        vec2 dir = (vertPos.xy - startPos.xy) * u_resolution / 2.0;\n"
    "        float dist = length(dir);\n"
    "        // select one of the 16 pattern bits; discard where the bit is 0\n"
    "        uint bit = uint(round(dist / u_factor)) & 15U;\n"
    "        if ((u_pattern & (1U << bit)) == 0U)\n"
    "            discard;\n"
    "    }\n"
    "    color = my_color;\n"
    "}\n";
		
Below is the code where I am setting the uniform values.

if (vModel->getLineStipple() > 1)
{
    glUniform1i(stip, 1);

    GLint loc_res     = glGetUniformLocation(shaderProgram, "u_resolution");
    GLint loc_pattern = glGetUniformLocation(shaderProgram, "u_pattern");
    GLint loc_factor  = glGetUniformLocation(shaderProgram, "u_factor");

    GLushort pattern = vModel->getLineStipple();
    GLfloat factor = vModel->getStippleFactor();

    glUniform1ui(loc_pattern, pattern);
    glUniform1f(loc_factor, factor);
    glUniform2f(loc_res, _scrWidth, _scrHeight);
}
Then the draw function:
...
switch (vModel->getDrawMode())
{
case 0: // GL_POINTS
    //glPointSize(6.0);
    glDrawArrays(GL_POINTS, 0, vModel->getVertices().size());
    break;

case 1: // GL_LINES
    //glEnable(GL_LINE_SMOOTH);
    //GLint range[2];
    //glGetIntegerv(GL_ALIASED_LINE_WIDTH_RANGE, range);
    glDrawArrays(GL_LINES, 0, vModel->getVertices().size());
    break;

case 2: // GL_LINE_LOOP
    glDrawArrays(GL_LINE_LOOP, 0, vModel->getVertices().size());
    break;

case 3: // GL_LINE_STRIP
    glDrawArrays(GL_LINE_STRIP, 0, vModel->getVertices().size());
    break;
}

OpenGL doesn’t have “circles”. The issue is with line strips where the individual segments are short, and arises because the distance is measured from the start of each individual line segment, not from the start of the strip. If the segments are long compared to the repeat length, you probably won’t notice this; if they’re shorter than the repeat length, you will notice it. A circle drawn as a GL_LINE_LOOP of many short segments is exactly the latter case, which is why the pattern only appears once you zoom in far enough for each segment to cover more than one repeat on screen.

There’s no way to avoid this issue using the approach you’re taking (i.e. putting all of the logic in the vertex and fragment shaders). Each line segment is rendered independently without knowledge of preceding segments, yet the starting offset must take into account the total length of the preceding segments in order to match the legacy behaviour (glLineStipple).

Calculating the accumulated length either has to be done on the CPU or using a multi-pass compute shader (search for “parallel prefix sum” for a GPU-friendly algorithm). However you do it, there’s likely to be a significant performance cost due to synchronisation unless you make the (significant) effort to avoid that.
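For a sense of what the CPU-side option involves, here is a minimal sketch (the Vec2 type and computeArcLengths helper are hypothetical, not taken from the code above): walk the strip once, accumulate segment lengths, and upload the running total as an extra per-vertex attribute that the fragment shader uses in place of its per-segment distance.

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };  // stand-in for whatever vertex type vModel stores

// Hypothetical helper: accumulated distance from the strip's first vertex
// to each vertex -- a serial prefix sum over the segment lengths.
std::vector<float> computeArcLengths(const std::vector<Vec2>& verts)
{
    std::vector<float> acc(verts.size(), 0.0f);
    for (std::size_t i = 1; i < verts.size(); ++i)
        acc[i] = acc[i - 1] + std::hypot(verts[i].x - verts[i - 1].x,
                                         verts[i].y - verts[i - 1].y);
    return acc;  // upload as a per-vertex attribute alongside the positions
}

Note that for the stipple to advance in pixels (matching glLineStipple), the lengths really need to be measured in window space, so this would have to be recomputed whenever the MVP or viewport changes; model-space lengths only approximate that at a fixed zoom.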

Basically, stippling was something which was relatively easy to do on older hardware. Modern hardware has vastly better performance through massive parallelism. Most of the rendering pipeline is readily parallelisable, but stippling isn’t.

Personally, I’d suggest seeing if screen-space stippling is sufficient for your purposes: instead of calculating the distance from a vertex, just use either gl_FragCoord.x or gl_FragCoord.y depending upon whether the line segment is closer to horizontal or vertical (i.e. whether the x or y component of abs(dir.xy) is greater). This will result in artefacts when the slope changes between horizontal and vertical, but they may be less noticeable than what you have now.
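As a rough illustration, the stipple test in the fragment shader above could be rewritten along these lines (this is only a sketch of the screen-space idea, reusing the u_pattern and u_factor uniforms and the startPos/vertPos varyings already shown):

// Screen-space variant of the stipple test, inside main().
// gl_FragCoord is already in window (pixel) coordinates, so
// u_resolution is no longer needed here.
vec2 dir = vertPos.xy - startPos.xy;  // segment direction, still from the VS
float dist = (abs(dir.x) > abs(dir.y)) ? gl_FragCoord.x : gl_FragCoord.y;
uint bit = uint(round(dist / u_factor)) & 15U;
if ((u_pattern & (1U << bit)) == 0U)
    discard;

The pattern is then anchored to the screen rather than to the geometry, so it stays consistent across segment boundaries but will slide along the line when the view pans.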

It’s parallelizable when it’s hardware-accelerated.

How do I use gl_FragCoord.x or gl_FragCoord.y in my shader? Can you give me a code sample?
