Having problems converting a Canny algorithm to a shader

Not sure this is the right forum for GLSL, let me know if not.

I am currently working on an implementation of Canny edge detection to run in a shader.
I’m using the Processing language.
First I apply a Gaussian blur, then a Sobel filter. The Sobel pass stores the horizontal and vertical gradient values of each pixel in the r and g channels respectively, and the blue channel is used for the magnitude (intensity).

It works fine in Java/Processing code, but it produces issues when the code is converted to a GLSL shader.

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PROCESSING_TEXTURE_SHADER

uniform sampler2D texture;
uniform float mult;

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform vec2 resolution;

float map(float value, float min1, float max1, float min2, float max2) {
  return min2 + (value - min1) * (max2 - min2) / (max1 - min1);
}

void main(void) {
  float x = 1.0 / resolution.x;
  float y = 1.0 / resolution.y;

  float PI = 3.14159265359;
  vec4 horizEdge = vec4( 0.0 );
  horizEdge -= texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y - y ) ) * 1.0;
  horizEdge -= texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y     ) ) * 4.0;
  horizEdge -= texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y + y ) ) * 1.0;
  horizEdge += texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y - y ) ) * 1.0;
  horizEdge += texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y     ) ) * 4.0;
  horizEdge += texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y + y ) ) * 1.0;

  vec4 vertEdge = vec4( 0.0 );
  vertEdge -= texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y - y ) ) * 1.0;
  vertEdge -= texture2D( texture, vec2( vertTexCoord.x    , vertTexCoord.y - y ) ) * 4.0;
  vertEdge -= texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y - y ) ) * 1.0;
  vertEdge += texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y + y ) ) * 1.0;
  vertEdge += texture2D( texture, vec2( vertTexCoord.x    , vertTexCoord.y + y ) ) * 4.0;
  vertEdge += texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y + y ) ) * 1.0;

  vec3 edge = sqrt((horizEdge.rgb * horizEdge.rgb) + (vertEdge.rgb * vertEdge.rgb));
  float x1 = (horizEdge.r+horizEdge.g+horizEdge.b)/3.0;
  float y1 = (vertEdge.r+vertEdge.g+vertEdge.b)/3.0;
  float m = 0.5;
  float n = 1.0;
  float x2 = map(x1, 0.0-m, m, 0.0, n);
  float y2 = map(y1, 0.0-m, m, 0.0, n);
  float sum = (edge.r+edge.g+edge.b)/3.0;
  //gl_FragColor = vec4(n-x2,n-y2,1.0-sum*mult,1.0);
  gl_FragColor = vec4(x2,y2,1.0-sum*mult,1.0);
  //gl_FragColor = vec4(1.0-x1,1.0-y1,1.0-sum*mult,1.0);
}

which produces this image.

I then use the Canny thinning step (non-maximum suppression) to thin the lines; it’s not perfect, but it’s OK for now.

void getNeighboursMax2(int x, int y, float t) {

    boolean k = false;
    int p = x + y * img.width;
    float c = brightness(sobel.pixels[p]);
    float g = gradient[x][y];
    // alternatives:
    // float g = red(sobelG.pixels[p]);
    // float g = atan2(red(sobely.pixels[p]), red(sobelx.pixels[p]));

    float q = 255;
    float r = 255;
    int p1 = 0;
    
   //angle 0
   if ((0 <= g && g < (22.5))){
        if((x+1)+(y)*img.width<img.pixels.length&&(x+1)+(y)*img.width>=0)q = brightness(sobel.pixels[(x+1)+(y)*img.width]);
        if((x-1)+(y)*img.width<img.pixels.length&&(x-1)+(y)*img.width>=0)r = brightness(sobel.pixels[(x-1)+(y)*img.width]);
    //angle 45
    }else if ((22.5) <= g && g < (67.5)){
        if((x+1)+ (y+1)*img.width<img.pixels.length&&(x+1)+ (y+1)*img.width>=0)q = brightness(sobel.pixels[(x+1)+ (y+1)*img.width]);
        if((x-1)+ (y-1)*img.width<img.pixels.length&&(x-1)+ (y-1)*img.width>=0)r = brightness(sobel.pixels[(x-1)+ (y-1)*img.width]);
    //angle 90
    }else if ((67.5) <= g && g< (112.5)){
        if((x)+ (y+1)*img.width<img.pixels.length&&(x)+ (y+1)*img.width>=0)q = brightness(sobel.pixels[(x)+ (y+1)*img.width]);
        if((x)+ (y-1)*img.width<img.pixels.length&&(x)+ (y-1)*img.width>=0)r = brightness(sobel.pixels[(x)+ (y-1)*img.width]);
    //angle 135
    }else if ((112.5) <= g && g < (157.5)){
        if((x-1)+(y+1)*img.width<img.pixels.length&&(x-1)+(y+1)*img.width>=0)q = brightness(sobel.pixels[(x-1)+(y+1)*img.width]);
        if((x+1)+(y-1)*img.width<img.pixels.length&&(x+1)+(y-1)*img.width>=0)r = brightness(sobel.pixels[(x+1)+(y-1)*img.width]);
    //angle 180
    }else if ((157.5) <= g && g < (202.5)){
        if((x-1)+(y)*img.width<img.pixels.length&&(x-1)+(y)*img.width>=0)q = brightness(sobel.pixels[(x-1)+(y)*img.width]);
        if((x+1)+(y)*img.width<img.pixels.length&&(x+1)+(y)*img.width>=0)r = brightness(sobel.pixels[(x+1)+(y)*img.width]);
    //angle 225
    }else if ((202.5) <= g && g < (247.5)){
        if((x-1)+(y-1)*img.width<img.pixels.length&&(x-1)+(y-1)*img.width>=0)q = brightness(sobel.pixels[(x-1)+(y-1)*img.width]);
        if((x+1)+(y+1)*img.width<img.pixels.length&&(x+1)+(y+1)*img.width>=0)r = brightness(sobel.pixels[(x+1)+(y+1)*img.width]);
    //angle 270
    }else if ((247.5) <= g && g < (292.5)){
        if((x-1)+(y+1)*img.width<img.pixels.length&&(x-1)+(y+1)*img.width>=0)q = brightness(sobel.pixels[(x-1)+(y+1)*img.width]);
        if((x+1)+(y-1)*img.width<img.pixels.length&&(x+1)+(y-1)*img.width>=0)r = brightness(sobel.pixels[(x+1)+(y-1)*img.width]);
    //angle 315
    }else if ((292.5) <= g && g < (337.5)){
        if((x)+(y-1)*img.width<img.pixels.length&&(x)+(y-1)*img.width>=0)q = brightness(sobel.pixels[(x)+(y-1)*img.width]);
        if((x)+(y+1)*img.width<img.pixels.length&&(x)+(y+1)*img.width>=0)r = brightness(sobel.pixels[(x)+(y+1)*img.width]);
    //angle 360
    }else if ((337.5) <= g && g <= (360)){
        if((x+1)+(y)*img.width<img.pixels.length&&(x+1)+(y)*img.width>=0)q = brightness(sobel.pixels[(x+1)+(y)*img.width]);
        if((x-1)+(y)*img.width<img.pixels.length&&(x-1)+(y)*img.width>=0)r = brightness(sobel.pixels[(x-1)+(y)*img.width]);
    }
    //println(q,r);
    if (q < t && brightness(sobel.pixels[x+y*img.width]) <= q
     && r < t && brightness(sobel.pixels[x+y*img.width]) <= r) {
      sobelMax.pixels[x+y*img.width] = 0;
    } else {
      sobelMax.pixels[x+y*img.width] = color(255);
    }
}

and this produces the following image.

Again, not perfect, but a great start; the lines can be tweaked by changing some parameters. However, when I try converting the code to a GLSL shader I get issues I’m unsure how to resolve.

// Adapted from:
// http://callumhay.blogspot.com/2010/09/gaussian-blur-shader-glsl.html

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PROCESSING_TEXTURE_SHADER

uniform sampler2D texture;
uniform float thresh;

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform vec2 resolution;

float map(float value, float min1, float max1, float min2, float max2) {
  return min2 + (value - min1) * (max2 - min2) / (max1 - min1);
}

void main(void) {

  float x = 1.0 / resolution.x;
  float y = 1.0 / resolution.y;
  float PI = 3.14159265359;

  float q = 1.0;
  float r = 1.0;
  vec4 p1 = vec4( 0.0 );
  vec4 p2 = vec4( 0.0 );
  float m = 2.0;
  float n = 1.0;
  vec4 mCol = texture2D( texture, vec2( vertTexCoord.x, vertTexCoord.y ) );
  //float x1 = map(mCol.r,0.0,m,0.0-n,n);
  //float y1 = map(mCol.g,0.0,m,0.0-n,n);
  float x1 = mCol.r;
  float y1 = mCol.g;
  float a1 = atan(y1, x1);
  float myGrad = map(a1, 0.0-PI, PI, 0.0, 360.0);
  float myCol = mCol.b;

		if ((0.0 <= myGrad && myGrad < (22.5))){
			p1 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y ) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y ) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 45
		else if ((22.5) <= myGrad && myGrad < (67.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y + y) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y - y) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 90
		else if ((67.5) <= myGrad && myGrad< (112.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x, vertTexCoord.y + y) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x, vertTexCoord.y - y) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 135
		else if((112.5) <= myGrad && myGrad < (157.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y + y) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y - y) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 180
		else if ((157.5) <= myGrad && myGrad < (202.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y ) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y ) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 225
		else if ((202.5) <= myGrad && myGrad < (247.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y - y) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y + y) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		
		//angle 270
		else if ((247.5) <= myGrad && myGrad < (292.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y + y) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y - y) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 315
		else if ((292.5) <= myGrad && myGrad < (337.5)){
			p1 = texture2D( texture, vec2( vertTexCoord.x, vertTexCoord.y - y) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x, vertTexCoord.y + y) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		//angle 360
		else if ((337.5) <= myGrad && myGrad <= (360.0)){
			p1 = texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y ) ) * 1.0;
			p2 = texture2D( texture, vec2( vertTexCoord.x - x, vertTexCoord.y ) ) * 1.0;
			q = p1.b;
			r = p2.b;
		}
		
		if((r<thresh&&myCol<=r)&&(q<thresh&&myCol<=q)){
			//gl_FragColor = vec4(1.0-myCol,1.0-myCol,1.0-myCol,1.0);
			gl_FragColor = vec4(0.0,0.0,0.0,1.0);
		}else{
			gl_FragColor = vec4(1.0 ,1.0,1.0,1.0);
		}
		
}

this produces the following output.

I’m fairly new to shaders (I’ve only been using them for a month), so I’m not sure if there is something obvious I’m missing.

Thanks in advance.

Apologies, I can only seem to post one image as a new user…

Processing/Java output

Screen Shot 10-29-20 at 04.10 PM

shader output
Screen Shot 10-29-20 at 04.09 PM

Are the texture filters set to GL_NEAREST? Alternatively, you might want to use texelFetch instead of texture2D.

The texture filters are set to whatever the default is, as I haven’t set them to anything.

What’s the difference between texelFetch and texture2D?

The defaults are GL_NEAREST_MIPMAP_LINEAR for GL_TEXTURE_MIN_FILTER and GL_LINEAR for GL_TEXTURE_MAG_FILTER. Both of these will perform linear interpolation if the sample location isn’t in the centre of the texel.

texelFetch retrieves the value of a specific texel using integer texture coordinates (i.e. array indices).

Thanks for your reply; it seems that’s probably what I’m looking for. Would you mind providing an example of how to convert
texture2D( texture, vec2( vertTexCoord.x + x, vertTexCoord.y ) )

so that it makes use of texelFetch instead?

I have tried implementing it in my code with the help of examples I’ve found online, but it results in an all-white image.
I’ve tried
texelFetch(texture2D, ivec2( vertTexCoord.x, vertTexCoord.y - y ), 0);
and
texelFetch(texture2D, ivec2( vec2(vertTexCoord.x, vertTexCoord.y - y )), 0);

Given that you were previously passing vertTexCoord to texture2D, I’m assuming that it’s normalised (in the range [0,1]), so you’d need to either change the attribute data or multiply by resolution. For image processing where output pixels map directly to input pixels, it’s typical to use ivec2(gl_FragCoord.xy), as those are un-normalised window coordinates.
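
For example, a rough (untested) sketch of the centre sample, assuming the output framebuffer is the same size as the input texture and a GLSL version that actually provides texelFetch (desktop 1.30+ or ES 3.0):

// Sketch only. The sampler is still called "texture" as in your shader; with a
// newer #version you may need to rename it, since texture() is also a built-in
// function name from GLSL 1.30 onwards.
ivec2 coord = ivec2(gl_FragCoord.xy);      // integer index of the pixel being written
vec4 mCol = texelFetch(texture, coord, 0); // exact texel at mip level 0, no filtering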

Yes, these are normalised 0-1. ivec2(gl_FragCoord.xy) was the example I saw online too; however, in the code I’m not only looking for the current pixel location, but am offsetting the x and y of the required pixel depending on the angle produced by the atan function. Can I still use gl_FragCoord.xy for this, and if so, how does it know how to offset? Alternatively, how do I change the attribute data?

Thanks. Again I’m really new to shaders.

gl_FragCoord.xy holds the window coordinates of the output pixel (sampled at the pixel centre), so ivec2(gl_FragCoord.xy) gives the integer indices where gl_FragColor will be stored. For texelFetch, the offsets should be ±1 (whole texels) rather than ±1/resolution.
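
So the neighbour lookups in your suppression shader would become something like this (again just an untested sketch, keeping the sampler name "texture" and the offsets from your "angle 0" and "angle 45" branches; note that texelFetch does not clamp, so at the image border you would want to clamp the coordinates yourself):

ivec2 coord = ivec2(gl_FragCoord.xy);

// "angle 0" branch: offsets are whole texels, not 1.0/resolution
vec4 p1 = texelFetch(texture, coord + ivec2( 1, 0), 0);
vec4 p2 = texelFetch(texture, coord + ivec2(-1, 0), 0);

// "angle 45" branch would be:
// p1 = texelFetch(texture, coord + ivec2( 1,  1), 0);
// p2 = texelFetch(texture, coord + ivec2(-1, -1), 0);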

Typically, the application stores the values in a buffer, binds the buffer to GL_ARRAY_BUFFER, then calls glVertexAttribPointer to associate the data with a vertex shader input. The vertex shader may copy the values directly to the output (vertTexCoord) or it may transform it first. It may have separate attributes for the position (written to gl_Position) and texture coordinates, or it may derive both from a single attribute. Either way, texelFetch needs coordinates which range between 0 and width-1 or height-1 (inclusive), as if you were indexing a 2D array.

But I’ve just noticed the #ifdef GL_ES; if you’re targeting ES 2.x, that doesn’t have texelFetch, so you’ll have to stick with texture2D and just ensure that you’re using GL_NEAREST for the filter modes. You’ll also have to accept that ES 2.x doesn’t guarantee much in the way of arithmetic precision for fragment shaders.
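
If you do stay with texture2D, one workaround (an untested sketch; it assumes your resolution uniform matches the texture size, and GL_NEAREST remains the more robust fix) is to build every coordinate from the centre of the current texel, so that even a linear filter returns an un-blended texel value:

vec2 texel  = 1.0 / resolution;                       // size of one texel in [0,1] units
vec2 centre = (floor(gl_FragCoord.xy) + 0.5) * texel; // centre of the current texel

vec4 mCol = texture2D(texture, centre);                        // current pixel
vec4 p1   = texture2D(texture, centre + vec2( texel.x, 0.0));  // e.g. "angle 0" neighbours
vec4 p2   = texture2D(texture, centre + vec2(-texel.x, 0.0));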

I think I’m going to have to do some more reading, because I don’t know much about the pipeline or how it can be useful. Initially I thought this would be a lot simpler, but your suggestions keep pointing to more questions.

The shader might not need #ifdef GL_ES, as it seems to work fine without it; I’ll have to check the Processing reference to make sure this is the case.

Thanks anyways.
