Weird alpha values when doing texture lookup

I am stuck here. I am doing line drawings with GLSL shaders. The algorithm finds valleys in intensity in an image and sets the alpha value of gl_FragColor according to the steepness of the valley. When I render directly to the screen, everything is fine. However, when I try to run a median filter over the result of the previous step rendered to a texture, the alpha values are not as expected. As you can see in the images below, lines that are supposed to be completely opaque become somewhat transparent.


This becomes even more obvious when I’m using smoothstep interpolation. Lines that are supposed to be opaque become completely transparent.


I tested on both Linux and Windows, on NVIDIA and ATI cards, and the result is always the same. It also happens regardless of whether I use framebuffer objects or read back the framebuffer with glCopyTexImage2D. When I map the texture onto a full-screen quad, it looks correct.
I commented out everything in the median filter shader except for this line:

gl_FragColor= vec4(vec3(texture2D(contours, (gl_FragCoord.xy)/textureSize).a), 1.0);

The relevant parts of the original code can be found here:
alpha bug
Any help would be greatly appreciated.

Does no one really have an idea? If something is unclear, just ask.

However, when I try to run a median filter over the result of the previous step rendered to a texture, the alpha values are not as expected. As you can see in the images below, lines that are supposed to be completely opaque become somewhat transparent.

Well, sorry, but I don’t see anything in particular, just big pretty pictures.
Can you show smaller, more artificial images highlighting what you get and what you expected instead?
Try working on color instead of alpha to make the process easier to debug visually.

Well then, perhaps I didn’t explain well what you see in the images. Image #1 shows what I get when rendering directly to the screen. This is what it is supposed to look like. Image #2 shows what I get when passing the same image to a shader in which all I do is read the alpha value from the image and write it to gl_FragColor with this line of code:

 gl_FragColor= vec4(vec3(texture2D(contours, (gl_FragCoord.xy)/textureSize).a), 1.0);

As you can see, the lines that are supposed to be black/opaque become brighter/semi-transparent.
Images #3 and #4 show a more extreme example. Again, image #3 is what it’s supposed to look like, but in image #4 the black/opaque lines become white/fully transparent.

Perhaps you could discuss the technique you’re using and the results you expect to see. Some background reference material might be helpful too, something that demonstrates what you’re after (unless what you’re doing is novel in some way).

So if I understand you correctly, the line information is stored in the texture’s alpha component? Wherever alpha is zero you are on a line, and elsewhere it is white?
Is it possible to have screenshots with a background color other than white? White on white… it’s very hard to see anything.

Are texture and screen size the same?

Thank you for your answers and your time.
The technique I’m implementing is called Suggestive Contours. The paper can be found here: Suggestive Contours for Conveying Shape
I’m implementing the image-space version. It works by encoding the dot product of the view vector and the surface normal in an image and then detecting intensity valleys in that image. I set the alpha value according to the difference between the brightest pixel in a neighborhood and the center pixel, and according to how many pixels in the neighborhood are strictly darker than the center pixel. Where there is a steep valley, alpha is set to 1.0; where there is no line at all, it is set to 0.0.
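The alpha rule just described can be mirrored off the GPU for a quick sanity check. Below is a hedged Java sketch (the class and method names are mine, not part of the project); it applies exactly the rule above to a center value and eight hypothetical neighbor values:

```java
// CPU sketch of the valley-steepness rule described above:
// alpha = (1 - darker/9) * (max - center)
public class ValleyAlpha {

    /**
     * center:    intensity of the center pixel
     * neighbors: intensities of its 8-neighborhood
     */
    public static float alpha(float center, float[] neighbors) {
        float max = 0.0f;     // brightest neighbor seen so far
        float darker = 0.0f;  // neighbors strictly darker than the center
        for (float p : neighbors) {
            if (p < center) darker++;
            if (p > max) max = p;
        }
        // deep, narrow valley -> alpha near 1.0; flat region -> alpha near 0.0
        return (1.0f - darker / 9.0f) * (max - center);
    }
}
```

A center of 0.0 surrounded by eight 1.0 neighbors gives alpha 1.0 (a fully opaque line); a flat region gives 0.0.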

I made simplified versions of the code I’m using:

uniform sampler2D suggestiveContourTexture;
uniform float textureSize;
uniform bool useSmoothstep;
uniform float smoothstepStart, smoothstepEnd;
uniform vec4 color;
           
void main(void) { 
 
  float pixel, center;
  float depth;
  float max= 0.0;       // brightest neighbor in the 3x3 window (shadows the built-in max())
  float darker= 0.0;    // neighbors strictly darker than the center
  float numPixels= 9.0; 
  int i;

  vec2 offsets[8];
  
  offsets[0]= vec2(-1.0, -1.0);
  offsets[1]= vec2(-1.0,  0.0);
  offsets[2]= vec2(-1.0,  1.0);
  offsets[3]= vec2( 0.0, -1.0);
  offsets[4]= vec2( 0.0,  1.0);
  offsets[5]= vec2( 1.0, -1.0);
  offsets[6]= vec2( 1.0,  0.0);
  offsets[7]= vec2( 1.0,  1.0);	
  	      
  // intensity of the center pixel in the shaded image
  center = texture2D(suggestiveContourTexture, gl_FragCoord.xy/textureSize).x;

  // scan the 8-neighborhood: find the brightest pixel and count the darker ones
  for(i=0; i<8; i++){
    pixel = texture2D(suggestiveContourTexture, (gl_FragCoord.xy+offsets[i])/textureSize).x;
    if(pixel<center) darker++;
    if(pixel>max) max= pixel;
  }  

  // steep, narrow valley -> alpha near 1.0; flat region -> alpha near 0.0
  float alpha= (1.0-(darker/numPixels))*(max-center);  

  if(useSmoothstep) alpha= smoothstep(smoothstepStart, smoothstepEnd, alpha);
  gl_FragColor= vec4(color.xyz, alpha*color.a);	
}

This part works fine. The problem appears when I try to run a median filter over the result, as suggested in the paper.


uniform sampler2D contours;
uniform float textureSize;
uniform float threshold;
uniform vec4 color;

 
#define s2(a, b)		temp = a; a = min(a, b); b = max(temp, b);
#define mn3(a, b, c) 		s2(a, b); s2(a, c);
#define mx3(a, b, c)		s2(b, c); s2(a, c);

#define mnmx3(a, b, c)		mx3(a, b, c); s2(a, b);                                   // 3 exchanges
#define mnmx4(a, b, c, d)	s2(a, b); s2(c, d); s2(a, c); s2(b, d);                   // 4 exchanges
#define mnmx5(a, b, c, d, e)	s2(a, b); s2(c, d); mn3(a, c, e); mx3(b, d, e);           // 6 exchanges
#define mnmx6(a, b, c, d, e, f) s2(a, d); s2(b, e); s2(c, f); mn3(a, b, c); mx3(d, e, f); // 7 exchanges


          
void main(void) { 
/*
  float depth;
  float temp, pixel;
  float minDepth= 1.0;

  float pixels[9];
  int i;
  
  vec2 offsets[9];
  
  offsets[0]=   vec2(-1.0, -1.0);
  offsets[1]=   vec2(-1.0,  0.0);
  offsets[2]=   vec2(-1.0,  1.0);
  offsets[3]=   vec2( 0.0, -1.0);
  offsets[4]=   vec2( 0.0,  0.0);	
  offsets[5]=   vec2( 0.0,  1.0);
  offsets[6]=   vec2( 1.0, -1.0);
  offsets[7]=   vec2( 1.0,  0.0);
  offsets[8]=   vec2( 1.0,  1.0);	
  

  vec2 offset;

  // find minimum and maximum
  for(i=0; i<9; i++){
    pixels[i]= 1.0-texture2D(contours, (gl_FragCoord.xy+offsets[i])/textureSize).a;  
  }  

// Starting with a subset of size 6, remove the min and max each time
  mnmx6(pixels[0], pixels[1], pixels[2], pixels[3], pixels[4], pixels[5]);
  mnmx5(pixels[1], pixels[2], pixels[3], pixels[4], pixels[6]);
  mnmx4(pixels[2], pixels[3], pixels[4], pixels[7]);
  mnmx3(pixels[3], pixels[4], pixels[8]);


  gl_FragColor= vec4(color.xyz, pixels[4]);
*/	
  gl_FragColor= vec4(vec3(texture2D(contours, (gl_FragCoord.xy)/textureSize).a), 1.0);
}

The median filter itself also works fine. The problem appears when I try to read back the suggestive contours’ alpha value from the texture in the fragment shader. I have commented out all lines except the readback to demonstrate this.
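As an aside, the exchange network in the (commented-out) median code can be verified on the CPU. Here is a hedged Java mirror of the same min/max exchanges (array indices stand in for the macro arguments; the class and method names are mine):

```java
// CPU mirror of the 3x3 median selection network used in the shader.
// Each mnmxN step drops the running min and max from the candidate set,
// so after mnmx3 the median of all nine inputs sits in p[4].
public class Median9 {

    // s2 macro: order a pair so that p[a] <= p[b]
    static void s2(float[] p, int a, int b) {
        float t = p[a];
        p[a] = Math.min(p[a], p[b]);
        p[b] = Math.max(t, p[b]);
    }

    static void mn3(float[] p, int a, int b, int c) { s2(p, a, b); s2(p, a, c); }
    static void mx3(float[] p, int a, int b, int c) { s2(p, b, c); s2(p, a, c); }

    static void mnmx3(float[] p, int a, int b, int c) { mx3(p, a, b, c); s2(p, a, b); }
    static void mnmx4(float[] p, int a, int b, int c, int d) { s2(p, a, b); s2(p, c, d); s2(p, a, c); s2(p, b, d); }
    static void mnmx5(float[] p, int a, int b, int c, int d, int e) { s2(p, a, b); s2(p, c, d); mn3(p, a, c, e); mx3(p, b, d, e); }
    static void mnmx6(float[] p, int a, int b, int c, int d, int e, int f) { s2(p, a, d); s2(p, b, e); s2(p, c, f); mn3(p, a, b, c); mx3(p, d, e, f); }

    /** Returns the median of nine values, using the same exchange order as the shader. */
    public static float median9(float[] p) {
        mnmx6(p, 0, 1, 2, 3, 4, 5);
        mnmx5(p, 1, 2, 3, 4, 6);
        mnmx4(p, 2, 3, 4, 7);
        mnmx3(p, 3, 4, 8);
        return p[4];
    }
}
```

Since the network only moves values with min/max, the result is exact, which makes it easy to check against a sorted copy of the window.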

If it helps, here is my Render to Texture class (it’s written in Java):


package npr;

import processing.core.*;
import javax.media.opengl.*;
import com.sun.opengl.util.*;

public class RenderToTexture{

    private int colorTexture, depthTexture, previousFramebuffer;

    private int frameBuffer;

    private int width= 0;
    private int height= 0;

    private int wrap_s = GL.GL_REPEAT;    
    private int wrap_t = GL.GL_REPEAT;
    private int mag_filter = GL.GL_LINEAR;
    private int min_filter = GL.GL_LINEAR;

    // is set by the renderer calling setParent in its constructor
    private NPR renderer;
    // is set by the renderer calling setParent in its constructor
    private static PApplet parent;   

    private GL gl;

    private boolean fboSupported= false;

    private int[] ids = {0};

    /**
     *  creates a frame buffer object that is bound to a texture of a power of 2 size fitting the screen
    */
    public RenderToTexture(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	//Get all supported extensions
	String extensions = gl.glGetString(GL.GL_EXTENSIONS);     
	fboSupported= (extensions.indexOf("GL_EXT_framebuffer_object") != -1);
	
	width= height= calculateSize();
	
	if(fboSupported){
	    gl.glGenFramebuffersEXT(1, ids, 0);
	    frameBuffer = ids[0];
	}

	attachColorBuffer();
	attachDepthBuffer();
	
	if(fboSupported){
	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, frameBuffer);
	    checkStatus();
	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
	} 
	
	if(!fboSupported){
	    gl.glGenTextures(1, ids, 0);
	    previousFramebuffer = ids[0];
	    System.out.println("Framebuffer Objects not supported");
	}	
    }


    /**
     *  creates a frame buffer object that is bound to a texture of the specified size and format
     * @param width should be power of 2 for performance reasons
     * @param height should be power of 2 for performance reasons  
    */
    public RenderToTexture(int width, int height){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	//Get all supported extensions
	String extensions = gl.glGetString(GL.GL_EXTENSIONS);     
	fboSupported= (extensions.indexOf("GL_EXT_framebuffer_object") != -1);
	
	if(!fboSupported && (width>parent.width || height>parent.height)) {
	    this.width= this.height= calculateSize();
	}
	else{
	    this.width =width;
	    this.height =height;
	}

	if(fboSupported){
	    gl.glGenFramebuffersEXT(1, ids, 0);
	    frameBuffer = ids[0];
	}

	attachColorBuffer();
	attachDepthBuffer();
	
	if(fboSupported){
	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, frameBuffer);
	    checkStatus();
	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
	} 
	
	if(!fboSupported){
	    gl.glGenTextures(1, ids, 0);
	    previousFramebuffer = ids[0];
	    System.out.println("Framebuffer Objects not supported");
	}
    }


    public boolean fboSupported(){

	return fboSupported;
    }


    public int getWidth(){

	return width;
    }


    public int getHeight(){

	return height;
    }

    public int getWrapS(){

	return wrap_s;
    }


    public int getWrapT(){

	return wrap_t;
    }


    public int getMinFilter(){

	return min_filter;
    }


    public int getMagFilter(){

	return mag_filter;
    }

    public void setWrapS(int wrap_s){

        this.wrap_s= wrap_s;
	bindColorTexture();
	gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, wrap_s);
	bindDepthTexture();
	gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, wrap_s);
	unbind();
    }


    public void setWrapT(int wrap_t){

	 this.wrap_t= wrap_t;
	 bindColorTexture();
	 gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, wrap_t);
	 bindDepthTexture();
	 gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, wrap_t);
	 unbind();
    }


    public void setMinFilter(int min_filter){

	this.min_filter= min_filter;
	bindColorTexture();
	gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, min_filter);
	bindDepthTexture();
	gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, min_filter);
	unbind();
    }


    public void setMagFilter(int mag_filter){

	this.mag_filter= mag_filter;
	bindColorTexture();
	gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, mag_filter);
	bindDepthTexture();
	gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, mag_filter);
	unbind();
    }
 
    protected static void setParent(PApplet applet){
	parent= applet;
    }


    private int calculateSize(){
	int size= 1;
        while(size*2<=parent.width && size*2<=parent.height){
            size*=2;
        }
	return size;
    }


    private void attachColorBuffer(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	gl.glGenTextures( 1, ids, 0 );
	colorTexture = ids[0];

	gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);
	gl.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);
	
	if(fboSupported){

	    gl.glBindTexture(GL.GL_TEXTURE_2D, colorTexture);

	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, wrap_s);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, wrap_t);
	
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, min_filter);	
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, mag_filter);

	    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, null);

	    gl.glBindTexture(GL.GL_TEXTURE_2D, 0);

	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, frameBuffer);
	    
	    gl.glFramebufferTexture2DEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_COLOR_ATTACHMENT0_EXT, GL.GL_TEXTURE_2D, colorTexture, 0);

	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);	
	}
    }

    private void attachDepthBuffer(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	gl.glGenTextures(1, ids, 0);
	depthTexture = ids[0];

	gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);
	gl.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);

	if(fboSupported){
	    gl.glBindTexture(GL.GL_TEXTURE_2D, depthTexture);

	    gl.glTexParameteri( GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, wrap_s );
	    gl.glTexParameteri( GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, wrap_t );
	    gl.glTexParameteri( GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, mag_filter );
	    gl.glTexParameteri( GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, min_filter );
	    
	    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_DEPTH_COMPONENT16, width, height, 0, GL.GL_DEPTH_COMPONENT, GL.GL_UNSIGNED_SHORT, null);
	    
	    gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
	    
	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, frameBuffer);
       
	    //attach depth texture to framebuffer
	    gl.glFramebufferTexture2DEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_DEPTH_ATTACHMENT_EXT, GL.GL_TEXTURE_2D, depthTexture, 0);

	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
	}
    }

    /**
     * all rendering operations are now bound to the FrameBuffer
     */
    public void begin(){
	
	renderer= (NPR)parent.g;
	gl= renderer.gl;

	if(fboSupported){
	    gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, frameBuffer);
	}
	else{
	    gl.glEnable(GL.GL_TEXTURE_2D);
	    gl.glBindTexture(GL.GL_TEXTURE_2D, previousFramebuffer);
	    gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB, 0, 0, parent.width, parent.height, 0);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
	    unbind();
	}

	gl.glEnable (GL.GL_DEPTH_TEST);
	gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
	gl.glDepthMask(true);
	gl.glDepthFunc(GL.GL_LEQUAL);

	gl.glPushAttrib(GL.GL_VIEWPORT_BIT);
	gl.glViewport(0,0,width, height);
    }

    /**
     * all rendering operations are now returned to the previous buffer (probably GL_BACK)
    */
    public void end(){
	
	renderer= (NPR)parent.g;
	gl= renderer.gl;
	
	if(fboSupported){
	    gl.glBindFramebufferEXT( GL.GL_FRAMEBUFFER_EXT, 0 );
	}
	else{

	    bindColorTexture();
	    gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, 0, 0, width, height, 0);
	    
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, wrap_s);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, wrap_t);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, mag_filter);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, min_filter);	  
	    
	    bindDepthTexture();
	    gl.glCopyTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_DEPTH_COMPONENT16, 0, 0, width, height, 0);
	    
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, wrap_s);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, wrap_t);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, mag_filter);
	    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, min_filter);

	    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);

	    gl.glBindTexture(GL.GL_TEXTURE_2D, previousFramebuffer);
	    gl.glColor3f(1.0f, 1.0f, 1.0f);
	    renderer.fullScreenQuad(1.0f);
	    unbind();	
	}
	gl.glPopAttrib();
    }


    /**
     * applies the color texture to GL_TEXTURE_2D operations
     */
    public void bindColorTexture(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	gl.glEnable(GL.GL_TEXTURE_2D);
	gl.glBindTexture(GL.GL_TEXTURE_2D, colorTexture);
    }

    /**
     * applies the depth texture to GL_TEXTURE_2D operations
     */
    public void bindDepthTexture(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;
	    
	gl.glEnable(GL.GL_TEXTURE_2D);
	gl.glBindTexture(GL.GL_TEXTURE_2D, depthTexture);
    }

    /**
     * disables the texture
     */
    public void unbind(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
	gl.glDisable(GL.GL_TEXTURE_2D);
    }

    public void checkStatus(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	int status= gl.glCheckFramebufferStatusEXT(GL.GL_FRAMEBUFFER_EXT);
	switch(status) {
        case GL.GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_EXT:
            System.out.println("FrameBufferObject incomplete, incomplete attachment");
            break;
        case GL.GL_FRAMEBUFFER_UNSUPPORTED_EXT:
            System.out.println("Unsupported FrameBufferObject format");
	    break;
        case GL.GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT_EXT:
            System.out.println("FrameBufferObject incomplete, missing attachment");
            break;
        case GL.GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS_EXT:
            System.out.println("FrameBufferObject incomplete, attached images must have same dimensions");
	    break;
        case GL.GL_FRAMEBUFFER_INCOMPLETE_FORMATS_EXT:
	    System.out.println("FrameBufferObject incomplete, attached images must have same format");
          break;
        case GL.GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER_EXT:
            System.out.println("FrameBufferObject incomplete, missing draw buffer");
	    break;
        case GL.GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT:
            System.out.println("FrameBufferObject incomplete, missing read buffer");
	    break;
	}
    }
} 

And here is the SuggestiveContours class, where the texture setup is done:


package npr;

import processing.core.*;

import javax.media.opengl.*;
import com.sun.opengl.util.*;

import java.nio.ShortBuffer;


public class SuggestiveContours extends Contours{


    GLSL suggestiveContourShader, findSuggestiveContours3x3, median3x3;

    RenderToTexture suggestiveContourTexture, foundSuggestiveContours, foundMedian;

    float smoothstepStart= 0.0f, smoothstepEnd= 1.0f;

    boolean useMedian= false, useSmoothstep= false;

    public SuggestiveContours(){

	suggestiveContourTexture= new RenderToTexture();
	foundSuggestiveContours= new RenderToTexture();
	foundMedian= new RenderToTexture();

	suggestiveContourShader=new GLSL();
        suggestiveContourShader.loadVertexShader("suggestiveContourShader.vert");
        suggestiveContourShader.loadFragmentShader("suggestiveContourShader.frag");
	suggestiveContourShader.useShaders();	

	
	findSuggestiveContours3x3=new GLSL();
        findSuggestiveContours3x3.loadVertexShader("findSuggestiveContours.vert");
        findSuggestiveContours3x3.loadFragmentShader("findSuggestiveContours3x3.frag");
	findSuggestiveContours3x3.useShaders();
	
	median3x3=new GLSL();
        median3x3.loadVertexShader("median.vert");
        median3x3.loadFragmentShader("median3x3.frag");
        median3x3.useShaders();
    }

    public void useSmoothstep(boolean useSmoothstep){

	this.useSmoothstep= useSmoothstep;
    }

    public void setSmoothstepStart(float start){

	this.smoothstepStart= start;
    }
 
    public void setSmoothstepEnd(float end){

	this.smoothstepEnd= end;
    }
    

    public void useMedian(boolean useMedian){
	this.useMedian= useMedian;
    }
   

    public void preProcess(){

	
	renderer= (NPR)parent.g;
	gl= renderer.gl;
	
	// render scene with diffuse light at camera position

	gl.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
	suggestiveContourTexture.begin();
	    
	suggestiveContourShader.startShader();

	int location= suggestiveContourShader.getUniformLocation("cameraPos");
	suggestiveContourShader.setFloatVec3(location, renderer.eyeX, renderer.eyeY, renderer.eyeZ);

	render();

	suggestiveContourShader.endShader();
	    
	suggestiveContourTexture.end();
    	
	if(useMedian){ 
	    foundSuggestiveContours.begin();
	    findSuggestiveContours();
	    foundSuggestiveContours.end();	    
	}   
    }


    public void postProcess(){

	renderer= (NPR)parent.g;
	gl= renderer.gl;

	if(useMedian){
	    findMedian();
	}
	else{
	    findSuggestiveContours();
	}
    }


    protected void findSuggestiveContours(){
    
	renderer= (NPR)parent.g;
	gl= renderer.gl;

	int location;

	findSuggestiveContours3x3.startShader();

	suggestiveContourTexture.bindColorTexture();

	location= findSuggestiveContours3x3.getUniformLocation("smoothstepStart");
	findSuggestiveContours3x3.setFloat(location, smoothstepStart);
	     
	location= findSuggestiveContours3x3.getUniformLocation("useSmoothstep");
	findSuggestiveContours3x3.setBoolean(location, useSmoothstep);
	
	location= findSuggestiveContours3x3.getUniformLocation("smoothstepEnd");
	findSuggestiveContours3x3.setFloat(location, smoothstepEnd);
	        
	location= findSuggestiveContours3x3.getUniformLocation("textureSize");
	findSuggestiveContours3x3.setFloat(location, (float)suggestiveContourTexture.getWidth());

	location= findSuggestiveContours3x3.getUniformLocation("color");
	findSuggestiveContours3x3.setFloatVec4(location, red, green, blue, alpha);
	
	render();
       	
	findSuggestiveContours3x3.endShader();

	suggestiveContourTexture.unbind();		
    }


    protected void findMedian(){
    
	renderer= (NPR)parent.g;
	gl= renderer.gl; 

	int location;

	median3x3.startShader();
			    
	foundSuggestiveContours.bindColorTexture();

	location= median3x3.getUniformLocation("textureSize");
	median3x3.setFloat(location, (float)foundSuggestiveContours.getWidth());

	location= median3x3.getUniformLocation("color");
	median3x3.setFloatVec4(location, red, green, blue, alpha);
	
	render();
	
	median3x3.endShader();

	suggestiveContourTexture.unbind();
    }
}

I’m sorry that it’s only black and white, but I’m displaying alpha values here, and alpha values have no colour.

Any help will be greatly appreciated!

If you need more code, e.g. my GLSL class, tell me and I will post it.

And yes, screen size and texture size are identical.

Uhmm… what about AA and AF filtering? Check the driver control panel. Try setting nearest filtering on the textures. Maybe for some reason the fetched values are being filtered, so you are losing alpha channel precision.

I set filtering to GL_NEAREST, but the problem still remains.

I set filtering to GL_NEAREST, but it didn’t help.
Still no ideas?