Color distortion rendering at certain positions

I guess a picture says it all… Basically, I’m working on a GUI for an application that acts pretty much like a standard GNOME 2/Windows desktop. Currently there’s no text or anything, but that’s not an issue just yet. As you can see, the exit buttons all fail miserably to render (they are supposed to have Xs in them), and when I move the windows to the top of the screen, they too begin to distort badly (I would bet they distort at the bottom too, it’s just not as pronounced). At first I thought the issue was that I was rendering things without any Z positioning, so OpenGL was having trouble with two things that occupy the same space, but when I removed the background I got the same results, so apparently that wasn’t the problem. So, is this a common problem, and does anyone know what could cause it?

Here’s how I render the objects (I’m using python by the way):

        MainVertexData = numpy.array([self.x,self.y,
                                      self.x+self.w,self.y,
                                      self.x,self.y+self.h,
                                      self.x+self.w,self.y+self.h],numpy.float32)
        FullWindowVertices = numpy.array([0,1,2,3],numpy.ushort)
        self.MainVertexData = MakeBuffer(GL_ARRAY_BUFFER,MainVertexData,len(MainVertexData)*4)
        self.FullWindowVertices = MakeBuffer(GL_ELEMENT_ARRAY_BUFFER,FullWindowVertices,len(FullWindowVertices)*2)
        self.BaseProgram = compileProgram(compileShader(ReadFile("Shaders/Mainv.glsl"),GL_VERTEX_SHADER),
                                          compileShader(ReadFile("Shaders/Mainf.glsl"),GL_FRAGMENT_SHADER))  # fragment shader filename assumed
        self.Tex = TexFromPNG("Themes/theme/texture.png")

That is run during the instantiation of each object and never run again. Moving objects update their “MainVertexData” (after deleting the old one), but that’s it. It uses these helper functions:

def TexFromPNG(filename):
    img = Image.open(filename)  # .jpg, .bmp, etc. also work
    img = ImageOps.flip(img)
    img_data = numpy.array(list(img.getdata()), 'B')
    texture = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, texture)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.size[0], img.size[1], 0, GL_RGBA, GL_UNSIGNED_BYTE, img_data)
    return texture
def MakeBuffer(target, data, size):
    TempBuffer = glGenBuffers(1)
    glBindBuffer(target, TempBuffer)
    glBufferData(target, size, data, GL_STATIC_DRAW)
    return TempBuffer
def ReadFile(filename):
    shaderfile = open(filename,'r')
    source = shaderfile.read()
    shaderfile.close()
    return source
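One small aside on MakeBuffer: the byte sizes don’t have to be computed by hand (len(...)*4 for float32, len(...)*2 for ushort); a numpy array already knows its own size in bytes, which avoids bugs if the element type ever changes. A quick sketch with static values:

```python
import numpy

# the same shapes of array the constructor builds, with static values
MainVertexData = numpy.array([-1, 0, 0, 0, -1, 1, 0, 1], numpy.float32)
FullWindowVertices = numpy.array([0, 1, 2, 3], numpy.ushort)

# nbytes already accounts for the element size, so it matches the
# hand-computed len(...)*4 and len(...)*2 values
print(MainVertexData.nbytes)      # 8 float32 values * 4 bytes = 32
print(FullWindowVertices.nbytes)  # 4 ushort values * 2 bytes = 8
```

So `MakeBuffer(GL_ARRAY_BUFFER, MainVertexData, MainVertexData.nbytes)` would do the same thing without the magic numbers.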

The actual rendering code is as follows:

        glViewport(0, 0, self.width, self.height)
        glEnable(GL_TEXTURE_2D)  # These lines only get done at the beginning of rendering
        # for each object, do this:
        pos = glGetAttribLocation(self.BaseProgram, "position")
        glBindTexture(GL_TEXTURE_2D, self.Tex)
        glUniform1i(glGetUniformLocation(self.BaseProgram, "texture"), 0)

        glDisable(GL_TEXTURE_2D)  # Do this once after all the objects have been drawn

So yeah, if you need anything else (shader codes etc) let me know. Thanks a lot.

(P.S. I’m almost certain there’s a better way to go about rendering than that, so any guidance in that regards would be much appreciated as well)

It looks like a texcoord problem, or a mipmap selection problem.
Do you think the texture is zoomed in a lot, or just blurred?
By the way, I do not see how/where you define the texture coordinates for each vertex?
Seeing the shader that accesses the texture would help too.

? There’s a reason I posted in the beginner’s section; I don’t understand that…

I don’t know, it doesn’t really look like either, since those colors are not in the texture; the texture is just a gray-to-white gradient.

Is that different from “MainVertexData” and “FullWindowVertices”? (Admittedly, the latter is a terrible name; it holds the indices into “MainVertexData”, and the name is inherited from when I was just trying to render the BG image.) If so, explain if you would.

This is the fragment shader:

#version 110

uniform sampler2D texture;

varying vec2 texcoord;

void main()
{
    gl_FragColor = texture2D(texture, texcoord);
}

And this is the vertex shader:

#version 110

attribute vec2 position;

varying vec2 texcoord;

void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    texcoord = position * vec2(0.5) + vec2(0.5);
}

? There’s a reason I posted in the beginner’s section, I don’t understand that…
A forum is useful when you don’t yet know what/where to search for your problem.
Once you get some obscure keywords, it is very easy to search the web for something like “opengl mipmap” and find useful answers:

If the colors do not look at all like your texture, then your image loader has a problem, or it does not send the correct data to OpenGL in the glTexImage2D call.

Sorry, I find your program very hard to read; not being fluent in Python certainly does not help.
Can’t you do a simpler program with explicit values for position and texcoord?
You should also know that texture coordinates should be within 0 and 1 to include a complete image in a quad.

Hrm, I realize now that came off as kind of rude. I apologize for that, as that wasn’t my intention at all. Thank you for the link, I’ll read up on that.

I don’t think I was completely clear on what’s going on. All the windows you see in the picture are the same object, using the same texture. I was trying to demonstrate that the texture looks correct when the windows are on the lower half of the screen, but the texture actually changes when I drag the window to the top of the screen. A video would explain it better I guess… I can’t see how that would have to do with the texture loading, since the texture loading is done exactly once, and not while I’m moving the windows.

If you mean by doing it the “old opengl way”, I’d rather not. I’m trying to avoid using deprecated functions. If you just mean replacing my buffers that use variable positions with static positions, yeah, that I can do.

Indeed, the whole self.x+self.w,self.y+self.h etc. is not really explicit :)

Ok. Well, the background image uses this for getting its coordinates:

MainVertexData = numpy.array([-1,-1,1,-1,-1,1,1,1],numpy.float32)

Which means that it stretches across the whole screen. That shows up fine, however; in fact, it appears to be the only object that renders correctly everywhere on the screen. Then the background image parents a taskbar, which parents a menu, et cetera. Every object’s position is defined within an XML file in relation to its parent, e.g. a window has a window border, which sets its position relative to the window, where the left edge of the window is -1 and the right edge is +1. self.x is the object’s horizontal position in relation to its parent (-1,-1 being the bottom left, 1,1 being the top right), self.y is the object’s vertical position, self.w is the width of the object, and self.h is the height of the object.
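To sketch that relative-positioning scheme in plain Python (child_to_screen is a hypothetical name; the actual XML-driven layout code isn’t shown here): a child coordinate in a parent’s [-1, 1] space maps into the parent’s own screen-space rectangle.

```python
def child_to_screen(cx, cy, parent_left, parent_bottom, parent_w, parent_h):
    """Map (cx, cy), given in a parent's [-1, 1] space, into the
    parent's own screen-space rectangle (left, bottom, width, height)."""
    sx = parent_left + (cx + 1.0) * 0.5 * parent_w
    sy = parent_bottom + (cy + 1.0) * 0.5 * parent_h
    return sx, sy

# the background fills the whole screen, so its children pass through:
print(child_to_screen(-1, -1, -1, -1, 2, 2))  # (-1.0, -1.0)
print(child_to_screen(1, 1, -1, -1, 2, 2))    # (1.0, 1.0)
print(child_to_screen(0, 0, -1, -1, 2, 2))    # (0.0, 0.0)
```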

Regardless, if it helps, you can think of the window rendering for “MainVertexData” as:

MainVertexData = numpy.array([-1,0,0,0,-1,1,0,1],numpy.float32)

That gives the same basic results with static values.
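Reading those eight floats in (x, y) pairs, in the element order 0,1,2,3 given by FullWindowVertices (a triangle strip), a small sketch:

```python
MainVertexData = [-1, 0, 0, 0, -1, 1, 0, 1]
corners = list(zip(MainVertexData[0::2], MainVertexData[1::2]))
print(corners)  # [(-1, 0), (0, 0), (-1, 1), (0, 1)]
# i.e. bottom-left, bottom-right, top-left, top-right of a quad
# covering the upper-left quadrant of the screen
```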

If there’s anything else that needs clarifying, let me know.

Also, something else came to mind: the pictures I am using for the textures are very small (10 pixels wide and 24 pixels tall) and stretched quite wide (about 1/4 of the way across my 1920x1080 monitor). I was reading about mipmaps as you suggested, and it seems like that might cause a problem. Is that a problem?

Your texture coordinates are being made based on position, not on the actual widget. If you moved your objects around the screen, it would probably look like a window into your texture.

[Edit: the vertex shader is making the texture coordinates based on position in space, not on position in each widget]

I think that’s part of your problem.
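To make that concrete, here is the remap the shader line `texcoord = position * vec2(0.5) + vec2(0.5)` performs, sketched per component in Python (not part of the program, just the arithmetic). Wherever a vertex lands in clip space ([-1, 1]) decides which part of the texture ([0, 1]) gets sampled, regardless of which widget it belongs to.

```python
def to_texcoord(p):
    # one component of: texcoord = position * vec2(0.5) + vec2(0.5)
    return p * 0.5 + 0.5

print(to_texcoord(-1.0))  # 0.0: screen left/bottom samples the texture edge
print(to_texcoord(0.0))   # 0.5: screen centre samples the texture middle
print(to_texcoord(1.0))   # 1.0: screen right/top samples the other edge
```

So a widget drawn in the top half of the screen only ever samples the top half of the texture, which is exactly the “window into your texture” effect.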


That’s spot on. Tested that and it worked just as you said. So yeah, I guess I’m going to have to pass position and size into the vertex shader somehow or another. Thanks a lot, I think I can figure it out from here, and I’ll post what I get when I’m done.

Edit: Alright, here’s my solution, I changed my vertex shader to:

#version 110

attribute vec4 position;

attribute vec2 relativeposition;

varying vec2 texcoord;

void main()
{
    gl_Position = position;
    texcoord = relativeposition;
}

Where relativeposition is always the array [0,0,1,0,0,1,1,1], as defined by this:

        RelativeVertexData = numpy.array([0,0,1,0,0,1,1,1],numpy.float32)
        self.RelativeVertexData = MakeBuffer(GL_ARRAY_BUFFER,RelativeVertexData,len(RelativeVertexData)*4)
        #compile the program in here somewhere
        rpos = glGetAttribLocation(self.BaseProgram, "relativeposition")

Which is… better… It’s still grabbing garbage values for some parts of the textures (usually the top), but at least it grabs the same garbage values regardless of the widget’s position on screen. Does anything in there immediately pop out as incorrect to anyone?

Edit2: Never mind, that algorithm works great; it turns out that some of my PNGs were RGB while others were RGBA. No idea why, but I just had to edit my texture loading routine to take the image mode into account. This is now solved, thanks for all your help.
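For anyone who hits the same thing: the fix amounts to normalizing the pixel data to four channels before the glTexImage2D call, so the upload format can always be GL_RGBA. A minimal sketch of that normalization (to_rgba is my name for it, not a PIL function):

```python
def to_rgba(pixels, mode):
    """Flatten per-pixel tuples to raw RGBA bytes so glTexImage2D can
    always be called with GL_RGBA; RGB pixels get an opaque alpha."""
    if mode == 'RGBA':
        return [c for px in pixels for c in px]
    if mode == 'RGB':
        return [c for px in pixels for c in px + (255,)]
    raise ValueError("unhandled image mode: " + mode)

print(to_rgba([(10, 20, 30)], 'RGB'))       # [10, 20, 30, 255]
print(to_rgba([(10, 20, 30, 40)], 'RGBA'))  # [10, 20, 30, 40]
```

In practice, calling PIL’s `img.convert('RGBA')` right after loading does the same normalization before `img.getdata()` is ever read.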