Why can't I have an FBO and another texture in my shader

So I got a framebuffer object working, but it interferes with a texture that I am trying to send to the shader.

this is my source code:

screenShader.setUniformSampler("overTexture", 1);
screenShader.setUniform("mixrate", 0.75f);
glBindTexture(GL_TEXTURE_2D, textureColorbuffer);
glDrawArrays(GL_TRIANGLES, 0, 6);

and when I sample from overTexture I get the frame buffer. When I sample from screenTexture I get all black.

You of course can. You’re doing something wrong.

Show the raw GL calls you’re making, not your wrappers.

Also, show the code you’re using for creating your FBO and textures inline with the draw code above.

Also, show your shader code. At least the sampler declaration.

While conventional (non-DSA) FBO creation with a texture render target binds a texture, this has nothing to do with the texture(s) you choose to have bound to texture units when you issue your draw call(s).

This is my fragment shader

#version 400 core
out vec4 FragColor;
in vec2 TexCoords;

uniform sampler2D screenTexture;
uniform sampler2D overTexture;

uniform float mixrate;

void main()
{
    vec3 samcol = texture(screenTexture, TexCoords).xyz;
    vec3 ovrcol = texture(overTexture, TexCoords).xyz;
    vec3 rescol = samcol;
    FragColor = vec4(rescol, 1);
}

I don’t think the vertex shader matters because I am just applying a texture to the screen.
This is something I run beforehand:

glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to default
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);


This is before the draw:

glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // we're not using the stencil buffer now

This is how I set up the framebuffer:

GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

GLuint textureColorbuffer;
glGenTextures(1, &textureColorbuffer);
glBindTexture(GL_TEXTURE_2D, textureColorbuffer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);

// attach it to currently bound framebuffer object
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureColorbuffer, 0);

GLuint rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 1024, 768);
glBindRenderbuffer(GL_RENDERBUFFER, 0);


if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
	std::cout << "ERROR::FRAMEBUFFER:: Framebuffer is not complete!" << std::endl;
	return 1;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);

This is closer, but we still don’t have all of the essential GL calls.


uniform sampler2D screenTexture;
uniform sampler2D overTexture;
    vec3 samcol = texture(screenTexture, TexCoords).xyz;
    vec3 ovrcol = texture(overTexture, TexCoords).xyz;
    vec3 rescol = samcol;
    FragColor = vec4(rescol, 1);

Your fragment shader declares 2 texture samplers but only uses 1 (screenTexture) in the output. So the texture sample of overTexture and all code associated with it will be discarded by the compiler.


We don’t know for sure what GL calls you’re making. But we can guess that you’re setting texunit “1” on the overTexture sampler (e.g. via glUniform1i()). There’s no similar setting for screenTexture, so we can only guess that you’re letting it default to texunit “0”.
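For illustration, setting a sampler uniform typically looks something like this (a sketch only — `program` and what your setUniformSampler() wrapper actually does are assumptions):

```
// Assumed sketch of what setUniformSampler("overTexture", 1) might expand to:
GLint loc = glGetUniformLocation(program, "overTexture");
glUniform1i(loc, 1);   // overTexture samples from texture unit 1

// Nothing equivalent is done for "screenTexture", so its sampler
// uniform keeps its default value of 0 (texture unit 0).
```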

If so, then your frag shader expects screenTexture, the only texture that’s used in your fragment shader, to be bound to texunit 0 when the shader is rendered.

So let’s see if you did that.

During FBO setup, you say that you did this:

glBindTexture(GL_TEXTURE_2D, textureColorbuffer);
...
glBindTexture(GL_TEXTURE_2D, 0);

What you didn’t say is which texture unit was active when you did this. We’ll presume texunit 0 (i.e. glActiveTexture( GL_TEXTURE0 )) … but we really have no idea. If texunit 0, then that final unbind leaves no 2D texture bound to texunit 0.

Then for rendering, you say that you have the draw snippet above, along with overTexture.bind(1) and overTexture.unbind(1) calls.

You didn’t say what texunit was active at the beginning of this snippet either. Moreover, we don’t know what GL calls are buried behind overTexture.bind(1) and overTexture.unbind(1). So we really can’t say what textures are bound to what texture units for the duration of this draw call (and associated shader executions), nor what textures are left bound to what texunits at the end of this whole sequence.

Now were we to “guess” what overTexture.bind(1) did, it might be something like this:

glActiveTexture( GL_TEXTURE1 );
glBindTexture( GL_TEXTURE_2D, overTexture.glHandle );

If so, then you can see that when GL hits that next texture bind:

glBindTexture(GL_TEXTURE_2D, textureColorbuffer);

it’s gonna bind textureColorbuffer to texunit 1, in place of overTexture.

Moreover, nothing here has been bound to texunit 0. And if the FBO init code was the last time a texture was bound to texunit 0, there’s still “no 2D texture” bound to texunit 0. So we have:

  • TEXUNIT 0 = No valid 2D texture
  • TEXUNIT 1 = textureColorbuffer

What you wanted? Probably not. With these bindings, your shader then goes and tries to read the 2D texture bound to TEXUNIT 0 via screenTexture and … doesn’t work.

Anyway, this is just a guess. We don’t have all the relevant GL calls to know what’s going on here.

My guess is you meant for the texture bindings at draw call time to be:

  • TEXUNIT 0 = textureColorbuffer
  • TEXUNIT 1 = overTexture.glHandle
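If that’s the intent, a hedged sketch of the draw-time setup (handle names assumed from your snippets) would be:

```
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureColorbuffer);    // screenTexture -> unit 0
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, overTexture.glHandle);  // overTexture  -> unit 1
glDrawArrays(GL_TRIANGLES, 0, 6);
```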

If I guessed correctly, and if you find this context-state-specific GL API behavior less than ideal…

Consider using DSA APIs like glBindTextureUnit() (OpenGL 4.5+). These obviate the need to set an active texture unit when binding textures.
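With DSA, the same intended bindings become a one-liner each (again, a sketch assuming the handle names above):

```
glBindTextureUnit(0, textureColorbuffer);    // screenTexture -> unit 0
glBindTextureUnit(1, overTexture.glHandle);  // overTexture  -> unit 1
```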
