Multipass rendering with framebuffer

I’m trying to implement a multipass render using framebuffers: first I need to render the scene to a texture with one shader, then use the resulting texture as input to a second shader and render with that one.
What I have done is the following.
I created a framebuffer and an empty texture, and attached the texture to the framebuffer:

glGenFramebuffers(1, &this->_id);
glBindFramebuffer(GL_FRAMEBUFFER, this->_id);

// create a color attachment texture
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
glTexImage2D(
                GL_TEXTURE_2D, 
                0, 
                this->_internalTextureFormat, 
                _frameBufferSize.width, 
                _frameBufferSize.height, 
                0, 
                this->_textureFormat, 
                this->_pixelValueType, 
                NULL
            );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texID, 0);
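(As a sanity check I also verify that the framebuffer is complete right after attaching the texture; this is just a minimal sketch using the bindings from the code above:)

// with the FBO still bound, verify completeness before rendering to it
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    std::cout << "Framebuffer incomplete, status: " << status << std::endl; // needs <iostream>

// unbind so subsequent rendering goes to the default framebuffer again
glBindFramebuffer(GL_FRAMEBUFFER, 0);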

Now, from what I understood, everything I render while this framebuffer is bound gets rendered onto the attached texture.
So in the main loop I do:

glBindFramebuffer(GL_FRAMEBUFFER, this->_id);
glViewport(0, 0, this->_frameBufferSize.width, this->_frameBufferSize.height);
glEnable(GL_DEPTH_TEST); // enable depth testing (is disabled for rendering screen-space quad)

glClearColor(refreshColor.coordinates.x, refreshColor.coordinates.y, refreshColor.coordinates.z, refreshColor.coordinates.w);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

RenderScene();

With this code I rendered the scene to the texture, so the texture now contains the scene.
Now I’m not sure what to do next. I tried to implement something like this, right after RenderScene():

tfmShader->use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDisable(GL_DEPTH_TEST); // disable depth test so screen-space quad isn't discarded due to depth test.
glViewport(0, 0, _frameBufferSize.width, _frameBufferSize.height);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); 
glClear(GL_COLOR_BUFFER_BIT);

In “tfmShader” I implemented a simple filter that is applied to the screen texture with id texID.
Then, after binding the default framebuffer, I render the screen quad with a shader that simply displays the result:

this->_frameBufferShader->use();
glBindVertexArray(_screenQuadVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
glDrawArrays(GL_TRIANGLES, 0, 6);

What I get from this is the scene rendered from the texture, but without the filter from the first shader applied.
I’m not sure what I need to do to apply a filter to the texture and then use the result as input to another shader.

I also tried using two textures and binding/unbinding them to the framebuffer, but nothing worked as expected.

Thanks

Well, you say you’re having problems getting your shaders to be applied, but you don’t show the GL code you’re using to set them up (creating them, binding them to the context, populating uniforms – in particular the sampler uniform), nor your shader source code. So readers don’t really have much to go on here. Though it does sound like you’ve at least got “some” shader working.

That said, I don’t see any draw calls issued when you have tfmShader active. That would be what applies the tfmShader to render content to “something”. In this case, to the window. It appears that, before you actually do this, you bind another shader in place of it (this->_frameBufferShader) which is what’s actually used in the subsequent glDrawArrays() call to render the content to the window framebuffer. Perhaps you want to get rid of this last shader bind and just use the tfmShader bind?
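In outline, the second half of your frame could look roughly like this (just a sketch reusing the names from your snippets, not tested):

// 1st pass: render the scene into texID through the FBO (as you already do)
glBindFramebuffer(GL_FRAMEBUFFER, this->_id);
glViewport(0, 0, _frameBufferSize.width, _frameBufferSize.height);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
RenderScene();

// 2nd pass: back to the window; the *filter* shader is the one that draws the quad
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDisable(GL_DEPTH_TEST);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

tfmShader->use();                     // bind the filter program...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);  // ...feed it the scene texture...
glBindVertexArray(_screenQuadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);     // ...and issue the draw call that actually applies it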

You are right, sorry for the bad question; it’s only the second question I’ve ever asked on a forum and I’m still new at this.

So, this is where I create the two shaders and set the uniforms (this is done before all the code shown in the previous post):

_frameBufferShader = new Odysseus::Shader(".\\Shader\\frameBufferShader.vert", ".\\Shader\\frameBufferShader.frag");
this->tfmShader = new Odysseus::Shader(".\\Shader\\frameBufferShader.vert", ".\\Shader\\tfmShader.frag");

_frameBufferShader->use();
_frameBufferShader->setInt("screenTexture", 0);

tfmShader->use();
tfmShader->setInt("texture1", 0);
initializeScreenQuad();

The use() function calls glUseProgram(), and the setInt() function sets the uniforms like this:

void Shader::setInt(const std::string &name, int value) const
{ 
    glUniform1i(glGetUniformLocation(ID, name.c_str()), value); 
}

The initializeScreenQuad() method simply creates and sets up the screen quad VAO and VBO like this:

void FrameBuffer::initializeScreenQuad()
{
    const float screenVertices[] = {
        // positions   // texCoords
        -0.3f,  1.0f,  0.0f, 1.0f,
        -0.3f,  0.7f,  0.0f, 0.0f,
         0.3f,  0.7f,  1.0f, 0.0f,

        -0.3f,  1.0f,  0.0f, 1.0f,
         0.3f,  0.7f,  1.0f, 0.0f,
         0.3f,  1.0f,  1.0f, 1.0f
    };

    glGenVertexArrays(1, &this->_screenQuadVAO);
    glGenBuffers(1, &this->_screenQuadVBO);
    glBindVertexArray(this->_screenQuadVAO);
    glBindBuffer(GL_ARRAY_BUFFER, this->_screenQuadVBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(screenVertices), screenVertices, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
}

The frameBufferShader.vert code (the vertex shader I use for both programs) is:

#version 450 core

layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoords;

out vec2 Frag_UV;

//This is set in a framebuffer callback method used while rendering
//the gui, but it gets set correctly
uniform mat4 projection;

void main()
{
    Frag_UV = aTexCoords;
    gl_Position = projection * vec4(aPos.xy, 0.0, 1.0); 
}

The fragment shader of the first program (tfmShader) is:

#version 450 core

out vec4 fragColor;
in vec2 Frag_UV;

uniform sampler2D texture1;

void main()
{
    fragColor = vec4(vec3(1.0) - texture(texture1, Frag_UV).rgb, 1.0);
}

So what I want here is just to render the screen texture with inverted colors.
The fragment shader of the second program (frameBufferShader) is:

#version 450 core

out vec4 fragColor;
  
in vec2 Frag_UV;

uniform sampler2D screenTexture;
void main()
{ 
    fragColor = vec4(texture(screenTexture, Frag_UV).rgb, 1.0);
}

This is a really trivial example: what I want to obtain as the output of the frameBufferShader is the inverted image, i.e. with the tfmShader filter already applied.

I changed the code a bit and now do this after the RenderScene() call:

...

RenderScene();

//Here I try to write to the screen texture using the tfmShader (???)
tfmShader->use();
glBindVertexArray(_screenQuadVAO);
//Here I bind texID to texture unit 0, which the sampler2D uniform points at
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
glDrawArrays(GL_TRIANGLES, 0, 6);
  
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDisable(GL_DEPTH_TEST); // disable depth test so screen-space quad isn't discarded due to depth test.
glViewport(0, 0, _frameBufferSize.width, _frameBufferSize.height);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); 
glClear(GL_COLOR_BUFFER_BIT);
  
this->_frameBufferShader->use();
glBindVertexArray(_screenQuadVAO);
//Here I bind texID to texture unit 0, which the sampler2D uniform points at
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);
glDrawArrays(GL_TRIANGLES, 0, 6);

This still doesn’t work, but I think I’m just confused.

The core of what I’m trying to do is to use two shaders: one that applies a filter to a texture, and a second one that takes the filtered texture as input to perform further work.
Maybe it’s not the correct approach, but this is what came to mind.
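From what I’ve read, one way to structure this seems to be “ping-ponging” between two framebuffers/textures, so that each pass reads from the texture the previous pass wrote to and never reads and writes the same texture at once. This is only a rough sketch of what I have in mind, where _fboB and texB are placeholder names for a second framebuffer and color texture created exactly like the first ones:

// pass 1: render the scene into texID (framebuffer A), as before
glBindFramebuffer(GL_FRAMEBUFFER, this->_id);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
RenderScene();

// pass 2: apply the filter, reading texID and writing into texB (framebuffer B)
glBindFramebuffer(GL_FRAMEBUFFER, _fboB);           // hypothetical second FBO
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
tfmShader->use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texID);                 // input: unfiltered scene
glBindVertexArray(_screenQuadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);

// pass 3: draw the filtered result to the window, reading texB
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
_frameBufferShader->use();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texB);                  // input: filtered scene
glBindVertexArray(_screenQuadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);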
