Blur a shadowmap

Hey,
I have a FBO with a shadow map texture and I wish to blur it.
What do I call glDraw* with, since I don’t really need to draw anything, just operate on the already existing fragments of the shadow map?
Also what would my vertex shader look like?

By “shadow map texture”, do you mean a depth-format texture? Or an intensity map obtained by comparing depth values?

Generate another texture, bind that to the FBO, and render a quad (two triangles) using the original texture as a source texture.
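For the vertex shader: since the quad’s positions are already in NDC, it just passes them through. A minimal sketch (attribute locations and names are assumptions, adapt them to your vertex layout):

```glsl
#version 330 core

layout(location = 0) in vec2 a_Position; // quad corner, already in NDC
layout(location = 1) in vec2 a_TexCoord;

out vec2 v_TexCoord;

void main()
{
    v_TexCoord = a_TexCoord;
    gl_Position = vec4(a_Position, 0.0, 1.0); // no matrices needed
}
```

The fragment shader then samples the source texture at v_TexCoord (plus the blur offsets) and writes the result to the bound attachment.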

If you have at least OpenGL 4.3, you can use a compute shader with image load/store, but this can’t be used with depth-format textures.
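For reference, a rough sketch of such a compute-shader blur over an RG32F image, roughly what a VSM map would use (binding points, group size and the 3-tap weights here are placeholder assumptions, not a full Gaussian kernel):

```glsl
#version 430 core
layout(local_size_x = 16, local_size_y = 16) in;

layout(rg32f, binding = 0) readonly  uniform image2D u_Src;
layout(rg32f, binding = 1) writeonly uniform image2D u_Dst;

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, imageSize(u_Src))))
        return;
    // 3-tap horizontal blur; out-of-range imageLoad returns zero
    vec2 c = 0.25 * imageLoad(u_Src, p + ivec2(-1, 0)).rg
           + 0.50 * imageLoad(u_Src, p).rg
           + 0.25 * imageLoad(u_Src, p + ivec2( 1, 0)).rg;
    imageStore(u_Dst, p, vec4(c, 0.0, 0.0));
}
```

A glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT) is needed before sampling the result.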

It’s a color attachment, since it’s for a variance shadow map.
Also, do I need to create an output texture and bind it to the FBO, or can I just use the same variance shadow map texture for both reading and writing? (I’m hoping a fragment shader run is flushed only once all fragments have finished, so there’s no weird behavior where the Gaussian-blurred value written for one fragment is read back by a later fragment shader invocation.)

Yes.

No. The behaviour is undefined if a texture level is both bound to the current framebuffer and available for reading by the shader.

If you want to modify the texture in-place you’d need to use image load/store (and detach the texture from the FBO).

That would require additional storage equal to the size of the texture level, which doesn’t happen. The only additional storage is a relatively small amount of cache (for both reads and writes), so the texture bound to the framebuffer will be modified during the draw call. The caches just mean that reading from a texture which is bound to the framebuffer is unpredictable: you may get the previous value, you may get the updated value. That’s why the behaviour is undefined.

Hey thanks for helping!

So right now I have an FBO set like so:

  • RBO GL_DEPTH_ATTACHMENT
  • Texture array (for each cascade) GL_COLOR_ATTACHMENT0 with format of RG32 for storing relevant information for the vsm (depth & depth^2).
  • Texture array (for each cascade) GL_COLOR_ATTACHMENT1 with format of RG32 for storing the result of the vertical Gaussian blur
  • Texture array (for each cascade) GL_COLOR_ATTACHMENT2 with format of RG32 for storing the result of the horizontal Gaussian blur

My render pass is as follows:

  • bind vsm shader
  • set viewport to the shadow-map size
  • bind the FBO
  • bind the RBO depth attachment to FBO
  • for each cascade bind the relevant texture array layer (the first texture array) to the FBO
  • clear the depth and color buffers
  • draw scene
  • bind vertical blur shader
  • bind the texture array (the first texture array) for sampling
  • for each cascade bind the relevant texture array layer (the second texture array) to the FBO
  • clear the color buffer
  • draw full screen quad (uses NDC coordinates so no need to disable depth test)
  • bind horizontal blur shader
  • bind the texture array (the second texture array; the texture array containing vertical blur) for sampling
  • for each cascade bind the relevant texture array layer (the third texture array) to the FBO
  • clear the color buffer
  • draw full screen quad (uses NDC coordinates so no need to disable depth test)
  • unbind FBO
  • set viewport to original size

Creating the FBO:

void DirectionalLightShadow::Setup(const ShadowSettings& settings)
{
    m_Settings = settings;
    if (settings.m_FilterMode != ShadowSettings::FilterMode::VSM)
    {
        m_FrameBuffer->AddTexutreArrayAttachment(
            new Texture2DArray(
                Texture2DArray::Texture2DArraySettings
                {
                    SHADOW_MAP_TEXTURE_UNIT, GL_DEPTH_COMPONENT32F, GL_DEPTH_COMPONENT, GL_FLOAT,
                    m_Settings.m_MapSize, m_Settings.m_MapSize, 1,
                    Texture::TextureFilterMode::NEAREST, Texture::TextureFilterMode::NEAREST,
                    Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER,
                    Color(1.0f, 1.0f, 1.0f, 1.0f), m_Settings.m_NumCascades
                }
        ), GL_DEPTH_ATTACHMENT, 0);
        m_FrameBuffer->DisableColorBuffer();
    }
    else
    {
        m_FrameBuffer->AddRenderBufferAttachment(new RenderBuffer(1, GL_DEPTH_COMPONENT32F, m_Settings.m_MapSize, m_Settings.m_MapSize), GL_DEPTH_ATTACHMENT);
        m_FrameBuffer->AddTexutreArrayAttachment(
            new Texture2DArray(
                Texture2DArray::Texture2DArraySettings
                {
                    SHADOW_MAP_TEXTURE_UNIT, GL_RG32F, GL_RG, GL_FLOAT,
                    m_Settings.m_MapSize, m_Settings.m_MapSize, 1,
                    Texture::TextureFilterMode::NEAREST, Texture::TextureFilterMode::NEAREST,
                    Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER,
                    Color(1.0f, 1.0f, 1.0f, 1.0f), m_Settings.m_NumCascades
                }
        ), GL_COLOR_ATTACHMENT0, 0);
        m_FrameBuffer->AddTexutreArrayAttachment(
            new Texture2DArray(
                Texture2DArray::Texture2DArraySettings
                {
                    SHADOW_MAP_TEXTURE_UNIT, GL_RG32F, GL_RG, GL_FLOAT,
                    m_Settings.m_MapSize, m_Settings.m_MapSize, 1,
                    Texture::TextureFilterMode::NEAREST, Texture::TextureFilterMode::NEAREST,
                    Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER,
                    Color(1.0f, 1.0f, 1.0f, 1.0f), m_Settings.m_NumCascades
                }
        ), GL_COLOR_ATTACHMENT1, 0);
        m_FrameBuffer->AddTexutreArrayAttachment(
            new Texture2DArray(
                Texture2DArray::Texture2DArraySettings
                {
                    SHADOW_MAP_TEXTURE_UNIT, GL_RG32F, GL_RG, GL_FLOAT,
                    m_Settings.m_MapSize, m_Settings.m_MapSize, 1,
                    Texture::TextureFilterMode::NEAREST, Texture::TextureFilterMode::NEAREST,
                    Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER, Texture::TextureWrapMode::CLAMP_TO_BODER,
                    Color(1.0f, 1.0f, 1.0f, 1.0f), m_Settings.m_NumCascades
                }
        ), GL_COLOR_ATTACHMENT2, 0);
    }
}
void ShadowMapRendererSystem::Update(std::vector<std::vector<std::reference_wrapper<Entity>>> entities, ECSManager* ecs)
{
    for (auto& lightEntity : entities[0])
    {
        DirectionalLight& light = ecs->GetComponent<DirectionalLight>(lightEntity);
        if (!light.m_CastShadows)
            continue;

        Camera& camera = ecs->GetComponent<Camera>(entities[1][0]);

        auto& shadow = light.m_Shadow;
        Shader& shader = shadow->GetSettings().m_FilterMode == ShadowSettings::FilterMode::PCF3x3 ? m_PCFShader : m_VSMShader;
        shader.Bind();

        RendererCommands::SetViewport(shadow->GetSettings().m_MapSize, shadow->GetSettings().m_MapSize);

        UniformMat4f projectionUniform("u_Projection");
        UniformMat4f viewUniform("u_View");
        UniformMat4f modelUniform("u_Model");

        viewUniform.SetValues(light.GetView());
        shader.SetUniform(viewUniform);

        Uniform1f nearUniform("u_Near");
        Uniform1f farUniform("u_Far");

        nearUniform.SetValues({ camera.GetNear() });
        farUniform.SetValues({ camera.GetFar() });

        if (shadow->GetSettings().m_FilterMode == ShadowSettings::FilterMode::VSM)
        {
            shader.SetUniform(nearUniform);
            shader.SetUniform(farUniform);
            shadow->GetFrameBuffer()->BindRenderBufferAttachmentWrite(0);
        }

        for (size_t i = 0; i < light.m_Shadow->GetSettings().m_NumCascades; i++)
        {
            shadow->GetFrameBuffer()->BindTextureArrayAttachmentWrite(0, i);
            if (shadow->GetSettings().m_FilterMode == ShadowSettings::FilterMode::VSM)
            {
                RendererCommands::Clear(FrameBufferTypes::COLOR | FrameBufferTypes::DEPTH);
            }
            else
            {
                RendererCommands::Clear(FrameBufferTypes::DEPTH);
            }

            projectionUniform.SetValues(light.GetProjection(camera, i));
            shader.SetUniform(projectionUniform);

            for (auto& meshEntity : entities[2])
            {
                MeshRendererComponent& meshRenderer = ecs->GetComponent<MeshRendererComponent>(meshEntity);
                if (meshRenderer.m_CastShadows)
                {
                    Model* model = ecs->GetComponent<MeshFilterComponent>(meshEntity).m_Model;
                    Transform& transform = ecs->GetComponent<Transform>(meshEntity);

                    modelUniform.SetValues(transform.m_ModelMatrix);
                    shader.SetUniform(modelUniform);

                    for (auto& mesh : model->GetMeshes())
                    {
                        mesh.GetVertexArray()->Bind();
                        if (mesh.GetVertexArray()->GetIndexBuffer().GetID() != 0)
                            glDrawElements(GL_TRIANGLES, mesh.GetVertexArray()->GetIndexBuffer().GetCount(), GL_UNSIGNED_INT, nullptr);
                        else
                            glDrawArrays(GL_TRIANGLES, 0, mesh.GetVertexArray()->GetVertexBuffer().GetSize() / mesh.GetVertexArray()->GetVertexBuffer().GetLayout().GetStride());
                    }
                }
            }
        }

        if (shadow->GetSettings().m_FilterMode == ShadowSettings::FilterMode::VSM)
        {
            float quadVertices[] = {
                -1.0f,  1.0f,    0.0f, 1.0f,
                -1.0f, -1.0f,    0.0f, 0.0f,
                 1.0f, -1.0f,    1.0f, 0.0f,

                -1.0f,  1.0f,    0.0f, 1.0f,
                 1.0f, -1.0f,    1.0f, 0.0f,
                 1.0f,  1.0f,    1.0f, 1.0f
            };
            std::shared_ptr<VertexArray> fbQuadVa = std::make_shared<VertexArray>();
            VertexBufferLayout fbQuadVbl;
            fbQuadVbl.Push({ GL_FLOAT, 2, sizeof(float) * 2, GL_FALSE });
            fbQuadVbl.Push({ GL_FLOAT, 2, sizeof(float) * 2, GL_FALSE });
            VertexBuffer fbQuadVb(quadVertices, sizeof(quadVertices), fbQuadVbl);
            fbQuadVa->SetVertexBuffer(std::move(fbQuadVb));
            std::shared_ptr<Material> fbQuadMat = std::make_shared<Material>();
            Mesh fbQuadMesh(fbQuadVa, fbQuadMat);

            fbQuadMesh.GetVertexArray()->Bind();

            m_GuassianBlurVShader.Bind();

            light.m_Shadow->GetFrameBuffer()->BindTextureAttachmentRead(0);

            Uniform1i textureUniform("u_Texture");
            textureUniform.SetValues({ (int)light.m_Shadow->GetFrameBuffer()->GetTextureAttachments()[0].second->m_Unit });
            m_GuassianBlurVShader.SetUniform(textureUniform);

            Uniform1f sigmaUniform("u_Sigma");
            sigmaUniform.SetValues({ 1.0f });
            m_GuassianBlurVShader.SetUniform(sigmaUniform);

            Uniform1f muUniform("u_Mu");
            muUniform.SetValues({ 0.0f });
            m_GuassianBlurVShader.SetUniform(muUniform);

            for (size_t i = 0; i < light.m_Shadow->GetSettings().m_NumCascades; i++)
            {
                shadow->GetFrameBuffer()->BindTextureArrayAttachmentWrite(1, i);
                RendererCommands::Clear(FrameBufferTypes::COLOR);

                Uniform1i depthUniform("u_Depth");
                depthUniform.SetValues({ (int)i });
                m_GuassianBlurVShader.SetUniform(depthUniform);

                glDrawArrays(GL_TRIANGLES, 0, fbQuadMesh.GetVertexArray()->GetVertexBuffer().GetSize() / fbQuadMesh.GetVertexArray()->GetVertexBuffer().GetLayout().GetStride());
            }

            m_GuassianBlurHShader.Bind();

            light.m_Shadow->GetFrameBuffer()->BindTextureAttachmentRead(1);

            textureUniform.SetValues({ (int)light.m_Shadow->GetFrameBuffer()->GetTextureAttachments()[1].second->m_Unit });
            m_GuassianBlurHShader.SetUniform(textureUniform);

            m_GuassianBlurHShader.SetUniform(sigmaUniform);

            m_GuassianBlurHShader.SetUniform(muUniform);

            for (size_t i = 0; i < light.m_Shadow->GetSettings().m_NumCascades; i++)
            {
                shadow->GetFrameBuffer()->BindTextureArrayAttachmentWrite(2, i);
                RendererCommands::Clear(FrameBufferTypes::COLOR);

                Uniform1i depthUniform("u_Depth");
                depthUniform.SetValues({ (int)i });
                m_GuassianBlurHShader.SetUniform(depthUniform);

                glDrawArrays(GL_TRIANGLES, 0, fbQuadMesh.GetVertexArray()->GetVertexBuffer().GetSize() / fbQuadMesh.GetVertexArray()->GetVertexBuffer().GetLayout().GetStride());
            }
        }

        shadow->GetFrameBuffer()->Unbind();
    }

    RendererCommands::SetViewport(Screen::GetWidth(), Screen::GetHeight());
}

Gaussian blur shader:

#version 330 core

out vec4 color;

in VS_OUT
{
    vec2 v_TexCoord;
} fs_in;

uniform sampler2DArray u_Texture;
uniform int u_Depth;
uniform float u_Sigma;
uniform float u_Mu;

#define K 1

#define PI 3.1415926535897932384626433832795

float gaussianDistribution(int x, float mu, float sigma)
{
    float d = x - mu;
    return exp(-d * d / (2 * sigma * sigma)) / (sqrt(2 * PI) * sigma);
}

void main()
{
    const int rows = 2 * K + 1;
    float kernel[rows];

    float sum = 0.0f;

    int count = rows / 2 + 1;
    for (int i = 0; i < count; i++)
    {
        float distribution = gaussianDistribution(count - i - 1, u_Mu, u_Sigma);
        kernel[i] = distribution;
        if (i != count - 1)
        {
            kernel[rows - (i + 1)] = distribution;
            sum += distribution + distribution;
        }
        else
        {
            sum += distribution;
        }
    }

    for (int i = 0; i < count; i++)
    {
        kernel[i] /= sum;
        if (i != count - 1)
        {
            kernel[rows - (i + 1)] = kernel[i];
        }
    }

    float offset = textureSize(u_Texture, 0).y;
    vec2 offsets[] = vec2[](
        vec2(0.0f, -offset), // top
        vec2(0.0f, 0.0f),    // center
        vec2(0.0f, offset)  // bottom
    );

    vec3 sampleTex[rows];
    for (int i = 0; i < rows; i++)
        sampleTex[i] = vec3(texture(u_Texture, vec3(fs_in.v_TexCoord.st + offsets[i], u_Depth)));

    vec3 col = vec3(0.0);
    for (int i = 0; i < rows; i++)
        col += sampleTex[i] * kernel[i];

    //color = vec4(col, 1.0);
    color = vec4(1.0, 1.0, 0.0, 1.0);
}

Display debug texture shader:

#version 330 core

struct Material
{
    sampler2DArray diffuse;
};

out vec4 color;

in VS_OUT
{
    vec2 v_TexCoord;
} fs_in;

uniform Material u_Material;
void main()
{
    color = vec4(vec3(texture(u_Material.diffuse, vec3(fs_in.v_TexCoord, 0)).r), 1.0f);
}

Now if I output the result of the first sub-pass (where I draw the scene) into a debug texture, I can see the results are correct: the red channel shows the correct depth values and the green channel shows the correct depth^2 values:

But for some reason, when I output the result of the second/third sub-pass I see wrong output (not blurred, and it seems like the wrong cascade view):

I even tried setting the fragment shader of the blur to just outColor = vec4(1.0, 0.0, 0.0, 1.0)
to debug it, but it still displays the wrong result, and I’m sure the code for picking which texture array to display in the debug texture is correct; I just change an index to do so.
So I’m not sure what’s wrong here, but something isn’t working.

Would appreciate help spotting the culprit code :pray:

So disabling the depth test before the Gaussian blur sub-pass and re-enabling it afterwards solves the problem of the incorrect blur texture result. The image captured of the blur texture was the view from the last cascade. I’m not sure why I have to do this, since the texture quad is in NDC with 0.0f as its z value, but I just disable the depth test before blurring (I could also have cleared the depth buffer after the main loop, I guess).

After doing that I wanted to check whether the blurring is correct, but it seems it only whitens the image a bit after the vertical blur pass and whitens it even more after the horizontal pass. I’m not sure why yet, but I guess it has something to do with the blur shader; any help would be appreciated. A gif showing the debug texture of the depth values, then the vertically blurred values, then the horizontally blurred values (showing only the red channel, i.e. depth, not depth^2):

One thing I notice is that the offsets are wrong; they should be 1/size (for a 1-texel offset). But you should probably be using texelFetch here.

Also, I’d suggest calculating the kernel in the application and storing it in a uniform array or 1D texture. There’s no reason to recalculate it for every fragment. That also makes it easier to check that the weights sum to one.
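For example, building the normalized kernel on the CPU is only a few lines; a sketch (function name and parameters are illustrative, and mu is taken as 0):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel of size 2*K + 1 (mu = 0).
std::vector<float> GaussianKernel(int K, float sigma)
{
    std::vector<float> kernel(2 * K + 1);
    float sum = 0.0f;
    for (int i = -K; i <= K; ++i)
    {
        // The constant 1/(sqrt(2*pi)*sigma) cancels out in the normalization.
        float w = std::exp(-(i * i) / (2.0f * sigma * sigma));
        kernel[i + K] = w;
        sum += w;
    }
    for (float& w : kernel)
        w /= sum; // weights now sum to 1
    return kernel;
}
```

The result can then be uploaded once with glUniform1fv rather than recomputed for every fragment.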

Thanks for spotting this! I wouldn’t have found the problem on my own; I’ve been sitting on this for days and needed someone from the outside to look at it. :slight_smile:
Also thanks for the tip about texelFetch. I’ll leave it as texture for now, because I want to keep the required OpenGL version to a minimum, and since I use nearest filtering with offsets that are only multiples of a texel size, that should be fine for now.

Your second suggestion would also solve the problem that K is currently a compile-time constant: I can’t bring it in via a uniform, since I can’t create a dynamically sized array in GLSL. But if I calculate the kernel on the CPU and send it over, then the “blur shader” (or any other filter shader, for that matter) just becomes a generic “kernel filter shader”, where the logic of the filter lives on the CPU and only the multiplication is done in the shader. I don’t really like that approach: if someone looks at code that uses a shader named “gaussian_blur.shader”, they immediately understand what it does, whereas with “filter.shader” they’d first need to read the kernel-building code to understand. What are your thoughts on this subject?

Thanks in advance!
btw forgot to mention now it blurs correctly :slight_smile:

If you consider it essential that the shader does Gaussian blur and nothing else, I’d suggest writing a program which takes a generic “convolution filter” template, fills in a “const float kernel[] = {...};” with values, and writes out the specialised shader (e.g. “gaussian_3x1.glsl”).
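A hypothetical sketch of that specialisation step, assuming the template marks the insertion point with a {{KERNEL}} placeholder (the placeholder convention and function name are made up for illustration):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Replace a {{KERNEL}} placeholder in a shader template with a literal
// const array containing the precomputed filter weights.
std::string SpecializeShader(const std::string& tmpl, const std::vector<float>& kernel)
{
    std::ostringstream decl;
    decl << "const float kernel[" << kernel.size() << "] = float[](";
    for (size_t i = 0; i < kernel.size(); ++i)
        decl << kernel[i] << (i + 1 < kernel.size() ? ", " : ");");

    std::string out = tmpl;
    const std::string placeholder = "{{KERNEL}}";
    size_t pos = out.find(placeholder);
    if (pos != std::string::npos)
        out.replace(pos, placeholder.size(), decl.str());
    return out;
}
```

The generated file keeps a self-documenting name (gaussian_3x1.glsl) while the generic template stays reusable.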

Okay thanks for the suggestion!
Will implement it eventually this way!

Also if we’re on the topic of VSM in general maybe you could clear out something for me.
For VSM to work we need E[x] and E[x^2], which we supposedly get after filtering the image, for example by blurring the texture.
But won’t filtering the texture with a Gaussian kernel fail to produce E[x] and E[x^2], since it’s a distributed weighted average and not a plain average?

A weighted average just means that the probabilities used in the calculation of E[] aren’t uniform. E[x]=sum(x*P(x)). Applying a Gaussian blur means that values closer to the pixel in question are more likely than those farther away. It’s equivalent to averaging a set of samples where the sampling is denser closer to the centre.

So long as you use the same kernel for blurring both depth and depth^2, the formula for the variance is still applicable.
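Concretely, the lighting pass then derives the variance from the two blurred moments and plugs it into Chebyshev’s inequality. A C++ sketch of that arithmetic (the minimum-variance clamp is the usual numerical-robustness trick; the 1e-4 value is an assumption):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// VSM shadow test: given blurred moments E[x] and E[x^2] from the shadow map
// and the receiver depth t, bound the probability that the surface is lit.
float ChebyshevUpperBound(float Ex, float Ex2, float t, float minVariance = 1e-4f)
{
    if (t <= Ex)
        return 1.0f; // receiver no farther than the mean occluder depth: fully lit
    float variance = std::max(Ex2 - Ex * Ex, minVariance);
    float d = t - Ex;
    return variance / (variance + d * d);
}
```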


Okay, I’m not sure what to expect, but even with a blur kernel size of 21 the shadows have the same blocky artifact as PCF up close. Should I expect this, or might something be wrong?

My VSM implementation is derived from here up until the point (including) light bleeding reduction.

Is that just the pixels in the shadow map? Are you sampling the (processed) shadow map with linear or nearest filtering?

Yea, I was sampling with nearest and switched to linear. I’m not sure I understand how this works, though. Shouldn’t linear sampling affect Chebyshev’s inequality, so that instead of
getting the probability of a fragment in the area having a larger depth value than the inspected fragment
it becomes
getting the probability of a fragment in the area having a larger depth value than the inspected fragment’s linearly interpolated depth?
What’s the meaning of that?

It depends upon whether you’re talking about using linear filtering for the Gaussian blur or linear filtering for sampling the blurred texture. The blur filter should be using nearest-neighbour (or texelFetch). The final render should be using linear filtering.

Whatever you do, the end result is going to be an approximation.

Yea, I’m talking about linear filtering for sampling the blurred texture. Can you explain it? On another note: I tried to change texture to texelFetch in the blur shaders as you suggested, and I couldn’t get it to work. Basically I tried to change this:

sampleTex[i] = vec3(texture(u_Texture, vec3(fs_in.v_TexCoord.st + vec2(0.0, (i + 1 - count) * offset), u_Depth)));

to this:

sampleTex[i] = vec3(texelFetch(u_Texture, vec3((fs_in.v_TexCoord.st + vec2(0.0, (i + 1 - count) * offset) * vec2(textureSize(u_Texture, 0).y - 1)), u_Depth)));

I figured it’s better to use texelFetch because, if I managed it, I could get rid of the two extra texture arrays used for the horizontal and vertical blurs and use only one extra texture, ping-ponging between the shadow map and that texture to create the final Gaussian blur. Without texelFetch I’d have to leave the first texture’s filtering as linear (for when the shadow map is sampled to find the shadow factor), thus making the blur “not correct”, as it would not use nearest filtering.

What’s to explain? Linear filtering will produce a closer approximation to the desired value than nearest-neighbour filtering.

As for texelFetch: you shouldn’t be dividing by the size then multiplying again. For a “post-process” type filter, you don’t need texture coordinates; you can just use gl_FragCoord.xy, which is in pixels. So e.g.

vec4 sample = texelFetch(u_Texture, ivec3(gl_FragCoord.x, gl_FragCoord.y + (i + 1 - count), u_Depth));

What I don’t understand is: with nearest filtering, Chebyshev’s inequality basically gives the probability of a fragment in the area having a larger depth value than the inspected fragment,
but with linear filtering it becomes the probability of a fragment in the “interpolated area” having a larger depth value than the inspected fragment. It’s not intuitive to me that linear interpolation returns interpolated E[x] and E[x^2] values, or how that works with Chebyshev’s inequality.

About texelFetch: I tried to make it general, in the sense that if the shader isn’t used only for post-processing then I can’t use gl_FragCoord as the UVs. That’s why I tried to take the interpolated UVs and multiply them by the texture size to get unnormalized coordinates for texelFetch. How can I do this then?
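For what it’s worth, the general recipe is exactly that: scale the UV by the texture size and truncate, clamping to guard the uv == 1.0 edge. A per-axis sketch of the arithmetic in C++ (in GLSL the equivalent would be something like ivec2(fs_in.v_TexCoord * vec2(textureSize(u_Texture, 0).xy)), clamped the same way):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Map a normalized coordinate in [0, 1] to an integer texel index in [0, size - 1].
int TexelFromUV(float uv, int size)
{
    int t = static_cast<int>(std::floor(uv * static_cast<float>(size)));
    return std::clamp(t, 0, size - 1); // uv == 1.0 would otherwise index one past the end
}
```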