gl_FragCoord.z not giving the expected values

Hi, I've written a shader which writes the fragment's z to a texture, because if I attach a depth texture to my FBO it doesn't work:


const std::string vertexShader =
    "#version 130 \n"
    "out mat4 projMat;"
    "void main () {"
        "gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"
        "gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;"
        "gl_FrontColor = gl_Color;"
        "projMat = gl_ProjectionMatrix;"
    "}";

const std::string depthGenFragShader =
    "#version 130 \n"
    "in mat4 projMat;"
    "uniform sampler2D texture;"
    "uniform float haveTexture;"
    "void main () {"
        "vec4 texel = texture2D(texture, gl_TexCoord[0].xy);"
        "vec4 colors[2];"
        "colors[1] = texel * gl_Color;"
        "colors[0] = gl_Color;"
        "bool b = (haveTexture > 0.9);"
        "vec4 color = colors[int(b)];"
        "float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;"
        "gl_FragColor = vec4(0, 0, z, color.a);"
    "}";



I've written another shader that discards the closest fragments.


const std::string frameBufferGenFragShader =
    "#version 130 \n"
    "uniform sampler2D depthBuffer;"
    "uniform sampler2D texture;"
    "uniform vec3 resolution;"
    "uniform float haveTexture;"
    "in mat4 projMat;"
    "void main () {"
        "vec2 position = (gl_FragCoord.xy / resolution.xy);"
        "float max_z = texture2D(depthBuffer, position).z;"
        "vec4 texel = texture2D(texture, gl_TexCoord[0].xy);"
        "vec4 colors[2];"
        "colors[1] = texel * gl_Color;"
        "colors[0] = gl_Color;"
        "bool b = (haveTexture > 0.9);"
        "vec4 color = colors[int(b)];"
        "float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;"
        "colors[1] = color;"
        "colors[0] = vec4(0, 0, 0, 0);"
        "b = (z < max_z);"
        "gl_FragColor = colors[int(b)];"
    "}";



But it's not working: some of the closest fragments are still written to the texture. What is gl_FragCoord.z: the z in window space (between 0 and 1), or the result of multiplying the interpolated vertex by the modelViewProjectionMatrix?

And it seems there isn't enough space to write z into a GL_RGBA8 texture.

But I use exactly the same technique to test whether a light fragment is behind an object or not, and there it works.
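
For reference, a single 8-bit channel only gives 256 distinct depth values. One common workaround (just a sketch, not the code used above; the helper names are made up) is to spread the depth over all four channels of the RGBA8 target and rebuild it when sampling:


// Hypothetical GLSL helpers for an RGBA8 colour target: packDepth() spreads a
// depth value in [0, 1] across the four 8-bit channels, unpackDepth() rebuilds it.
const std::string depthPackHelpers = R"(
    vec4 packDepth(float depth) {
        vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * depth;
        enc = fract(enc);
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }
    float unpackDepth(vec4 rgba) {
        return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }
)";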

OK, this has nothing to do with your problem, but FYI:

Any recent version of C++ has a really useful language feature for sticking multi-line strings into your code. They’re called “raw string literals”. You really should use them for inline shaders and the like.

All those spurious quotation marks make it hard to read your actual shader text.
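
For instance, the vertex shader from the first post could be written like this (purely a formatting sketch; the GLSL itself is unchanged):


// The same vertex shader as a C++11 raw string literal: no per-line quotes
// and no escaped newlines are needed, and preprocessor directives still work.
const std::string vertexShader = R"(#version 130
    out mat4 projMat;
    void main () {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
        gl_FrontColor = gl_Color;
        projMat = gl_ProjectionMatrix;
    }
)";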

Normally I should get something like this for the different textures:
[attached image: expected result for the different textures]

Pass 1: I simply draw the nearest sprites with a depth test. => OK.
Pass 2: if z >= max_z, it means the fragment is on the top layer, so I set its color to transparent; it then fails the alpha test and is ignored (not written to the front buffer or to the depth texture, and OpenGL skips the depth test for it), which should give me the second layer.
I needed to add 0.002 to z, because if I compare the z of a sprite with the z of that same sprite (read from the depth texture) the test passes. So there is a precision problem of about 0.002!


"colors[1] = color;"
                    "colors[0] = vec4(0, 0, 0, 0);"
                    "b = ((z + 0.002) < max_z);"
                    "gl_FragColor = colors[int(b)];"

I update the front buffer and the depth buffer, and then I perform alpha blending:


// First pass: draw every instance into the front buffer, telling the shader
// whether the instance has a texture.
frontBuffer.clear(sf::Color::Transparent);
for (unsigned int i = 0; i < m_instances.size(); i++) {
    if (m_instances[i].getMaterial().getTexture() == nullptr) {
        frameBufferGenerator.setParameter("haveTexture", 0);
    } else {
        frameBufferGenerator.setParameter("haveTexture", 1);
    }
    currentStates.texture = m_instances[i].getMaterial().getTexture();
    frontBuffer.draw(m_instances[i].getAllVertices(), currentStates);
}
// Second pass: update the depth texture.
frameBufferGenerator.setParameter("haveTexture", 1);
for (unsigned int i = 0; i < m_instances.size(); i++) {
    currentStates.texture = &depthBuffer.getTexture();
    depthBuffer.draw(m_instances[i].getAllVertices(), currentStates);
}
depthBuffer.display();
frontBuffer.display();
// Composite the front buffer into the frame buffer with destination-alpha blending.
frontBufferSprite.setCenter(view.getPosition());
currentStates.shader = nullptr; /*&simpleShader;
simpleShader.setParameter("haveTexture", 1);*/
currentStates.blendMode = sf::BlendMode(sf::BlendMode::Factor::OneMinusDstAlpha, sf::BlendMode::Factor::DstAlpha, sf::BlendMode::Add, sf::BlendMode::Factor::OneMinusDstAlpha, sf::BlendMode::Factor::One, sf::BlendMode::Equation::Add);
frameBuffer.draw(frontBufferSprite, currentStates);

I don't clear the depthTexture, because otherwise I would have to copy it, which isn't performant; I simply replace the pixels with no blending.

I do the same for the last layer, but to get the last layer I needed to add 0.008 to z this time!

"b = ((z + 0.008) < max_z);"


So the farther away the sprite is, the greater the precision problem! (The precision problem changes with the layer position, so I really don't know how to compute it; the farther the sprite, the bigger the problem. I'm using an orthographic projection and the z spacing between my sprites is constant:

Tile* t1 = new Tile(nullptr,Vec3f(0, 0, 0), Vec3f(100, 50, 0),sf::IntRect(0, 0, 100, 50),sf::Color(255, 0, 0,50));
Tile* t2 = new Tile(nullptr,Vec3f(50, 0, 10),Vec3f(100, 50, 0),sf::IntRect(0, 0, 100, 50),sf::Color(0,0,255,50));
Tile* t3 = new Tile(nullptr,Vec3f(25, 25, 20),Vec3f(100, 50, 0),sf::IntRect(0, 0, 100, 50),sf::Color(0, 255, 0, 50));



Why do I have two different z precision problems with an orthographic projection? Why is z so imprecise? And how can I compute this imprecision factor as a function of the layer's z position?

PS: This technique is slow when the number of sprites is high, because there are more layers and therefore more passes.
But it is the only way I've found to make order-independent transparency work.
A 3D texture would consume too much memory, and we cannot choose which layer to write to from the fragment shader; it has to be done on the CPU, so if the fragments of a single object don't all have the same z position it won't work.
And we cannot share memory between different fragment shader invocations, so I cannot write the fragment colors and positions to a container, sort them, and then redraw them in the right order.

Hmm, it seems that the gl_FragCoord.z interval changes depending on the z range between the nearest and the farthest fragment.

If there are three fragments, for example, the z values are 0.25, 0.5 and 0.75.
But when there are only two fragments (I discard the one at z = 0.75), it seems that the z values of the two remaining fragments become greater than 0.25 and 0.5.

How can I tell OpenGL not to do that? Or is there a way to convert non-linear z values to linear z values?

PS: or is it because sometimes I multiply by the inverse projection matrix and sometimes not?
But if I don't do that and simply write gl_FragCoord.z, it doesn't work.
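
For what it's worth: with an orthographic projection, gl_FragCoord.z is already a linear function of eye-space z, so no conversion should be needed there. With a standard perspective projection, eye-space distance can be recovered from the window-space depth roughly like this (a sketch only; the nearPlane/farPlane uniforms are assumptions, not part of the shaders above):


// Hypothetical GLSL helper: convert window-space depth back to eye-space
// distance for a standard perspective projection with known near/far planes.
const std::string linearizeDepthHelper = R"(
    uniform float nearPlane;   // assumed near-plane distance
    uniform float farPlane;    // assumed far-plane distance
    float linearizeDepth(float windowZ) {
        float ndcZ = windowZ * 2.0 - 1.0;   // window [0, 1] -> NDC [-1, 1]
        return (2.0 * nearPlane * farPlane) / (farPlane + nearPlane - ndcZ * (farPlane - nearPlane));
    }
)";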

No, the condition is always false here:


"float epsilons[2];"
                    "epsilons[0] = 0.005;"
                    "epsilons[1] = 0.01;"
                    "bool b2 = (gl_FragCoord.w != 1.f);"
                    "b = (z + epsilons[int(b2)] < max_z);"

So it's OpenGL that changes gl_FragCoord.z when fewer fragments are drawn on screen. This is really a problem: normally gl_FragCoord.z should stay the same if the near and far planes are not changed.

OK, I solved the problem by doing this:


"#version 130 
"
                "uniform sampler2D depthBuffer;"
                "uniform sampler2D texture;"
                "uniform vec3 resolution;"
                "uniform float haveTexture;"
                "in mat4 projMat;"
                "uniform float numLayer;"
                "void main () {"
                    "vec2 position = ( gl_FragCoord.xy / resolution.xy );"
                    "float max_z = texture2D(depthBuffer, position).z;"
                    "vec4 texel =  texture2D(texture, gl_TexCoord[0].xy);"
                    "vec4 colors[2];"
                    "colors[1] = texel * gl_Color;"
                    "colors[0] = gl_Color;"
                    "bool b = (haveTexture > 0.9);"
                    "vec4 color = colors[int(b)];"
                    "float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;"
                    "colors[1] = color;"
                    "colors[0] = vec4(0, 0, 0, 0);"
                    "float epsilon = 0.005 * numLayer;"
                    "b = (z + epsilon < max_z);"
                    "gl_FragColor = colors[int(b)];"
                "}";

I pass the layer number to the shader as a uniform and multiply the epsilon by this value.
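
On the C++ side this just means setting that uniform before each peeling pass; a minimal sketch (the layer counter and nbLayers here are hypothetical, not taken from the code above):


// Hypothetical outer loop: one peeling pass per layer, with the layer index
// passed to the shader so the epsilon grows with the layer.
for (unsigned int layer = 1; layer <= nbLayers; layer++) {
    frameBufferGenerator.setParameter("numLayer", static_cast<float>(layer));
    // ... draw the instances for this pass as in the code above ...
}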

I give up on this idea, OpenGL is full of bugs:


currentStates.shader = &frameBufferGenerator;
currentStates.blendMode = sf::BlendNone;
// Pass 1: draw every instance into the front buffer.
frontBuffer.clear(sf::Color::Transparent);
frameBufferGenerator.setParameter("transparent", 1.f);
for (unsigned int i = 0; i < m_instances.size(); i++) {
    if (m_instances[i].getMaterial().getTexture() == nullptr) {
        frameBufferGenerator.setParameter("haveTexture", 0);
    } else {
        frameBufferGenerator.setParameter("haveTexture", 1);
    }
    currentStates.texture = m_instances[i].getMaterial().getTexture();
    frontBuffer.draw(m_instances[i].getAllVertices(), currentStates);
}
// Pass 2: update the depth texture.
frameBufferGenerator.setParameter("transparent", 0.f);
frameBufferGenerator.setParameter("haveTexture", 1);
currentStates.texture = &depthBuffer.getTexture();
depthBuffer.setActive();
glCheck(glClear(GL_DEPTH_BUFFER_BIT));
//glCheck(glDisable(GL_ALPHA_TEST));
for (unsigned int i = 0; i < m_instances.size(); i++) {
    depthBuffer.draw(m_instances[i].getAllVertices(), currentStates);
}
//glCheck(glEnable(GL_ALPHA_TEST));
// Problem: the depthTexture is empty!!!
frontBuffer.display();
// Composite the front buffer into the frame buffer with destination-alpha blending.
frontBufferSprite.setCenter(view.getPosition());
currentStates.shader = nullptr; /*&simpleShader;
simpleShader.setParameter("haveTexture", 1);*/
currentStates.blendMode = sf::BlendMode(sf::BlendMode::Factor::OneMinusDstAlpha, sf::BlendMode::Factor::DstAlpha, sf::BlendMode::Add, sf::BlendMode::Factor::OneMinusDstAlpha, sf::BlendMode::Factor::One, sf::BlendMode::Equation::Add);
frameBuffer.draw(frontBufferSprite, currentStates);

Logically it should display the same result, but with blue sprites for the depth textures, because z and max_z are the same in the two passes here:


const std::string frameBufferGenFragShader =
                R"(#version 130
                uniform sampler2D texture;
                uniform float haveTexture;
                uniform vec3 resolution;
                uniform sampler2D depthBuffer;
                uniform float transparent;
                in mat4 projMat;
                void main () {
                    vec2 position = ( gl_FragCoord.xy / resolution.xy );
                    float max_z = texture2D(depthBuffer, position).z;
                    vec4 texel =  texture2D(texture, gl_TexCoord[0].xy);
                    vec4 colors[2];
                    colors[1] = texel * gl_Color;
                    colors[0] = gl_Color;
                    bool b = (haveTexture > 0.9);
                    vec4 color = colors[int(b)];
                    float z = (gl_FragCoord.w != 1.f) ? (inverse(projMat) * vec4(0, 0, 0, gl_FragCoord.w)).w : gl_FragCoord.z;
                    colors[1] = color;
                    colors[0] = (transparent > 0.9f) ? vec4(0, 0, 0, 0) : vec4(0, 0, 0, 0.1);
                    float epsilon = 0.005;
                    b = (z + epsilon < max_z);
                    gl_FragColor = colors[int(b)];
                })";

I have the next layer in the front buffer, but I don't have the next layer in the depth texture; I get an empty texture instead.

OpenGL is really shit, the shaders are full of bugs.

I won’t deny that there are a lot of bugs in OpenGL implementations, but it is not at all clear that you are encountering any such bugs. It’s difficult to help you in solving your problem because:

1: It’s hard to tell what you are attempting to do. Take this thread. You initially say “I’ve written another shader that discards the closest fragments.” But no fragments are being discarded. You merely select between two colors (see the sketch after these two points). Maybe you’re doing some blending to make the zero color completely transparent and thus not affect the colors behind it. But that fragment is still being rendered, and it will still update the depth buffer.

2: It’s hard to read your OpenGL code because you have so thoroughly abstracted OpenGL with your own API that nobody who is unfamiliar with your code can know what it is doing. What is frontBuffer? What is a frameBufferGenerator? And so forth; only you know what these data types mean.
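
Regarding point 1: if the intent really is to throw those fragments away, the fragment shader can use discard, which also prevents any depth write. A minimal sketch (not your code; z, epsilon, max_z and color stand for the variables already in your shader):


// Hypothetical variant of the final selection: actually discarding the fragment
// instead of writing a transparent colour, so neither the colour buffer nor the
// depth buffer is updated for it.
const std::string discardVariant = R"(
    if (z + epsilon >= max_z)
        discard;
    gl_FragColor = color;
)";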

My overall guess is that, given how all of your problems are tied into the framebuffer yet are also related to stuff you’re sampling from textures, you are probably trying to read from a texture that is attached to the current framebuffer you’re writing to. Which generally speaking yields undefined behavior.
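
If that is what's happening, the usual fix is to ping-pong between two render targets so the texture being sampled is never the one currently being rendered to. A rough sketch with made-up names (depthA, depthB, nbPasses, width and height are assumptions, not your API):


// Hypothetical ping-pong: read the previous pass's depth from one render
// texture while writing the current pass into the other, then swap them.
sf::RenderTexture depthA, depthB;
depthA.create(width, height);   // width/height: assumed target size
depthB.create(width, height);
sf::RenderTexture* readDepth  = &depthA;
sf::RenderTexture* writeDepth = &depthB;
for (unsigned int pass = 0; pass < nbPasses; pass++) {
    frameBufferGenerator.setParameter("depthBuffer", readDepth->getTexture());
    writeDepth->clear(sf::Color::Transparent);
    // ... draw the instances into *writeDepth with the shader bound ...
    writeDepth->display();
    std::swap(readDepth, writeDepth);   // from <utility>; next pass reads what was just written
}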
