I’ve been scanning the specifications for both OpenGL and GLSL, and I can’t find a definitive answer as to how exactly `gl_FragCoord`’s `x` and `y` components are computed, other than that they are the “fragment’s window-space position” (OpenGL 4.6 specification, page 491). That does not help me understand why `glViewport()` doesn’t seem to affect the values of `gl_FragCoord` in the fragment stage, even though the viewport transform does determine window-space coordinates in the vertex stage.
The actual problem I’m trying to solve is rendering a number of draws (whose shading depends on `gl_FragCoord`) to an atlas, without modifying the draws’ shaders to make them aware of the atlas. Here’s pseudocode for the driving part (all viewports have identical dimensions, just different offsets):
```cpp
for (size_t i = 0; i < FrameCount; ++i)
{
    auto& Vp = Viewports[i];
    // glViewport() takes (x, y, width, height), not two corner points.
    glViewport(Vp.X, Vp.Y, Vp.Width, Vp.Height);
    BindConstants(Vp.Width, Vp.Height);
    glDrawArrays(GL_TRIANGLES, 0, Triangles);
}
```
`BindConstants()` simply binds a uniform holding the viewport’s width and height. When I try to simply output the UVs with the following shader:
```glsl
#version 330

uniform vec2 Resolution;

// gl_FragColor is not available in GLSL 330 core, so declare an output.
out vec4 FragColor;

void main(void)
{
    vec2 UV = gl_FragCoord.xy / Resolution;
    FragColor = vec4(UV, 0.0, 1.0);
}
```
I get the following image, which clearly shows that colour is being clipped to 1.0 beyond the first viewport in a 5x5 atlas:
I’ve also done additional simple tests which show that `gl_FragCoord` is measured from the framebuffer’s origin, rather than from the origin specified via `glViewport()`.
I have worked around this issue by rendering to a temporary FBO and blitting the result into the target framebuffer at the desired offset, but it seems silly to go to such lengths for a dead-simple use case like this.
Am I missing something? Is it possible at all without a temporary FBO, and, above all, without modifying the shaders?