I am trying to implement skybox rendering in a software renderer. I am reading the OpenGL 4.6 Core Profile specification and have a question about the explanation provided there. Here is the section in question:
8.13 Cube Map Texture Selection
When a cube map texture is sampled, the (s, t, r) texture coordinates are treated as a direction vector (rx, ry, rz) emanating from the center of a cube. The q coordinate is ignored. At texture application time, the interpolated per-fragment direction vector selects one of the cube map face’s two-dimensional images based on the largest magnitude coordinate direction (the major axis direction). The target column in table 8.19 explains how the major axis direction maps to the two-dimensional image of a particular cube map target.
Using the s_c, t_c, and m_a determined by the major axis direction as specified in table 8.19, an updated (s, t) is calculated as follows:
s = \frac{1}{2}\left(\frac{s_c}{|m_a|} + 1\right) \\
t = \frac{1}{2}\left(\frac{t_c}{|m_a|} + 1\right)
Note: I haven't copied table 8.19 into this post, but it is what determines s_c, t_c, and m_a.
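To make the formula concrete, here is a worked example for the direction (0.3, -0.8, 0.5). The y component has the largest magnitude and is negative, so the -y face is selected; its row of table 8.19 gives s_c = r_x, t_c = -r_z, m_a = r_y:

s = \frac{1}{2}\left(\frac{0.3}{0.8} + 1\right) = 0.6875 \\
t = \frac{1}{2}\left(\frac{-0.5}{0.8} + 1\right) = 0.1875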
Question #1: Am I understanding correctly that I should use the interpolated position vector (world space) of the cube faces in the calculation for s and t?
Question #2: The spec says
The q coordinate is ignored.
But there isn't any other mention of the q coordinate in that section. What is this about?
If you’re rendering a skybox, one approach is to draw a cube whose vertices have 3D texture coordinates which are just the object-space positional coordinates: <±1,±1,±1>. Another approach is to render a full-screen quad whose corners have texture coordinates which are obtained by transforming the vertices of the view frustum by the inverse of the model-view-projection matrix.
The latter approach is slightly more complex but may be more efficient and avoids potential issues with the skybox being clipped by the far plane.
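As a minimal sketch of the first approach (the Vec3, Vertex, and skybox_vertex names are hypothetical, purely for illustration): each vertex of the ±1 cube simply reuses its object-space position as its 3D texture coordinate.

```c
/* Skybox cube vertex setup: the 3D texture coordinate is simply the
 * object-space position, so the interpolated (s,t,r) at each fragment
 * is a direction from the cube's centre. Vec3/Vertex are hypothetical. */
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 position; Vec3 texcoord; } Vertex;

static Vertex skybox_vertex(float x, float y, float z)
{
    Vertex v;
    v.position = (Vec3){ x, y, z }; /* a corner of the <±1,±1,±1> cube */
    v.texcoord = v.position;        /* 3D texcoord == position */
    return v;
}
```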
When it comes to implementing a texture map lookup for a cube map, you just use the s,t,r coordinates without caring how they originated.
Regarding the q coordinate: it's simply ignored, which is why the section doesn't mention it again.
Texture lookups for other types of texture map (1D, 2D, 3D) divide the s,t,r coordinates by the q coordinate (analogous to dividing x,y,z by w for positional coordinates). But this isn’t necessary for a cube map because the calculation of the 2D coordinates within a face divides by the major axis coordinate, so division by q would be redundant: (s/q)/(r/q)=s/r and (t/q)/(r/q)=t/r.
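For reference, here is a minimal C sketch of that per-fragment face selection, following the rows of table 8.19 (the CubeCoord type and function name are my own; how you fetch texels from the selected face is up to your renderer):

```c
#include <math.h>

/* Face indices in the conventional cube-map order:
 * 0:+x  1:-x  2:+y  3:-y  4:+z  5:-z */
typedef struct { int face; float s, t; } CubeCoord;

/* Select a cube map face and normalised 2D coordinates from a direction
 * (rx, ry, rz), per section 8.13 and table 8.19 of the GL 4.6 spec.
 * Here ma already holds |m_a|, the magnitude of the major axis. */
CubeCoord cube_map_select(float rx, float ry, float rz)
{
    float ax = fabsf(rx), ay = fabsf(ry), az = fabsf(rz);
    float sc, tc, ma;
    int face;

    if (ax >= ay && ax >= az) {      /* major axis is +/-x */
        ma = ax;
        if (rx > 0) { face = 0; sc = -rz; tc = -ry; }
        else        { face = 1; sc =  rz; tc = -ry; }
    } else if (ay >= az) {           /* major axis is +/-y */
        ma = ay;
        if (ry > 0) { face = 2; sc =  rx; tc =  rz; }
        else        { face = 3; sc =  rx; tc = -rz; }
    } else {                         /* major axis is +/-z */
        ma = az;
        if (rz > 0) { face = 4; sc =  rx; tc = -ry; }
        else        { face = 5; sc = -rx; tc = -ry; }
    }

    CubeCoord out;
    out.face = face;
    out.s = 0.5f * (sc / ma + 1.0f); /* s = (s_c/|m_a| + 1) / 2 */
    out.t = 0.5f * (tc / ma + 1.0f); /* t = (t_c/|m_a| + 1) / 2 */
    return out;
}
```

Note that the input direction doesn't need to be normalised: the division by |m_a| makes the result independent of the vector's length.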
When it comes to implementing a texture map lookup for a cube map, you just use the s,t,r coordinates without caring how they originated.
I'm not sure I get this. Since I'm trying to reimplement the spec in my software renderer, I do care how it works under the hood. Why do you think I shouldn't?
If you’re rendering a skybox, one approach is to draw a cube whose vertices have 3D texture coordinates which are just the object-space positional coordinates: <±1,±1,±1>. Another approach is to render a full-screen quad whose corners have texture coordinates which are obtained by transforming the vertices of the view frustum by the inverse of the model-view-projection matrix.
If I understand this correctly, it refers to how I would go about doing the skybox using OpenGL. Am I correct?
Texture lookups for other types of texture map (1D, 2D, 3D) divide the s,t,r coordinates by the q coordinate (analogous to dividing x,y,z by w for positional coordinates). But this isn’t necessary for a cube map because the calculation of the 2D coordinates within a face divides by the major axis coordinate, so division by q would be redundant: (s/q)/(r/q)=s/r and (t/q)/(r/q)=t/r.
Ahh, that makes sense, nothing to worry about then. ty!
How they originate is part of the rasterisation process, and doesn’t depend upon what types of texture maps are being used. By the time you get to texture lookup, you already have the texture coordinates.
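In a software renderer, that origination step might look something like the following sketch of perspective-correct interpolation of a 3D texture coordinate at one fragment, assuming your triangle setup already provides barycentric weights and each vertex's clip-space w (TexCoord3 and the function name are hypothetical):

```c
typedef struct { float s, t, r; } TexCoord3;

/* Perspective-correct interpolation of a per-vertex 3D texture
 * coordinate: interpolate attribute/w and 1/w linearly in screen
 * space, then divide. b0+b1+b2 == 1 are barycentric weights.
 * Nothing here depends on what kind of texture will be sampled. */
TexCoord3 interpolate_texcoord(const TexCoord3 tc[3], const float w[3],
                               float b0, float b1, float b2)
{
    float k0 = b0 / w[0], k1 = b1 / w[1], k2 = b2 / w[2];
    float inv_w = k0 + k1 + k2;
    TexCoord3 out;
    out.s = (k0 * tc[0].s + k1 * tc[1].s + k2 * tc[2].s) / inv_w;
    out.t = (k0 * tc[0].t + k1 * tc[1].t + k2 * tc[2].t) / inv_w;
    out.r = (k0 * tc[0].r + k1 * tc[1].r + k2 * tc[2].r) / inv_w;
    return out;
}
```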
For a software renderer, it’s more efficient to render a skybox as a cube using six distinct 2D textures (rather than a cube map). Using a cube map implies that you’re selecting the face (according to which of the three components has the largest magnitude) on a per-fragment basis.
Thank you for spending time on this; it is very helpful, as it allows me to pinpoint what I want more precisely!
By the time you get to texture lookup, you already have the texture coordinates.
This sentence makes me think you believe I am using, or want to use, OpenGL to achieve what I asked about. That is not the case: I am purely trying to re-implement the spec in my own code. I have full control over the rasterization code and everything else.
I am not trying to use OpenGL; I am trying to understand how implementers of the spec are doing it so I can replicate it myself in my C code.
For a software renderer, it’s more efficient to render a skybox as a cube using six distinct 2D textures (rather than a cube map). Using a cube map implies that you’re selecting the face (according to which of the three components has the largest magnitude) on a per-fragment basis.
This is exactly what I am trying to do; perhaps the language I used was confusing. Using the regular texture coordinates of a cube to sample the texture corresponding to each major axis does not produce the expected result.
Just to go back to the original ask, are the (s, t, r) coords passed to the sampler just the interpolated position vectors of the unit cube?
I think I made some progress on this. Interpolating the position and using it as the input to the texture coordinate calculation provided in the spec does seem to be the right way. Yesterday it didn't look like it because I had a bug in my code, but I think this question can be closed now.
Which spec? The OpenGL spec (at least for version 2 and later) pretty much requires that those stages are separate, because of shaders.
They’re the interpolated texture coordinates. For the specific case of drawing a skybox, it’s fairly typical for the texture coordinates to be equal to the object-space positional coordinates. But in general, texture coordinates and positional coordinates are entirely different things.
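Putting the pieces together for the skybox case, a fragment might be shaded like this (reusing the hypothetical cube_map_select and TexCoord3 sketches from earlier in the thread; Color and sample_face are assumed, the latter being a plain 2D fetch into one of the six face images):

```c
typedef struct { float r, g, b; } Color;              /* hypothetical */
extern Color sample_face(int face, float s, float t); /* assumed 2D fetch */

/* The interpolated texture coordinate (here equal to the interpolated
 * object-space position of the ±1 cube) is used directly as the
 * direction; no normalisation is required. */
Color shade_skybox_fragment(TexCoord3 dir)
{
    CubeCoord cc = cube_map_select(dir.s, dir.t, dir.r);
    return sample_face(cc.face, cc.s, cc.t);
}
```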