Would like some explanation of an existing shader

hi all,
i'm coming back again with Humus' Phong shading shader:

this really threw me at the beginning (it's from the Portal demo):

vec3 lVec = lightPos - gl_Vertex.xyz;
lightVec.x = dot(gl_MultiTexCoord1.xyz, lVec);
lightVec.y = dot(gl_MultiTexCoord2.xyz, lVec);
lightVec.z = dot(gl_MultiTexCoord3.xyz, lVec);

just to understand that it is only the conversion of the light vector from the regular (object-space) coordinate system into tangent space.
From that I suppose TU1 contains the sTangent, TU2 contains the tTangent, and TU3 contains the normal.

What I didn't get from Humus' code and framework is: how did he put that data into the texture coordinates?
What I'd do:
set TU1, 2, 3 as 3D textures, and feed arrays with the s & t tangent and normal vectors.

Is that it, or am I way off?
Thanks in advance for any answers.

What do you mean, "how did he put it in the texture coordinates"? :confused: glTexCoordPointer()? Is that what you mean?

I think you may be under the mistaken assumption that you have to bind a texture to a texture stage in order to access that stage's texture coordinates.

This is not so: you can pass whatever data you like in texture coordinates (or generic attributes) without binding a texture to that stage. Whatever you specify is passed directly into the vertex shader.
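For illustration, the client-side setup can look something like the following sketch (the `tangents`, `binormals` and `normals` arrays are hypothetical per-vertex data; the fragment assumes GL headers and a current context, and note that no glBindTexture call is involved for any of these units):

```c
/* Sketch: feeding tangent-space basis vectors through texcoord arrays.
 * Each array holds 3 floats per vertex. No texture is ever bound to
 * units 1-3; the coordinates simply arrive in the vertex shader as
 * gl_MultiTexCoord1..3. */
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(3, GL_FLOAT, 0, tangents);

glClientActiveTexture(GL_TEXTURE2);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(3, GL_FLOAT, 0, binormals);

glClientActiveTexture(GL_TEXTURE3);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(3, GL_FLOAT, 0, normals);
```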

FYI: a "log" of the Humus portal demo can be viewed here: http://glintercept.nutty.org/Demo03/gliInterceptLog.xml

So these lines from your log:


are sending the tangent and binormal to the shader (0xa12018 and 0xa12024 are the arrays I was talking about when I said "feed arrays with s & t tangent and normal vectors").
And yeah, when I said to set TU1, 2, 3 as 3D textures, I didn't mean binding textures, just sending three-component vectors in arrays =)

To Humus: no, by that I was just asking for confirmation that the arrays sent via texcoords were the binormal and tangent information.

Sorry if I got people confused; I realise now that I wasn't particularly clear =)
Thanks again for your reply, sqrt


Yup. TC0 = texCoord, TC1 = tangent, TC2 = binormal, TC3 = normal.

What are the advantages of sending the data this way (as three vectors) rather than as a single mat3x3?

None. It’s basically the same thing. It’s more a question of taste.

Another question of taste, perhaps. Recently (yesterday) I switched my shader code over to reading both the vertex and fragment shaders from the same file, whereas before I had two files (one for the fragment shader and another for the vertex shader). Everywhere else I've seen GLSL shaders (except Humus'), people split them into two files. But since vertex and fragment shaders are pretty closely coupled, it makes sense to keep them in the same file, doesn't it? Are there any good reasons to split the two shaders up?

I think it makes a lot of sense to use the same file for both. Very convenient for editing. The only time it would be beneficial to use two files would perhaps be if you plan on sharing a shader object between different program objects.

If you put it in a mat3x3, I guess you would be able to use matrix multiplication to transform your vectors to tangent space.

It would still be three dot products under the hood, so there's really no difference.

You don’t think matrix multiplication will become a native instruction?

I'm not sure how this could be coded today.
The GLSL document says the input texture coordinates are not considered to be an array.

I don't see matrix multiplication becoming a native instruction any more than I see, for instance, the pow() function becoming one. Actually, making pow a native instruction would be easier than making matrix multiplication one. Hardware is built around a vec4 paradigm, and making matrix multiplication a single operation would sort of break that paradigm. If more processing capability is added to a pipeline, you would likely rather have it usable for executing several instructions than for adding more complex instructions.
