just to understand that it was only the conversion from the regular-space coordinate system to tangent space.
From that I suppose that TU1 contains the sTangent, TU2 contains the tTangent and TU3 contains the normal.
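(For reference, that tangent-space conversion is just three dot products against the per-vertex basis. A minimal CPU-side sketch, with the sTangent/tTangent/normal names taken from the thread and everything else assumed:)

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Rotate a vector from object space into tangent space: sTangent,
 * tTangent and the normal act as the rows of the rotation matrix. */
static vec3 to_tangent_space(vec3 v, vec3 sTangent, vec3 tTangent, vec3 normal) {
    vec3 r = { dot3(v, sTangent), dot3(v, tTangent), dot3(v, normal) };
    return r;
}
```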
What I didn’t get from Humus’ code and framework is: how did he put it in the texture coordinates?
What I’d do:
set TU1, 2 and 3 as 3D textures, and feed arrays with the s and t tangents and the normal vector.
Is that it? Or am I wrong?
Thanks in advance for any answer.
wizzo
I think you may be under the mistaken assumption that you have to set a texture for the texture stage in order to access the texture coordinates.
This is not so; you can pass whatever data you like in texture coordinates (or generic attributes) without having to bind a texture to that stage. Whatever you specify is passed directly into the vertex shader.
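(On the shader side, the data fed through those texture coordinate sets comes back in through the `gl_MultiTexCoordN` built-ins, no texture bound anywhere. A sketch of such a GLSL 1.x vertex shader as a C string; the variable and uniform names are made up for illustration:)

```c
#include <string.h>
#include <assert.h>

/* Illustrative GLSL 1.x vertex shader: the app sends the tangent,
 * binormal and normal through texture coordinate sets 1-3 (with no
 * textures bound to those units), and the shader reads them back via
 * the gl_MultiTexCoordN built-in attributes. */
static const char *vertex_shader_src =
    "varying vec3 lightVec;                               \n"
    "uniform vec3 lightPos;                               \n"
    "void main() {                                        \n"
    "    gl_Position = ftransform();                      \n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;              \n"
    "    vec3 sTangent = gl_MultiTexCoord1.xyz;           \n"
    "    vec3 tTangent = gl_MultiTexCoord2.xyz;           \n"
    "    vec3 normal   = gl_MultiTexCoord3.xyz;           \n"
    "    vec3 toLight  = lightPos - gl_Vertex.xyz;        \n"
    "    /* rotate the light vector into tangent space */ \n"
    "    lightVec = vec3(dot(toLight, sTangent),          \n"
    "                    dot(toLight, tTangent),          \n"
    "                    dot(toLight, normal));           \n"
    "}                                                    \n";
```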
are sending the tangent and binormal to the shader (0xa12018 and 0xa12024 are the arrays I was talking about when I said “feed arrays with s & t tangent and normal vector”).
And yeah, when I said “set TU1, 2, 3 as 3D textures”, I didn’t mean binding textures, just sending 3-component vectors in arrays =)
To Humus: no, by that I was just asking for confirmation that the arrays sent via texCoords were the binormal and tangent information.
Sorry if I caused people to misunderstand; I realise now that I wasn’t particularly clear =)
Thanks again for your reply, sqrt.
Another question, of taste perhaps: recently (yesterday) I switched my shader code over to reading both the vertex and fragment shaders from the same file, whereas before I had two files (one for the fragment shader and another for the vertex shader). Now, everywhere else I’ve seen GLSL shaders (except Humus), people split them up into two files, but since vertex and fragment shaders are pretty closely coupled, it makes sense to keep them in the same file, doesn’t it? Are there any good reasons to split up the two shaders?
I think it makes a lot of sense to use the same file for both. Very convenient for editing. The only time it would be beneficial to use two files would perhaps be if you plan on sharing a shader object between different program objects.
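(A minimal sketch of the single-file approach: cut the file at marker lines and hand each piece to glShaderSource. The “[vertex]”/“[fragment]” markers are invented for illustration; any convention works as long as the loader and the files agree. Note this naive version cuts at the next ‘[’, so a real loader should match markers at line starts, since GLSL itself uses ‘[’ for array indexing.)

```c
#include <string.h>
#include <stddef.h>
#include <assert.h>

/* Copies the text between `marker` and the next '['-prefixed marker
 * (or end of file) into `out` (capacity `cap`), NUL-terminated.
 * Returns 0 on success, -1 if the marker is missing or out is small. */
static int extract_section(const char *src, const char *marker,
                           char *out, size_t cap) {
    const char *start = strstr(src, marker);
    if (!start) return -1;
    start += strlen(marker);
    const char *end = strchr(start, '[');   /* naive: next marker */
    size_t len = end ? (size_t)(end - start) : strlen(start);
    if (len + 1 > cap) return -1;
    memcpy(out, start, len);
    out[len] = '\0';
    return 0;
}
```

Each extracted section can then be fed to the usual glShaderSource/glCompileShader path for its shader type.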
I don’t see matrix multiplication becoming a native instruction any more than I see, for instance, the pow() function becoming a native instruction. Actually, making pow a native instruction would be easier than making matrix multiplication one. Hardware is built around a vec4 paradigm, and turning matrix multiplication into a single operation would sort of break that paradigm. If more processing capability is added to a pipeline, you would likely rather want it usable for processing several instructions than for adding more complex ones.
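(To illustrate how a mat4 * vec4 product decomposes onto vec4 hardware: it is one column-scaled multiply-add per matrix column, i.e. four vec4 instructions rather than a single native op. A plain-C sketch, column-major like GL:)

```c
#include <assert.h>

/* mat4 * vec4 on a vec4-oriented machine: each matrix column is
 * scaled by one component of v and accumulated, so the whole product
 * is four vec4 multiply-adds. `m` is column-major, as in OpenGL. */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4]) {
    for (int i = 0; i < 4; ++i)
        out[i] = 0.0f;
    for (int col = 0; col < 4; ++col)      /* one MAD per column */
        for (int i = 0; i < 4; ++i)
            out[i] += m[col * 4 + i] * v[col];
}
```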