I wondered a while ago about “world” space normals. Everyone told me that “object” space normals could be interesting, but that world space did not offer any improvement.
At this point, all my scenes are lit by a diffuse cube map. This cube map is, of course, defined in world space. To correctly light a bumped mesh with this diffuse map (the same applies to reflections - EMBM), I have to express the cube map and the normal in the same coordinate space.
That space could be world space, but then I would have to compute all normals, tangents and binormals in world space in the vertex program. Or I could leave them in object space, and then, in the fragment program, sample the normal from the bump map, transform it into object space, and then into world space.
Having object space normals would help a lot: after being fetched from the map, the normal just needs to be transformed to world space. No more tangent buffer or binormal computation, which can easily make the transfer stage the bottleneck.
World space normals would further save three operations per pixel for all the static geometry. This becomes considerable when ALL your static geometry is bumped.
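To make the trade-off concrete, here is a rough sketch in plain Python standing in for the fragment-program arithmetic (the helper names are mine, not from any API), showing what each normal-map encoding costs per pixel before the cube-map lookup:

```python
def mat3_mul_vec3(m, v):
    """Multiply a row-major 3x3 matrix by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Tangent-space normal map: two rotations per pixel, first through the
# interpolated TBN basis (tangent -> object), then through the model
# rotation (object -> world).
def tangent_normal_to_world(n_tangent, tbn, model_rot):
    n_object = mat3_mul_vec3(tbn, n_tangent)
    return mat3_mul_vec3(model_rot, n_object)

# World-space normal map: the fetched texel already is the lookup direction.
def world_normal_to_world(n_fetched):
    return n_fetched
```

With identity TBN and model matrices both routes agree; the world-space map simply skips both multiplies, which is where the per-pixel saving comes from.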
Any thoughts?
For each object, there is no computational difference between world and object space. In fact, world space is usually slower.
For the world space representation, you put the object into world space, where you have your other parameters, do the math, and then take the world space result into eye space. I.e., object -> MODEL -> processing -> VIEW.
For the object space representation, you put the world-space parameters into object space by applying the inverse of your MODEL transform to them. Then you process in object space, and take the results directly to eye space. I.e., object -> processing -> MODELVIEW. That is one less transform.
Note that transforming the parameters into object space (i.e., rotating your cube map using the texture matrix) happens once per OBJECT, whereas transforming object space to world space would require doing it for each VERTEX, which is substantially more expensive.
Some implementations may be able to go straight from object space to projected space by multiplying the object output by the composite MODELVIEWPROJECTION matrix, btw.
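jwatte's counting argument can be checked numerically: for a pure rotation R, dot(R·n, L) equals dot(n, Rᵀ·L), so rotating the world-space parameter into object space once per object gives the same shading result as rotating every normal out to world space. A small sketch in plain Python (helper names are mine):

```python
def mat3_mul_vec3(m, v):
    """Multiply a row-major 3x3 matrix by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(m):
    """Transpose; for a pure rotation this is also the inverse."""
    return tuple(zip(*m))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

model_rot = ((0, -1, 0), (1, 0, 0), (0, 0, 1))  # 90 degrees about Z
light_world = (1.0, 0.0, 0.0)                   # world-space parameter
normal_object = (0.0, 1.0, 0.0)                 # object-space normal

# Route A: object -> world, done once PER VERTEX
route_a = dot(mat3_mul_vec3(model_rot, normal_object), light_world)

# Route B: world -> object, done once PER OBJECT
light_object = mat3_mul_vec3(transpose(model_rot), light_world)
route_b = dot(normal_object, light_object)

assert route_a == route_b  # same result, far fewer transforms
```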
jwatte, I think you missed a point: I’m doing dependent texture reads.
In the fragment program, I have to access a cube map, and the right coordinates to access it are those of the world space normal, since the texels within the texture are in world space.
I will never render to a texture for each object, so I’ll stick to a world space cube map. That means my normal must, one way or another, be in world space within the fragment program.
I use bump mapping, so I get the normal from a texture. The texels could be expressed in tangent space, in object space, or in world space. I need to transform that normal into world space.
If it is in tangent space, I need a tangent -> world matrix transform. The tangent space is vertex-dependent, so I need to compute it per vertex anyway. It can be done by transforming the normal, tangent and binormal in the vertex program and sending them down to the fragment program in unused texture coordinate channels [method #1]. Or it can be done partially in the vertex program, transforming T, B and N from tangent to object space, then in the fragment program transforming the fetched bumped normal from tangent to object space using T, B and N, and then using the world transform of the current object to get it into world space [method #2].
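Method #1 boils down to one matrix multiply per pixel, where the matrix columns are the interpolated world-space T, B and N. A minimal sketch in plain Python (the function name is mine):

```python
def tangent_to_world(n_tangent, t_world, b_world, n_world):
    """Tangent -> world: stack the world-space T, B, N vectors as the
    columns of the basis matrix and apply it to the fetched normal."""
    x, y, z = n_tangent
    return tuple(x * t + y * b + z * n
                 for t, b, n in zip(t_world, b_world, n_world))
```

With an axis-aligned basis, the flat bump-map normal (0, 0, 1) maps straight onto N, as expected; in the real fragment program T, B and N arrive through the spare texture coordinate channels mentioned above.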
If the bump map texture is in object space, I have to transform the fetched normal from object space to world space, and this can only be done in the fragment program [method #3].
If the bump map texture is already in world space, I save a matrix multiplication, but that only works for static geometry [method #4].
Methods 2 and 3 involve a matrix transform in the fragment program. As my application is fragment-limited (at least I intend it to be), I must save as many operations as possible in that stage. So method 1 (the one I’m using at this point) and method 4 are good candidates for whole-world bump mapping.
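My reading of the four methods, tallied as the number of extra world-transform multiplies in the fragment program (beyond the tangent-basis decode that bump mapping needs anyway); the labels and counts are my summary of the post above, not authoritative numbers:

```python
# Extra per-pixel world transforms each method adds in the fragment program.
EXTRA_WORLD_TRANSFORMS = {
    "method 1": 0,  # T, B, N already output in world space by the vertex program
    "method 2": 1,  # tangent -> object via TBN, then object -> world
    "method 3": 1,  # object-space texel, object -> world per pixel
    "method 4": 0,  # world-space texel, no transform (static geometry only)
}

# The fragment-limited candidates are the ones adding no extra transform.
candidates = sorted(m for m, n in EXTRA_WORLD_TRANSFORMS.items() if n == 0)
```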
Hope this is clear,