I’m new to OpenGL and graphics programming in general. My linear algebra is ok, a little rusty maybe but I’m brushing up.
One thing that’s confusing me at present, probably because I’m new to this, is the talk I hear about image and texture “space”.
Some of the threads on these boards and the OpenGL usenet group have dealt with shadow creation, for instance, and how you can store depth information in a texture for later use. There’s talk aplenty of converting the texture’s “image space” (which I assume is synonymous with “texture space”) to world space, or camera space, or screen space…and it’s spacing me out. How do these conversions work?
I understand the basic 3D pipeline transformation (still a bit hazy but I grasp the essential world->camera->screen flowchart), but how can you extract useful 3D information from a texture, and where does “image space” come into it? Are there any lessons out there about how storing depth or other information in a texture (then later extracting it) works, because I can’t see it…simple ignorance on my part. Thanks for nudging me in the right direction.
The talk about the different “spaces” is definitely an advanced topic. It is heavily used in hardware shaders when not using the standard texture & lighting pipeline of OpenGL.
World space is the space where you place your polygons etc.
View space is a representation of world space as seen through the camera.
Screen space is the 2D equivalent of what you see on your screen.
“Texture space” is used differently by different people and is not related to the above.
These spaces are simple concepts, not advanced topics.
Object space is the space that raw vertices exist in before transformation; it just means coordinates local to the object. Eye space means vertices transformed through the MODELVIEW matrix into a space local to the eye. Then you have screen space, which is the space coordinates exist in after transformation through the PROJECTION matrix (there’s an extra viewport transform in there, but that’s a detail you probably don’t need to worry about).
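To make that concrete, here’s a minimal sketch in plain Python (the matrix layout and helper names are my own, not GL calls) walking one vertex through object space → eye space → clip space → normalized device coordinates:

```python
# Sketch of the space chain; row-major 4x4 matrices as lists of rows.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Object space: a vertex local to the model.
obj = [1.0, 0.0, 0.0, 1.0]

# MODELVIEW: here just a translation pushing the object 5 units down -Z,
# i.e. in front of the eye (the eye looks down -Z in OpenGL).
modelview = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, -5],
    [0, 0, 0, 1],
]
eye = mat_vec(modelview, obj)          # eye space: [1, 0, -5, 1]

# PROJECTION: a symmetric frustum like glFrustum(-1, 1, -1, 1, 1, 10).
n, f = 1.0, 10.0
projection = [
    [n, 0, 0, 0],
    [0, n, 0, 0],
    [0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)],
    [0, 0, -1, 0],
]
clip = mat_vec(projection, eye)        # clip space, w = -eye.z = 5
ndc = [c / clip[3] for c in clip[:3]]  # perspective divide -> [-1,1] cube
# The viewport transform then maps ndc x,y to pixel coordinates.
```

The perspective divide at the end is what the hardware does for you; it is the step that turns the 4D clip-space coordinate into the 2D screen position (plus depth).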
Texture space would be the space defined by the derivatives of texture coordinates on individual triangles, i.e. the directions of s & t on a triangle define texture space. Looking at a 3D triangle this becomes a 3D plane, but in the texture image it’s just a 2D surface in what you might call image space. The third vector completing this 3D frame is the surface normal; texture s & t plus the normal make up a coordinate frame commonly referred to as tangent space. This is often calculated and averaged at the vertices, and it really exists independently of texture space because it is interpolated over triangles to produce smooth changes in derivatives.
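As a sketch of what that means for one triangle, here’s the standard per-triangle tangent-frame computation in plain Python (the helper names are mine; it assumes the texture coordinates are non-degenerate):

```python
# Derive the s/t derivative directions (tangent & bitangent) plus the
# normal for a single triangle from its positions and texture coords.

def sub(a, b):    return [a[i] - b[i] for i in range(3)]
def scale(v, s):  return [x * s for x in v]
def cross(a, b):  return [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]

def tangent_frame(p0, p1, p2, uv0, uv1, uv2):
    e1, e2 = sub(p1, p0), sub(p2, p0)             # position edges
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]   # texcoord deltas
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)             # assumes non-degenerate UVs
    tangent   = scale(sub(scale(e1, dv2), scale(e2, dv1)), r)  # direction of s
    bitangent = scale(sub(scale(e2, du1), scale(e1, du2)), r)  # direction of t
    normal    = cross(e1, e2)                     # completes the frame
    return tangent, bitangent, normal

# A unit right triangle in the XY plane, texture mapped 1:1:
t, b, n = tangent_frame([0,0,0], [1,0,0], [0,1,0], (0,0), (1,0), (0,1))
# t = [1,0,0], b = [0,1,0], n = [0,0,1] -- the frame aligns with the axes.
```

In practice you would run this per triangle, then accumulate and normalize at shared vertices to get the averaged per-vertex frames mentioned above.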
Image space can mean multiple things and it would depend on the context of the discussion.
A transformation from one space to some other space is typically done using matrix multiplications and again depends on the discussion.
You can invent new spaces; for example, a commonly used concept is world space. World space is the implicit space between the model and view transformations. It only exists conceptually and has no explicit representation in fixed function OpenGL; it is implied by the viewing matrix being built with negative transforms on the MODELVIEW matrix. When you have only the viewing matrix on the MODELVIEW matrix, object space is world space; but once you move objects around in the world by multiplying model transformations with the viewing transformation on the MODELVIEW matrix, object space no longer matches world space.
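A tiny numeric illustration of that point (my own matrix helpers, not GL calls): MODELVIEW = View × Model, so the world-space position never appears explicitly in the pipeline, but you can recover it from the model matrix alone.

```python
# World space as the implicit space between model and view transforms.

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translate(x, y, z):
    return [[1,0,0,x],[0,1,0,y],[0,0,1,z],[0,0,0,1]]

model = translate(3, 0, 0)     # place the object at x = 3 in the world
view  = translate(0, 0, -10)   # viewing matrix: negative camera transform

obj   = [0.0, 0.0, 0.0, 1.0]
world = mat_vec(model, obj)                  # [3, 0, 0, 1]   -- world space
eye   = mat_vec(mat_mul(view, model), obj)   # [3, 0, -10, 1] -- eye space
# With model = identity, object space IS world space, as described above.
```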
You can invent even stranger spaces. For example I might have a slide projector in a 3D scene projecting light from a slide into the world. I could call the projection of this slide the “slide projector” space and define it as the space local to the position and orientation of the projector. I could then extend this concept and talk about the image space of the projector being defined by the projection matrix of the slide multiplied onto the slide projector space.
How you get to that space for texture coordinates depends on where you start, but texgen in eye space, followed by an inverse viewing transformation to world space, followed by the projector’s translation and orientation, followed by a projection matrix (glFrustum), all on the texture matrix, could easily do it for you and generate the projected texture from the slide into the scene. You would have loaded a transformation from eye space to “slide projector” space onto the texture matrix and multiplied it by a projection matrix representing the transformation to “projector image space”. You would also want to add a scale and translate after the projection to get you to 0 to 1 texture coordinates instead of -1 to +1; this is the projector’s image-space analog of the viewport transform in the OpenGL fixed function pipeline.
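That chain can be sketched numerically like this (the helper names and the specific positions are made up for illustration; in the real fixed function pipeline this matrix product would be loaded onto the texture matrix and fed eye-space coordinates by EYE_LINEAR texgen):

```python
# Eye space -> world -> projector space -> projection -> 0..1 bias.

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translate(x, y, z):
    return [[1,0,0,x],[0,1,0,y],[0,0,1,z],[0,0,0,1]]

def frustum(n, f):  # symmetric glFrustum(-1, 1, -1, 1, n, f)-style matrix
    return [[n, 0, 0, 0],
            [0, n, 0, 0],
            [0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)],
            [0, 0, -1, 0]]

bias = [[0.5, 0,   0,   0.5],   # maps the -1..1 projection cube to 0..1;
        [0,   0.5, 0,   0.5],   # the projector's analog of the viewport
        [0,   0,   0.5, 0.5],   # transform.
        [0,   0,   0,   1]]

view           = translate(0, 0, -10)  # camera at world (0, 0, 10)
inv_view       = translate(0, 0, 10)   # inverse of a pure translation
projector_view = translate(0, 0, -2)   # projector at (0, 0, 2), facing -Z

# The whole chain, multiplied right-to-left exactly as described above:
tex_matrix = mat_mul(bias,
             mat_mul(frustum(1.0, 10.0),
             mat_mul(projector_view, inv_view)))

eye_coord = mat_vec(view, [0.5, 0.0, -3.0, 1.0])  # EYE_LINEAR texgen output
s, t, _, q = mat_vec(tex_matrix, eye_coord)
s, t = s / q, t / q     # projective divide; s, t = (0.55, 0.5)
```

A point slightly to the right of the projector’s axis lands slightly right of the slide’s center (s just above 0.5), which is exactly the projected-texture behavior the slide projector example describes.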
It’s actually very easy once you grasp the basic concepts, the second part is a bit tricky if you’re not familiar with the first but if it’s too tough put it on the shelf and revisit it after you get the basics down.