Environment map: a set of parallel lines

Hi all!

Currently, I’m working on a prototype that simulates the reflection of a black-and-white striped plane in an arbitrary 3D model. The scene should look as if the striped plane is parallel to the view plane and does not move as the scene is rotated or panned. I know this can be achieved with the help of environment mapping.

I’ve been trying various texture coordinate generation modes (GL_SPHERE_MAP, GL_OBJECT_LINEAR and GL_EYE_LINEAR, the latter two with different plane parameters). But the results are far from what I expected: with the sphere map, the lines “reflected” from the model are distorted; with eye- or object-linear mapping, …well, it’s a complete mess.
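
For reference, this is roughly how I set things up (a simplified sketch - the real calls are scattered through our rendering library, and stripeTex is just a placeholder name):

    /* Sphere-map attempt */
    glBindTexture(GL_TEXTURE_2D, stripeTex);   /* the 64x64 striped texture */
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_2D);
    /* ...draw the model with per-vertex normals... */

    /* Eye-linear attempt (one of the plane settings I tried) */
    static const GLfloat sPlane[] = { 1.0f, 0.0f, 0.0f, 0.0f };
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, sPlane);
    glEnable(GL_TEXTURE_GEN_S);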

Could anybody please help with:
a) choosing the right texture coord generation mode for my specific case;

b) if it’s eye or object linear mode, choosing the right plane parameters.

The texture I’m using is simply a 64x64 square with evenly spaced black and white parallel stripes. Perhaps it shouldn’t be like this at all - does anybody know?

Thanks in advance!
Sergey

You could build this into a cubemap or spheremap, but that loses frequency information in places.

SGI invented a novel solution for automotive industry applications called cylinder mapping, where a 1D texture is mapped onto an object as if it were infinitely extruded. This requires a mode of texgen that is not in the basic fixed function OpenGL pipeline.

Look for “cylinder” in this document; it describes the high-level software interface in the Optimizer API.

http://techpubs.sgi.com/library/tpl/cgi-…_html/ch08.html

As you can see from this document, even SGI’s microcode didn’t support this, so they computed the coordinates in software. These days you could do it in hardware with a vertex shader, or even a fragment shader (for very slightly better results).
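
Conceptually the per-vertex math is tiny. Something like this (a rough sketch under my own assumptions - tubes running along the eye-space Y axis, position and unit normal already in eye space - not code from the Optimizer implementation):

    #include <math.h>

    /* 1D cylinder-map coordinate for one vertex, computed on the CPU
       the way the SGI software path did it in spirit. */
    float cylinder_texcoord(const float eyePos[3], const float eyeNormal[3])
    {
        /* unit vector from the eye (the eye-space origin) to the vertex */
        float len = sqrtf(eyePos[0]*eyePos[0] + eyePos[1]*eyePos[1] + eyePos[2]*eyePos[2]);
        float ux = eyePos[0]/len, uy = eyePos[1]/len, uz = eyePos[2]/len;

        /* reflect it about the normal: r = u - 2(u.n)n */
        float d  = ux*eyeNormal[0] + uy*eyeNormal[1] + uz*eyeNormal[2];
        float rx = ux - 2.0f*d*eyeNormal[0];
        float rz = uz - 2.0f*d*eyeNormal[2];

        /* the environment is invariant along the tube axis (Y here), so
           only the reflection's angle around that axis matters */
        return atan2f(rx, -rz) / (2.0f * 3.14159265f) + 0.5f;
    }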

It’s the highest quality possible thanks to the texture frequency (MIP LOD) being determined by a single axis.

One simple alternative would be to use a really high-resolution cubemap with the light tubes rendered to the cubemap faces, and to enable the highest degree of anisotropic filtering possible, but there are obviously limits to what’s possible versus a very large 1D texture.
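
(If you go that route, the anisotropy part is just the usual extension setup - something like the following, assuming EXT_texture_filter_anisotropic is present; check the extension string first.)

    /* Crank anisotropic filtering to the maximum the driver supports */
    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
    glTexParameterf(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);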

You definitely can’t just use a spheremap and get away with it. For one thing, it’s locked in eye space, so you’d need to change the sphere image every time the eye moved relative to the tubes, and there are spheremap artifacts and serious frequency limits as well. You also have to know how to generate the spheremap if you go this route; even if the envmap is fixed in eye space, you still need to do it once. I generate a spheremap here:

http://www.sgi.com/products/software/performer/brew/envmap.html

You could do something similar, but like I said, it’s not the highest-quality way to do this.

Hi dorbie!
Thanks for a nearly comprehensive answer! I guess I’d really have to use shaders to cope with this problem if I wanted to achieve maximum scene quality.

However, as I have mentioned, I just need to create a prototype application (or maybe “demo” sounds better) quickly, so the sphere map seems to be the right choice for me, doesn’t it?

To get the most from the sphere map, I suspect I should distort the texture image before using it…
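
Something like this is what I have in mind for the “distortion” - filling the sphere-map image procedurally instead of painting straight stripes (my own untested sketch; it assumes the stripe colour depends only on the reflection’s angle around the vertical axis):

    #include <math.h>

    /* Fill a size x size greyscale sphere map with the reflection of a
       striped environment.  Each texel inside the unit disk is a point
       on a mirrored sphere seen from straight ahead. */
    void make_stripe_spheremap(unsigned char *img, int size, int stripes)
    {
        for (int j = 0; j < size; ++j)
            for (int i = 0; i < size; ++i) {
                float x = 2.0f * (i + 0.5f) / size - 1.0f;
                float y = 2.0f * (j + 0.5f) / size - 1.0f;
                float r2 = x*x + y*y;
                unsigned char c = 128;                 /* outside the disk */
                if (r2 <= 1.0f) {
                    float nz = sqrtf(1.0f - r2);       /* sphere normal z  */
                    float rx = 2.0f * x * nz;          /* reflect (0,0,-1) */
                    float rz = 2.0f * nz * nz - 1.0f;
                    float a  = atan2f(rx, rz);         /* angle around Y   */
                    c = ((int)floorf(a * stripes / 3.14159265f) & 1) ? 255 : 0;
                }
                img[j*size + i] = c;
            }
    }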

And you’re also right about the environment texture being locked in eye space; additionally, the texture has a periodic structure. Therefore I don’t have to build a complex map like the one in your example. Am I right?

Best regards,
Sergey

You need to get the map from somewhere, so you need to generate it at least once.

The simplest way, if you can’t handle the math (I think I describe it on my example page, though), is to just draw the cubemap faces. Each face will have the lines drawn at a different orientation, but roughly you’ll end up with four faces of stripes and two of radial lines, and they’ll match up. You don’t have to try to make this happen; it just will, if you draw parallel stripes in 3D with the six face orientations.

It can be tricky to get the face orientations right, but it’s a prerequisite for my spheremap approach too, so you don’t really have a choice if you want to be accurate. Spheremaps have wrap-back problems at the silhouette, so cubemaps are technically more correct (although they might have edge-filtering issues), and I think they also lose less frequency in places.
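
For what it’s worth, the conventional face orientations look something like this (a sketch, not production code - double-check against your GL’s cube-map convention, and draw_parallel_stripes() is a placeholder for whatever draws the long striped quads):

    /* Render the same 3D stripe scene once per cube face:
       90-degree FOV, square viewport, eye at the origin. */
    void render_cube_faces(void)
    {
        static const struct { GLdouble dir[3], up[3]; } face[6] = {
            { {  1, 0, 0 }, { 0, -1,  0 } },   /* +X */
            { { -1, 0, 0 }, { 0, -1,  0 } },   /* -X */
            { {  0, 1, 0 }, { 0,  0,  1 } },   /* +Y */
            { {  0, -1, 0 }, { 0,  0, -1 } },  /* -Y */
            { {  0, 0, 1 }, { 0, -1,  0 } },   /* +Z */
            { {  0, 0, -1 }, { 0, -1,  0 } },  /* -Z */
        };

        for (int i = 0; i < 6; ++i) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluPerspective(90.0, 1.0, 0.1, 100.0);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(0.0, 0.0, 0.0,
                      face[i].dir[0], face[i].dir[1], face[i].dir[2],
                      face[i].up[0],  face[i].up[1],  face[i].up[2]);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            draw_parallel_stripes();
            /* glCopyTexImage2D or glReadPixels into the i-th face here */
        }
    }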

Yup, I do describe how to generate the spheremap on that page but it’s pretty high level.

genmesh.c is not available unless you have the developer’s toolbox (and even then I’m not sure these days).

Several things frustrate me about my old demos and their availability.

http://www.sgi.com/products/software/performer/whitepapers.html

Thanks for the additional explanations! Now it’s more or less clear to me.

By the way, your example says the texture map can be packed into a “2x3 sub-textures” rectangle to eliminate the black regions in the texture. I guess there’s some trick that allows generating the right texture coordinates in that case, but I don’t see any way to affect OpenGL’s sphere-map algorithm - could you please clarify this point a little?

Hi dorbie!

If you have a look at
http://www.infocom.nnov.ru/bad_mapping.jpg - it’s what I get in my demo now (the sphere is drawn at point [0, 0, 0] in world coordinate space) - you will definitely see the difference from the correct result I obtained uisng NeHe’s Lesson23 demo and the same image file: http://www.infocom.nnov.ru/correct_mapping.jpg

I’m completely confused. NeHe’s demo is simple and clear - nothing to misunderstand in fact.

My result looks as if the generated texture coordinates were shifted somehow. However, forcing the identity transformation for texture coordinates right before drawing the model didn’t help, nor did applying a translation along s and/or t, so it’s unlikely to be a texture-matrix error.
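
(For completeness, this is what I mean by “forcing the identity” - a simplified excerpt; drawModel() just stands for the library’s drawing entry point:)

    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    drawModel();   /* placeholder for the CAD library's draw call */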

Unfortunately, my own demo is based on a huge existing application that includes an OpenGL-based CAD model rendering library, and practically the only way to debug is grepping the existing sources for specific OpenGL calls… By now I’m exhausted and have no idea at all how to proceed.

Thus, I would be extremely thankful if you gave me a hint about the potential origin(s) of my problem!

P.S. I use the MSDN library as my basic OpenGL reference. Unfortunately, it contains a very poor description of the glTexGen() functions - it definitely lacks illustrations and examples. The OpenGL Red Book also pays little attention to them. So any recommended reading dedicated to this OpenGL feature would be much appreciated!

Best regards,
Sergey

Could anybody except dorbie help me please?

Hi all!

I’m still thinking about the possible reasons for http://www.infocom.nnov.ru/bad_mapping.jpg…

After spending hours poring over the description of glTexGen() behavior in GL_SPHERE_MAP mode, I wonder whether the following two things might be spoiling the resulting scene (quick experiments for both are sketched after the list):

  1. Normals generated for my sphere might not be normalized.

  2. Some transformation may be applied to the MODELVIEW and PROJECTION matrices… In the formula describing the reflection-vector calculation for GL_SPHERE_MAP mode, a unit vector from “the origin” to the current vertex is mentioned - what happens if, for example, my sphere has been translated? I guess the nicest result can be achieved only if the sphere is located at the coordinate-system origin.
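
Quick experiments for both points (my guesses, not confirmed fixes - standard GL state I’d drop in right before the model is drawn):

    /* Point 1: let OpenGL renormalize normals after the modelview
       transform; if the picture changes, the normals were the culprit */
    glEnable(GL_NORMALIZE);

    /* Point 2: dump the matrices the library leaves in place just
       before drawing, to see which transforms are actually applied */
    GLfloat mv[16], pr[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, mv);
    glGetFloatv(GL_PROJECTION_MATRIX, pr);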

Thanks in advance!

Originally posted by sansan:
Could anybody except dorbie help me please?
w.t.f.

:) I assume you thought I was ignoring you; I wasn’t, I just don’t camp here. It’s an open discussion forum, not a support line.

I think your mapping is determined by the orientation of the stripes when you do the cubemap render, and IMHO yours looks more correct than theirs, but it looks like you drew rings and not parallel planar stripes.

A cylinder map with parallel stripes should have a couple of vanishing points, and they should be visible on a sphere unless you get close and they end up at the edge. I suppose the vanishing points may not be there because you’d have to render infinitely long stripes, but there should be an obvious converging point of extinction. The rest is just the orientation of the stripes w.r.t. the cubemap faces.

Originally posted by dorbie:
:) I assume you thought I was ignoring you, I wasn’t

:) Not exactly - I’d rather say you proposed a somewhat complicated solution, and it now seems that getting sphere mapping to work is enough for me. I’m sorry for being impolite - I didn’t mean to be, really! :P

To be completely clear, my texture is just a square with evenly spaced vertical black and white stripes - not a cube map.

That’s why I’m so surprised to see such a strange picture in my app, while it looks so pretty in the “pure OpenGL” demo. I don’t know how one could even get such “bad mapping” using the texture I’ve mentioned and GL_SPHERE_MAP mode :( :( :(

So now the question is not “how do I draw this?” but rather “how could I possibly draw such an awful scene using a very simple texture and GL_SPHERE_MAP?”