Reflections

Ok, there are 2 questions I want to ask about reflections:

  1. What is a sphere map? I looked at all the tutorials, and from what I hear it can only reflect bitmap images but not any rendered scenes such as rendered quads, polygons, etc… Please explain in detail if possible. Other tutorials seem to give me misconceptions.

  2. Is there any way to just have a reflective material that reflects everything in the scene (e.g. the way Pov-Ray does it) without the planar reflection techniques? It seems to make programming a little more hectic if I have to do that for every object and every plane.

Thanks
cwhite40

No, OpenGL cannot “automatically” create
reflections for you. OpenGL just draws the
one triangle (or quad, or strip) that you
ask it to, with exactly the texture, color
and lighting you ask it to. OpenGL only
uses information knowable to it in that
instance.

Yes, you have to do all the extra
calculations and multiple render passes to
get reflection into your engine. What’s
worse, OpenGL doesn’t (yet?) support the
cubic environment mapping way of doing
reflections, which is easier to set up and
create using hardware rendering (basically
you render from the point-of-view of the
reflective “thing” into an area which you
then turn around and use as a texture).

Good luck! As you say, it can become
“hectic”.

OpenGL supports cubic environment mapping through the GL_ARB_texture_cube_map extension.
Core OpenGL (without extensions) supports GL_SPHERE_MAP texture coordinate generation (predominantly used for environment mapping and not actual reflections). There are extensions like GL_NV_texgen_reflection which make life easier, but doing your own math is (almost always) faster on current hardware.
The most practical way to get a mirror reflection is to draw your geometry, flip an axis, and redraw the geometry (optionally using the stencil buffer to clip the reflection to the mirror).
Forget what you have learned from Pov-Ray.

Yes, I know that method, but it seems limiting. For example, what if, say, you wanted to put a reflection of the scene onto a cylinder, sphere, or torus? Do you have any code (please, any) that could clarify this?

cwhite40

My method works only with planes. If you want a reflection on an arbitrary shape, then you have to render your scene (from the point-of-view of the
reflective “thing”, as bgl said) to a texture (read the color buffer and store it as a texture) and map that texture onto the sphere (this is where the GL_SPHERE_MAP texgen mode comes in) or the other shape. But this is very slow, because each frame you upload a new texture to the hardware.

Any code/websites for that?

cwhite40
coffee

I just figured out a pretty crap way to make a reflection. I had a 3D cube with a reflection off of a floor. First I rendered the cube, then I made the floor using GL_QUADS and added some blending to make it transparent, and then I rendered the same exact 3D cube, except that I put it directly under the original cube and the floor, to make a “reflection.” Oh yeah, I used blending for the second cube too.

Hope this helps somebody

The Nvidia site (http://www.nvidia.com/Developer.nsf) has a heavy load of examples.
Keep in mind that reflections around spheres, cylinders, etc. are called environment mapping. The term “reflection” is used for planar reflections.
I’m clarifying that to help you find the right document.

Ok, about the above messages… I used a program called Blender, which uses the above methods for “reflections”. I know what you mean, but does anyone have code to turn a rendered scene into a texture? Also, as a second question related to that, does anyone have OpenGL scene-to-bitmap/TGA/whatever code? For instance, rendering a scene to a bitmap/TGA/whatever?

cwhite40

I haven’t tried it with a texture yet. NeHe has a tutorial about loading TGA files; the URL is http://nehe.gamedev.net

Spherical Environment Mapping takes a sphere that encloses an object. Each vertex to be mapped is projected onto this sphere, and the sphere coordinates are then turned into texture coordinates for use with a simple 2D texture. Cubic Environment Mapping does the same thing, except with a cube instead of a sphere. There are also parabolic mapping and bump mapping.

The only way to make a true reflection from a sphere map is to render the world from the location of the object onto a texture used for environment mapping. It’d be easier to render onto a cube than a sphere, so cubic environment mapping should help you there.

If you want photo-realistic reflections like Pov-Ray, be prepared for slow rendering times. Realtime raytracing is not quite here yet.

/skw|d

Well… it all depends on what you mean by real time. It’s a lengthy argument I’ve had before and don’t want to have again: real time strictly means bounded delay, i.e. guaranteed response time, NOT a high frame rate.

But! There is some very cool transputer-style stuff that does ray tracing on the fly. Very cool, but (obviously) very specialised.

cheers
John

[This message has been edited by john (edited 09-26-2000).]

That is nice, but that doesn’t help cwhite40.

How can I read from a color buffer into a memory bitmap? I’m using double buffering WITHOUT stereo, so I don’t know exactly which color buffer to choose for reading. And how do I use glReadPixels()? I need to know this to make a reflection and take a snapshot of the scene…

Thanks…

Yah what he said.

cwhite40

Originally posted by cwhite40:
[b]Also as a second question related to that does anyone also have an OpenGL scene to Bitmap/tga/whatever code? For instance rendering a scene to a bitmap/tga/whatever?

:eek: cwhite40 :eek:[/b]

The OpenGL 1.1 glCopyTexImage2D is supposed to load texture image data directly from the framebuffer. So, if your question is related to mirroring and reflections, you could render the scene into the back buffer with the viewpoint reflected by the (mirror) polygon, CopyTexImage it into the texture, then render the actual scene.
In order to update a texture, don’t forget glBindTexture first.
Have good mirrors!

Thanks for everyone’s help!! Now it’s off to rendering city and other things!!

cwhite40

Originally posted by KRONOS:
How can I read from a color buffer to a memory bitmap? I’m using double buffering WITHOUT stereo, so I don’t know exacly what color buffer to choose for reading. Thanks…

I’d say the back buffer:

  1. render reflection texture from the point of view of the reflecting object (goes into the back buffer)
  2. copy back buffer to texture image
  3. render scene from camera point of view, map texture image on reflecting object.
  4. swap buffers

Maybe you could also set up a special buffer used only for rendering the reflective textures (e.g. a second OpenGL context using just this one invisible buffer), and then use that buffer directly as the reflection texture.
That would save the copy every frame and result in better performance:

  1. render from the reflective object’s point of view into the invisible buffer
  2. bind invisible buffer as texture
  3. render reflective object with the reflection texture into back buffer
  4. repeat steps 1-3 for every reflective object
  5. swap buffers

Correct me if I’m wrong

Originally posted by Dodger:
Maybe you could also set up a special buffer for only rendering the reflective textures (e.g. for a second OpenGL context using only this one invisible buffer), and then just use this buffer as the reflection texture.
That would save the copying everytime and result in better performance:

The best for performance is to employ glCopyTexImage2D (the image copy stays in the card’s graphics memory), but, since you have to precede it with glBindTexture on (the texture of) the final OpenGL context, you cannot in principle do it between two different contexts. Or, probably better put, glCopyTexImage2D has a single target context for both src and dest.

I don’t know if this would be possible instead with the old, non-texture-object approach to texture definition.

So, for the two-window convenience, you have to switch to the explicit read-pixels-then-redefine-texture approach.

This may be a bit off topic, but speaking of glCopy(Sub)TexImage2D: we’ve used it to modify an image each frame, thinking that, yes, it should be way faster than glReadPixels plus glTex(Sub)Image2D, because that path involves sending the data to and from main memory through AGP, whereas glCopy works within video memory. Well, no: glCopy turned out to be way, way slower. So, if there are any nVidia guys out there (we’ve tested this on a GeForce card), can you give us an explanation?