Cubemap reflections without distortion in the FFP

I was wondering if it’s possible to implement cube reflection mapping on the fixed function pipeline without the annoying distortions / waviness you normally get. I’ve implemented this using GLSL, but compatibility testing has been a nightmare, so I wanted to see whether the fixed function pipeline could do it instead.

I understand that the distortions are due to the linear interpolation between vertices. I get this same effect if I normalize my texcoords in GLSL. If it were possible to set up glTexGen on the R, S, and T axes, but have OpenGL automatically multiply the coordinate by each vertex normal, I think it would look right. Is there any way to do this in the FFP, or does anyone have another solution?
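For reference, the standard texgen setup looks roughly like this (just a sketch; cubemapTex and inverseViewRotation are placeholder names) - the reflection vector is generated per vertex and then interpolated linearly, which is exactly where the waviness comes from:

```cpp
// Standard FFP cube-map reflection via texgen (OpenGL 1.3+); sketch only.
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTex);           // cubemapTex: placeholder cube map object

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);  // reflection vector generated per vertex...
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glEnable(GL_TEXTURE_GEN_S);                               // ...then interpolated linearly across the triangle
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

// The generated vector is in eye space; rotate it back to world space with the
// texture matrix so the cube map stays fixed relative to the scene.
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(inverseViewRotation);                       // inverseViewRotation: rotation part of the inverse view matrix (float[16])
glMatrixMode(GL_MODELVIEW);

glEnable(GL_TEXTURE_CUBE_MAP);
// ... draw geometry (with per-vertex normals) ...
glDisable(GL_TEXTURE_CUBE_MAP);
```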

Thanks.

The whole idea behind having shader-based technologies like glslang is to be able to do these kinds of things. No, the fixed-function pipeline cannot multiply a varying by another varying and then use that as a texture coordinate.

You might get one of the older, non-glslang shader forms to do it. ARB_fp can, as can the Radeon 8500’s ATI_fragment_shader extension.
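An untested sketch of what that might look like in ARB_fp, assuming the app feeds the eye-space view vector through texcoord 0 and the normal through texcoord 1, with the cube map on unit 0:

```cpp
// Untested sketch: per-fragment reflection lookup with ARB_fragment_program.
// Needs <cstring> and an extension loader for the ARB entry points.
static const char* fp =
    "!!ARBfp1.0\n"
    "TEMP N, NdotV, R;\n"
    "DP3 N.w, fragment.texcoord[1], fragment.texcoord[1];\n"
    "RSQ N.w, N.w;\n"
    "MUL N.xyz, fragment.texcoord[1], N.w;\n"        // renormalise N per fragment
    "DP3 NdotV.x, N, fragment.texcoord[0];\n"
    "MUL R.xyz, N, NdotV.x;\n"
    "MAD R.xyz, R, 2.0, -fragment.texcoord[0];\n"    // R = 2(N.V)N - V
    "TEX result.color, R, texture[0], CUBE;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)std::strlen(fp), fp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);
```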

Okay, I was just wondering.

For the first release of my engine, I think I want to stay away from GLSL, due to the incompatibility problems. I’m very disappointed with how shaders have been implemented, for the following reasons:

-Shaders get run on the CPU on older hardware, which is unusably slow, with no reliable way to tell the program this has happened. All the user knows is that the program runs at 2 FPS.

-The number of instructions required just to emulate the fixed function pipeline exceeds the instruction limit on older hardware.

-ATI cards sometimes cause crashes with no explanation.

-Inconsistencies in the API: some cards ignore built-in variables, ATI doesn’t support for loops, NVIDIA ignores user clip planes but ATI doesn’t.

-10-30% reduction in framerate, just by using a shader that does the same thing as the FFP.

-Added complexity. If ATI and NVIDIA couldn’t even release drivers without a few bugs now and then for the fixed-function pipeline, they’re going to have even more bugs trying to release drivers for a programmable pipeline.

The whole concept of relying on shaders seems wrong to me. I mean, they are great for a few things you wouldn’t be able to do otherwise, but it seems to me that a fixed function pipeline that can handle bumpmapping and a few other features properly would be faster, more reliable, and easier to use. I’m going to have to rely on them, since there is no alternative right now, but I think eventually we’ll see a return to fixed-function pipelines with more features. And then the marketing will say “look, the features are built right into the hardware, for ultra-fast processing!”.

-The number of instructions required just to emulate the fixed function pipeline exceeds the instruction limit on older hardware.
That’s because the programmable shader units on old hardware really are not capable of replicating the fixed function pipeline :stuck_out_tongue:

it seems to me that a fixed function pipeline that can handle bumpmapping and a few other features properly would be faster, more reliable, and easier to use
Easier to use, perhaps, for simple cases. Have you ever tried to implement something like dot3 bumpmapping without shaders? I don’t know about you, but with shaders it’s straightforward, while with combiners it’s really complicated.
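For comparison, here is roughly what plain fixed-function dot3 looks like with the texture_env DOT3 combiner (a sketch; normalMapTex and diffuseTex are placeholders), versus the one-liner it becomes in GLSL:

```cpp
// Fixed-function dot3 bump mapping via texture_env_combine / DOT3 (OpenGL 1.3).
// In GLSL the whole thing collapses to something like:
//     float diff = dot(normalize(L), texture2D(normalMap, uv).rgb * 2.0 - 1.0);

// Unit 0: dot the normal map against the light vector packed into the vertex colour.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, normalMapTex);                   // placeholder normal map
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);        // normal from the normal map
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);  // biased light vector in glColor
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

// Unit 1: modulate the dot3 result with the base texture.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, diffuseTex);                     // placeholder diffuse map
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

// ...and the application still has to compute the tangent-space light vector per
// vertex, renormalise it, bias it into [0,1] and feed it through the vertex colour.
```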

And faster, more reliable? No.

Current hardware does not have fixed function built into the chip. At the current level of technology it doesn’t make sense to waste transistors on a few special cases that can be solved by the programmable pipe.

The driver just inserts suitable shaders when you use the fixed function. So the fixed function can never be faster or more reliable than shaders (on new hardware).

halo, do you really think people without cards with decent shader support are going to be playing the kind of FPS games your engine seems optimised for?
Put a minimum hardware requirement on your engine and produce something more competitive in a feature-driven market. Nobody is going to care that you support GeForce 3s when you finally release your engine.

That’s because the programmable shader units on old hardware really are not capable of replicating the fixed function pipeline.
That’s my point. A lot of times these cards will compile the shader and run it on the CPU, with no reliable way to alert my program to this, so the program gets 2 FPS with no explanation to the user. If you’re programming a tech demo that only needs to run on one machine, shaders are much easier, but lately I have been going to Best Buy every weekend to buy more graphics cards to test on, and to return the cards I bought the previous week!

About 27% of Steam users’ cards default to DirectX 8 or lower, and 54% default to shader model 2. These are DX specs, but you can make guesses about their OpenGL capabilities from this:
http://www.steampowered.com/status/survey.html

The problem isn’t shader cards vs. non-shader cards. I can easily support both. The problem is good shader cards versus crappy shader cards that have a low instruction count limit, particularly the ATI x550-x800 series.

I think Valve’s decision to make HL2 compatible all the way back to crappy Intel graphics was a good one, because it made their potential market much bigger. I can’t afford to cut out perhaps 35% of my market.

Now I have shader rendering written already, but I think for the first release I am going to disable it, because it will probably double the amount of initial bug reports and compatibility issues I have to deal with.

Fair enough. What’s the time scale you’re looking at? This is an engine you’re writing, yes? So you’re expecting another developer to use your engine to write a game?
If so, you’ve got to ask yourself how long it will be before that developer releases that game.
What year will you be in then? How many of the people with GF4 MXs will still expect new software to run on that card? It’s the equivalent of me expecting the latest game releases to run on my P2 266 MHz / 32 MB + Matrox Mystique + Voodoo 2 combo.
You’ve got to think ahead and be realistic, son - no matter how much fun you’re having tinkering with unsupported extensions and weird combiner switches.
Valve are currently writing Source Engine 2, and you’re using their last-generation middleware as a reason to target hardware that’s ancient by today’s standards, in three years’ time?

-Shaders get run on the CPU on older hardware, which is unusably slow, with no reliable way to tell the program this has happened. All the user knows is that the program runs at 2 FPS.
Check for:
GL_ARB_shader_objects + GL_ARB_shading_language_100 + GL_ARB_fragment_shader
Or:
GL_VERSION >= 2.0
Such hardware should support GLSL.
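A rough version of that check might look like this (sketch; it treats a GL_VERSION starting with 2.x as good enough and otherwise looks for the ARB extensions - I’d also check GL_ARB_vertex_shader if you use vertex shaders):

```cpp
#include <cstring>
#include <cstdlib>
#include <GL/gl.h>   // or your platform's GL header

// Returns true if 'name' appears as a whole token in the space-separated GL_EXTENSIONS string.
static bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext) return false;
    const size_t len = std::strlen(name);
    for (const char* p = std::strstr(ext, name); p; p = std::strstr(p + 1, name)) {
        const bool startOk = (p == ext) || (p[-1] == ' ');
        const bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk) return true;
    }
    return false;
}

// Rough GLSL-capability check along the lines described above.
static bool supportsGLSL()
{
    const char* ver = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    if (ver && std::atof(ver) >= 2.0) return true;            // "2.0.x vendor..." parses as 2.0
    return hasExtension("GL_ARB_shader_objects")
        && hasExtension("GL_ARB_shading_language_100")
        && hasExtension("GL_ARB_vertex_shader")               // wanted if you use vertex shaders
        && hasExtension("GL_ARB_fragment_shader");
}
```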
You should also not use functionality that is not widely supported. You may want to have a look at this thread: http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic&f=11&t=001330#000009

-ATI cards sometimes cause crashes with no explanation.
Yes. I’m performing some tests on application startup to determine whether the GPU can do VTF or FP16 filtering/blending. One of these tests throws an exception on ATI. It can be caught with a simple try/catch block, and in that case I assume the feature is unsupported or unreliable.
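In spirit it looks something like this (the probe helpers are hypothetical names; each one would render a tiny off-screen test and read the result back):

```cpp
// Startup feature probes in the spirit described above. testVertexTextureFetch()
// and testFP16Blending() are hypothetical helpers: each sets up a tiny off-screen
// test, draws, and reads the result back. If the driver blows up badly enough to
// throw, the feature is simply treated as unsupported/unreliable.
bool gSupportsVTF       = false;
bool gSupportsFP16Blend = false;

void probeGPUFeatures()
{
    try {
        gSupportsVTF = testVertexTextureFetch();
    } catch (...) {
        gSupportsVTF = false;        // threw on this driver: assume unsupported or unreliable
    }

    try {
        gSupportsFP16Blend = testFP16Blending();
    } catch (...) {
        gSupportsFP16Blend = false;
    }
}
```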

Inconsistencies in the API: some cards ignore built-in variables, ATI doesn’t support for loops, NVIDIA ignores user clip planes but ATI doesn’t.
“If gl_ClipVertex is not specified and user clipping is enabled, the results are undefined” - the problem is that gl_ClipVertex is not supported on Radeon 9 / Radeon X (I don’t know about the X1k).
My solution looks like this:
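(The actual snippet didn’t survive in this post; what follows is only a sketch of that kind of workaround, assuming you decide at shader-build time, e.g. from the GL_VENDOR string, whether to write gl_ClipVertex.)

```cpp
#include <string>
#include <cstring>
#include <GL/gl.h>   // or your platform's GL header

// Sketch only (not the original code): write gl_ClipVertex only on drivers that
// accept it, by prepending a #define chosen from the GL_VENDOR string.
std::string buildVertexShaderSource()
{
    const char* vendor = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const bool useClipVertex = vendor && std::strstr(vendor, "NVIDIA") != 0;

    std::string src;
    if (useClipVertex)
        src += "#define USE_CLIP_VERTEX\n";
    src +=
        "void main()\n"
        "{\n"
        "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;\n"
        "#ifdef USE_CLIP_VERTEX\n"
        "    gl_ClipVertex = eyePos;   // NVIDIA wants this for user clip planes\n"
        "#endif\n"
        "    gl_Position = gl_ProjectionMatrix * eyePos;\n"
        "}\n";
    return src;
}
```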

-10-30% reduction in framerate, just by using a shader that does the same thing as the FFP.
True for older hardware. On modern hardware, shaders can actually give you some extra speed over the FFP. Memory access isn’t getting much faster, but computing power is, so on a modern GPU it can be better to compute a value with math rather than with a texture lookup, especially if you’re already using a few textures.
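A classic illustration of that trade-off (sketch only, era-appropriate GLSL): normalizing a vector with a normalization cube map versus doing the math.

```cpp
// Illustration only: the same operation via a texture lookup vs. pure math.
// On SM2-class cards the lookup was often the faster path; on current GPUs the
// math version usually wins and frees up a texture unit.
const char* viaLookup =
    "uniform samplerCube normCube;                        \n"
    "vec3 fastNormalize(vec3 v) {                         \n"
    "    return textureCube(normCube, v).xyz * 2.0 - 1.0; \n"   // normalization cube map lookup
    "}                                                    \n";

const char* viaMath =
    "vec3 fastNormalize(vec3 v) {                         \n"
    "    return normalize(v);                             \n"   // pure ALU, no memory traffic
    "}                                                    \n";
```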

-Added complexity. If ATI and NVIDIA couldn’t even release drivers without a few bugs now and then for the fixed-function pipeline, they’re going to have even more bugs trying to release drivers for a programmable pipeline.
True, there were some nasty bugs in the GLSL implementations, but most of them are fixed now. The truth is that any new functionality can have bugs; the more complex it is, the longer it takes for the drivers to become good enough, but GLSL has been around for a while now.