Fixed Function Emulation?

Hi everyone,

I am curious to know how NVIDIA implements the OpenGL fixed-function pipeline in its drivers. Is it emulated internally via shaders, or is it really still done with dedicated hardware?

What about AMD?

The background is that I want to better understand some performance aspects, like GL_LIGHT_MODEL_TWO_SIDE being slow in FF, and so on…
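
For reference, this is the kind of fixed-function state I mean; a minimal sketch of enabling two-sided lighting through the legacy API:

```c
/* Minimal sketch of the fixed-function state in question: two-sided
 * lighting makes the lighting equations get evaluated for both the
 * front and the back material, which is where the extra cost comes from. */
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
```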

Also, any links to good pages shading :slight_smile: some light on the topic of “how does a driver work internally” are appreciated.

Thanks a lot…

Given the fact that the FF pipeline is deprecated and dropped from core, my answer has to be that it’s emulated (I could be wrong of course!).
So if you are developing anything for OpenGL 3.3+, I suggest you make it future-proof and code for the core profile - hence everything must use shaders.
If your intended audience is GL 2.x, or a mixture with GL 3.x hardware, then you can still code for FF where appropriate (compatibility profile); of course some effects are too complicated in FF, and shaders are a more natural way to express your ideas. I still use the compatibility profile because my font and GUI rendering still depend on display lists.
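
For illustration, here is roughly how you pick one profile or the other at context creation; this assumes GLFW as the windowing library, which is just an example, not necessarily what anyone here uses:

```c
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Core profile: all fixed-function entry points are gone. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    /* For the compatibility profile (display lists, glBegin, FF lighting)
     * you would hint GLFW_OPENGL_COMPAT_PROFILE instead, or simply
     * request a 2.1 context. */
    GLFWwindow *win = glfwCreateWindow(640, 480, "core profile", NULL, NULL);

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```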

Dedicated transistors did make some FF processing slightly faster than shader-based equivalents, but on the whole this sort of thing is usually not the main bottleneck in an application anyway.

I do not see why the answer must be that it is emulated. I am not saying that you are wrong, but graphics hardware is not developed for OpenGL only - in fact I think OpenGL coders are a minority - so assuming that it must be shaders just because OpenGL says so does not seem conclusive.

Why did you ask this question if you’re just going to disregard the answer? And D3D10 killed off its fixed-function pipeline too; it’s not like the ARB made the decision in a vacuum.

In any case, hardware hasn’t had a fixed-function pipeline since the GeForce FX/Radeon 9500 days.
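
To make that concrete: when an application sets up classic transform-and-lighting state, the driver builds an equivalent shader behind the scenes. This is only a rough sketch of what such a generated vertex shader could look like, not actual driver output; the gl_* names are just the standard GLSL 1.20 compatibility built-ins:

```c
/* Rough sketch only: the sort of vertex shader a driver might generate
 * internally to emulate fixed-function transform plus one directional
 * light, written against the GLSL 1.20 compatibility built-ins. */
const char *ff_emulation_vs =
    "#version 120\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n"
    "    vec3 l = normalize(gl_LightSource[0].position.xyz);\n"
    "    float ndotl = max(dot(n, l), 0.0);\n"
    "    gl_FrontColor = gl_FrontLightProduct[0].ambient\n"
    "                  + gl_FrontLightProduct[0].diffuse * ndotl;\n"
    "}\n";
```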

It was not my intention to disregard the answer, but if I just accepted any answer without further questioning, even though it does not fit my understanding or does not seem logical (in my world), I would soon have a mess of unconnected dots in my head.

Also, BionicBytes was the one who started questioning his own answer… (which I also do sometimes if I am not sure, and which I think is a good thing).

So, if it looked like pure disregard, I am sorry.

That said, I simply did not know that,

so thanks for clearing that up.

Just look at it from a business POV: when all benchmarks and games use shaders exclusively, and CAD apps in FF mode aren’t fillrate-limited, do you invest transistors in FF or in even more programmable power? :slight_smile:
To keep the CAD market, some rasterizer and triangle-indexing features need to be kept: wide lines, smooth lines/polygons, edge flags (for quads in wireframe). And then you also make low-precision versions of arithmetic instructions, as FF can easily stay limited to them (note how the GLSL precision qualifiers are more meaningful on GeForces).
Then you create two versions of the drivers - one for gamers (GeForce), and one optimized for pushing more glBegin() geometry from system RAM to VRAM (Quadro).
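
For reference, these are the legacy calls I mean; a quick sketch of the kind of state a CAD-style app would still set through the compatibility profile:

```c
/* The legacy rasterizer features mentioned above, as a CAD-style app
 * would request them. */
glLineWidth(3.0f);                          /* wide lines                        */
glEnable(GL_LINE_SMOOTH);                   /* antialiased lines                 */
glEnable(GL_POLYGON_SMOOTH);                /* antialiased polygon edges         */
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);  /* wireframe rendering               */
glEdgeFlag(GL_FALSE);                       /* hide the diagonal a quad is split along */
```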

Any remaining doubts :slight_smile: ?

Also note how any of this meaning is pretty much an NVIDIA ‘extension’ of GLSL :wink:

That all sounds logical, no doubts there, but my question was more like “did it already happen?” (the hardware transition from FF to shaders)… obviously it did happen and nobody told me until now :wink:

Thanks again…
