I'm confused

I’ve been into OGL for some time now. I’ve already tried out game engine creation and vertex programming, and I really like the environment. But I have one question that raises some doubts. OGL is partially extension based, so if I want to use new FX I have to use SOMETHING_that_the_new_card_supports, and I’m a little afraid of what will happen when I try to write more complex programs. Currently I have one program using Cg, which means NV_vertex_program or ARB_vertex_program, unsupported by the TNT family, and soon I’ll have to move to ARB_fragment_program, which needs at least a GeForce3. And there are even more useful extensions: point_sprites, mipmap_sgis, etc. How do advanced game and FX programmers handle this stuff? Do they write 20+ different code paths to support old hardware? And what happens in DX?

Actually, for ARB_fragment_program you’ll need at least a Radeon 9700 or GeForce FX.
But yeah, you have to write different code paths for older hardware, or use some kind of script-based system (like the D3D effect framework or CgFX) that lets you specify different techniques for the same effect depending on the targeted hardware.
In DirectX this is no different, except that DirectX has no extension mechanism. You still have to query for hardware capabilities and write different code paths.
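In OpenGL the usual approach is to check the extension string once at startup and pick a render path from it. Here’s a minimal sketch, assuming a C codebase; the path names and the has_extension() helper are made up for illustration, only glGetString(GL_EXTENSIONS) and strstr() are standard calls:

```c
/* Sketch of runtime path selection in OpenGL. Requires a current GL context. */
#include <string.h>
#include <GL/gl.h>

/* Returns nonzero if 'name' appears in the extension string.
   (A robust version should match whole tokens, not substrings.) */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

typedef enum {
    PATH_ARB_FRAGMENT_PROGRAM,  /* Radeon 9700 / GeForce FX class   */
    PATH_REGISTER_COMBINERS,    /* GeForce3/4 class (NV extensions) */
    PATH_FIXED_FUNCTION         /* TNT class: plain multitexture    */
} RenderPath;

static RenderPath choose_render_path(void)
{
    if (has_extension("GL_ARB_fragment_program"))
        return PATH_ARB_FRAGMENT_PROGRAM;
    if (has_extension("GL_NV_register_combiners"))
        return PATH_REGISTER_COMBINERS;
    return PATH_FIXED_FUNCTION;
}
```

Each path then gets its own setup/draw routines, and the rest of the engine just calls through whichever one was selected, so the special-casing stays in one place.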

Then this must be wrong? [link]http://cgshaders.org/forums/viewtopic.php?t=626&sid=89fb9befc210e97526ae7b00d65e0a3a[/link]

In that post Jason is talking about the long anticipated fp20 profile for Cg. That means it’ll compile down to the nvparse texture shader/register combiner scripts, which are supported from the GeForce3 onwards. ARB_fragment_program is a whole different story: it offers true fragment programmability, unlike the register combiners.
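If you’re using the Cg runtime anyway, one option is to ask it for the best fragment profile the current GPU supports and compile against that. A minimal sketch, assuming a Cg runtime version that provides cgGLGetLatestProfile and cgGLIsProfileSupported; the fallback order is just an illustration:

```c
/* Sketch: pick the best supported Cg fragment profile at runtime.
   Assumes the Cg/CgGL runtime (cg.h, cgGL.h) and a current GL context. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>

static CGprofile pick_fragment_profile(void)
{
    /* Ask the runtime for the best fragment profile on this GPU/driver. */
    CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);

    if (profile == CG_PROFILE_UNKNOWN) {
        /* Explicit fallback: arbfp1 (R300/NV30 class), then fp20 (GF3/GF4). */
        if (cgGLIsProfileSupported(CG_PROFILE_ARBFP1))
            profile = CG_PROFILE_ARBFP1;
        else if (cgGLIsProfileSupported(CG_PROFILE_FP20))
            profile = CG_PROFILE_FP20;
    }

    if (profile != CG_PROFILE_UNKNOWN)
        cgGLSetOptimalOptions(profile);  /* compiler options tuned for that profile */

    return profile;
}
```

You’d then compile your effect once per profile you actually ship, which is basically the CgFX “multiple techniques per effect” idea done by hand.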