DX Effects - would be nice if GL had something like this

I’ve been looking into the DX utility library, and it has a nice way of constructing vertex/pixel shaders, with things called ‘effects’. They’re basically shader files which can define multiple ways of achieving a particular effect (such as diffuse+specular bumpmapping in 2 or 4 texture units).
I think glu should be expanded to include something like this.
What do you think?

Here’s an example of an effect that adds two textures together, with or without 2 texture units:

Step 1: Define an effects file…
// Sample Effect
// This effect adds two textures, using single pass or multipass technique.

texture tex0;
texture tex1;

// Single pass

technique t0
{
    pass p0
    {
        Texture[0] = <tex0>;
        Texture[1] = <tex1>;

        ColorOp[0] = SelectArg1;
        ColorArg1[0] = Texture;
        ColorOp[1] = Add;
        ColorArg1[1] = Texture;
        ColorArg2[1] = Current;
        ColorOp[2] = Disable;
    }
}

// Multipass

technique t1
{
    pass p0
    {
        Texture[0] = <tex0>;

        ColorOp[0] = SelectArg1;
        ColorArg1[0] = Texture;
        ColorOp[1] = Disable;
    }

    pass p1
    {
        AlphaBlendEnable = True;
        SrcBlend = One;
        DestBlend = One;

        Texture[0] = <tex1>;
        ColorOp[0] = SelectArg1;
        ColorArg1[0] = Texture;
        ColorOp[1] = Disable;
    }
}


Step 2: Load the Effect File
ID3DXEffect* m_pEffect;
HRESULT hr;

// Assumes that m_pd3dDevice has been initialized
if(FAILED(hr = D3DXCreateEffectFromFile(m_pd3dDevice, "effect.fx", &m_pEffect, NULL)))
    return hr;

if(FAILED(hr = FindNextValidTechnique(NULL, &technique)))
    return hr;

m_pEffect->SetTexture("tex0", m_pTexture0);
m_pEffect->SetTexture("tex1", m_pTexture1);


Once the effect file is created, ID3DXEffect::FindNextValidTechnique returns a technique that has been validated on the hardware.

Step 3: Render the Effect
UINT uPasses;

// The third parameter is the vertex stride; VERTEX here stands in
// for the application's vertex structure.
if(FAILED(hr = m_pd3dDevice->SetStreamSource(0, m_pVB, sizeof(VERTEX))))
    return hr;

m_pEffect->Begin(&uPasses, 0 );

// The 0 specifies that ID3DXEffect::Begin and ID3DXEffect::End will
// save and restore all state modified by the effect.

for(UINT uPass = 0; uPass < uPasses; uPass++)
{
    // Set the state for a particular pass in a technique
    m_pEffect->Pass(uPass);
    m_pd3dDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, dwNumSphereVerts - 2);
}

m_pEffect->End();



OpenGL 2.0 provides much more than simple computations like that. Besides, that Direct3DX stuff isn’t necessarily optimized.

Err, no - that was just a simple example.
It also enables you to embed vertex shader and pixel shader assembly code as well.
I haven’t read much on ogl2.0, so it’s nice to hear there’ll be something like this in there. When will 2.0 be released? And how will it be released? Is Microsoft going to cooperate? Or will it just be a huge list of extensions?
I think I’ll be directing most of my energy towards d3d from now on, opengl has become very untidy, and downright difficult (read annoying) to use - direct3d is very, very nice to use.

Noooo… don’t turn to the dark side!

here are some examples of opengl shader code:

varying float lightIntensity;
varying vec3 Position;

uniform float Offset;

void main (void)
{
    vec4 noisevec;
    vec3 color;
    float intensity;

    noisevec = texture4(7, 1.2 * vec3 (Position.x + 0.5,
        Position.y + 0.5, Position.z + 0.5 - Offset));

    intensity = 0.75 * (noisevec[0] + noisevec[1]
        + noisevec[2] + noisevec[3]);

    intensity = 1.95 * abs(2.0 * intensity - 1.0);
    intensity = clamp(intensity, 0.0, 1.0);

    color = mix(vec3 (0.8, 0.7, 0.0), vec3 (0.6, 0.1, 0.0),
        intensity);

    color *= lightIntensity;
    color = clamp(color, 0.0, 1.0);

    gl_FragColor = vec4 (color, 1.0);
}


Now imagine making a couple of changes to an OpenGL 2.0 shader program. Suddenly the fire texture becomes a flowing earthen texture–all in real time (no recompilation of the actual program needed).

I do think that the D3DX effect files are pretty cool but the OpenGL 2.0 shaders just blow it away in how the effects are written. That little compiler thingy that 3dlabs released a while back is pretty cool. I just wish i was able to use the shader code i write in an app now. But i know i can’t, yet.


Originally posted by Nutty:
Noooo… don’t turn to the dark side!

But Nutty man, it’s just so much more elegant than opengl now. I used to think that it involved too many constants and macros, but opengl has overtaken it in this area…and it’s all down to ATI and NVIDIA not co-operating…the opengl api has disappeared under a layer of vendor-specific dll entry points and pseudo implementations of asm - d3d has capability structures, and standard interfaces to advanced functionality. This is just better.

There’s so much more money in d3d jobs, too

[This message has been edited by knackered (edited 05-10-2002).]

There’s so much more money in d3d jobs, too

I wouldn’t go that far and say that. OpenGL is used for more than just games. D3D is only used in games. OpenGL is also used for scientific visualizations, architectural walkthroughs, even in modern aircraft in the instruments, etc. And im sure these kind of jobs pay more than a d3d game writer.


>>There’s so much more money in d3d jobs, too <<

aye! i assume it’s a joke

90%+ of d3d use is in games, and game programming is the lowest-paid programming work you can get

Thanks for destroying my lifelong dream guys. ;p

He he, knackered can go haunt the D3D newsgroups and attack all the noobs for asking easy questions. Maybe he’ll turn them to OpenGL :P

Yes - I wonder if there’ll be a dorbie equivalent on those newsgroups…I do hope so.

The salaries in games programming look considerably more favourable than those in the simulation sector - all graphics programming is lower paid than, say, database programming - but that’s because it’s interesting.
The whole point is, I seem to be alone in thinking that d3d is now more logical and compact than opengl.

Originally posted by knackered:
The whole point is, I seem to be alone in thinking that d3d is now more logical and compact than opengl.

Grass is always greener on the other side of the fence.

What will you program for? pixel shaders 1.0, 1.1, 1.2, 1.3, 1.4? Will you program a codepath for each capbit?

Capbits are essentially the same as extensions, with the drawback that they are imposed by Microsoft instead of designed by an IHV and then selected by a committee to form part of the standard because of their usefulness. Another drawback is that OpenGL guarantees full implementation of the whole standard, while in D3D you always have to check the capbits.

Agreed, if you limit yourself, say, to pixel shaders 1.1, you have a common interface to program all ps 1.1 capable cards. But is it really that common? Some ps 1.1 cards have a [-8.0, 8.0] range, while others have a [-1.0, 1.0] range. The result: you will never know if your program works until you’ve tested every kind of card (you could sort of overcome this by testing your program against the reference rasteriser, not something you can do with a full-blown app, I guess).

Almost all the functionality of D3D is already standard in OpenGL or presented via extensions with a common interface (ARB_Cubemap, pbuffers). So what are the real problems of OpenGL? The shading language extensions, namely “pixel & vertex shaders” (with pixel shaders you can forget about the texenv/env4/crossbar extensions), and the vertex array extensions (Fence vs. Vertex Array Objects). For the latter you can still use either vanilla vertex arrays or compiled vertex arrays and expect the OpenGL driver to do its best.

In theory OpenGL 1.4 should bring a common vertex program interface, there still will be the problem of having a common “pixel shader” interface, but I think that most IHVs will eventually implement either NVidia or ATI spec, or opengl 2.0 shading language, depending on the timeframe and the graphics card capability.

Note that I see both D3D pixel & vertex shaders as a hack with very limited functionality to solve a temporary situation. I think that a high level shading language is the key so every IHV can implement it in the most efficient way for its hardware.

OpenGL 2.0 object & sync management will solve the VAO vs. VAR problems, and there shouldn’t be much problem to implement those in current graphics cards.

BTW AFAIK those “technique/pass” scripts are from ATI’s 8500 demo engine (see Alex Vlachos’ GDC presentation), so they may even be ported to OpenGL eventually (IP issues allowing).

EDIT: Added VAO and VAR to the non-common interface.

[This message has been edited by evanGLizr (edited 05-11-2002).]

I suspect there are quite a few database engineers CVs kicking around these days so that may not be the best career move.

Tell me about the great game dev salary after you get the job. It’s not universal, but much of the game dev community seems pretty insular. I know one engineer I respect who couldn’t get a job in game development, certainly not at a competitive salary. He told me about some of his interview experiences, and some of his interviewers were literally ignorant; I told him it was probably a lucky escape. There seems to be an attitude with many in that industry that you’ve got to ‘pay your dues’ regardless of relevant experience, and it’s not just programming. I know an excellent database artist/engineer and she couldn’t get a decent job either, similar story. She’d built real-time graphics databases for simulators and had experience these guys badly needed, but because she’d never worked on a game (which up to then had been 2D) they tried to tell her she didn’t have any experience. She went back to working on military simulators, which was a shame because she was a pacifist.

Some graphics jobs are well paid if you are good and I wouldn’t expect the median game dev salary to be the best. Perhaps the standard deviation in game dev salaries is higher than other sectors but I have no direct evidence of this.

As for D3D, I think you should go for it knackered, you’ll never look back.

[This message has been edited by dorbie (edited 05-11-2002).]

Cheers dorbie. It’s not an either or situation, I still have to support gl for sgi’s in my current job, but my renderer also has the ability to create d3d contexts - and most simulators we’re writing are targeted at cheap NT5 boxes.
I understand that d3d just has a nicer way of dealing with extensions - the interfaces to available functionality are standardised - and this has become really important to me. At the moment, my GL renderer will only run on gf2 or above, with some limited support for radeons…and the code is getting messier and messier dealing with special cases (VAR versus VAOs etc.). I was thinking about doing something similar to d3d’s effects for GL, but then I noticed that d3d already has this, so it immediately looked more attractive to me than GL.

Knackered, if your engine is a mess then that’s your fault, not OpenGL’s. A poor craftsman blames his tools.

In my engine, I’m just putting the final touches on something similar to the D3DX “effects” idea that started this thread. It covers all my shading needs, and there’s nothing messy about it.

For the triangle rendering itself, I have a set of renderers that can make use of whatever optimizations are available. I currently support standard immediate mode rendering (for OpenGL 1.0), vertex arrays with glDrawElements(), and of course VAR/fence. Adding ATI VAO support would be trivial, and would not require any existing code to be modified (except for the small routine that decides which renderer is optimal for the current hardware). The renderers are hot-swappable, too. Again, no mess here.

Another example is rendering to a texture. This is a pretty common operation, but there are three different ways of doing it and not all 3D cards agree on which way is the best. I wrote a common interface to all three of them, so I can transparently use whichever one the hardware prefers.

It doesn’t have to be a mess if you put some thought into it.

– Tom

Tom, my engine is not in a mess - just the opengl parts are a mess. I too have a transparent layer to the rest of the app, but underneath the opengl layer is too much vendor specific stuff (not capability specific) - it just doesn’t ‘feel’ right for me, I like things to be neat. The direct3d layer is looking really nice, this is why I’m considering abandoning GL completely (we haven’t had any IRIX projects in a long while).
Originally, the whole engine was GL, but because of limitations with the nvidia drivers, I had to implement d3d for a secondary display. Now I’m developing the d3d layer to be as capable as the GL layer.
I cannot understand the aggro this is causing in this thread…I was originally just pointing out that d3d has a nice standardised shading system already in place, and that maybe GLU should be expanded to do something similar - I then mentioned that d3d is doing a lot of things right in its latest release - and that GL has become unwieldy.

I guess people are just scared that they are not using the best tool and people will laugh at them.

It’s called the screwdriver syndrome:

“What! you mean you are NOT using an electric screwdriver!”

Well, for what it’s worth knackered, I thought it was a fair comment.

I too long for the days of olde, when OpenGL code only had one code path… the real problem is, I am by my own admission too lazy to learn D3D8.x as an interim solution (as that’s all I’d really see it as… I have my hopes pinned pretty firmly on 2.0… I just hope I don’t get let down with a thud).