What can you make?

I disagree - it’s now faster to get a d3d application up and running than opengl, with vertex buffers giving you far more efficient data transfer than glVertex calls - and without the need for VAR, VAO or CVA tomfoolery.
I give up on this debate, it’s getting a little boring - very few of you are giving an objective analysis of the two APIs, it’s all ‘rocks’ and ‘sucks ass’…

Well, your analysis - did it rock, or did it suck ass? lol. But anyway, I know I have a major hard-on for OpenGL, but that’s just me. And right now you can make a good living simply developing for Windows. But Lindows is coming soon. It’s a Linux system that looks, feels, and smells just like Windows (without the aftertaste and stability issues). And when this is released, I think Linux will become a much bigger player in the market, making way for OpenGL to reign. Only time will tell. Oh, by the way (OpenGL rocks, and DirectX sucks ass) - sorry, couldn’t resist.


It’s stupid to put down either API IMO. Each one has its strong points.

But in the end, you have to ask “What matters most”. I think that MS got the ARB boys to move their ass and update GL before it looked like ancient history. At the same time, DX learned a few things from GL and improved.

Let’s keep improving (especially GL).

V-man

Are you comparing d3d vertex buffers with glVertex calls?
If you want to be fair, you must compare them with vertex arrays, which have been part of the standard OpenGL specification since version 1.1. (VAR, VAO & CVA are only extensions of this core functionality.)
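
For reference, the core GL 1.1 vertex array path AdrianD is talking about looks roughly like this - just a sketch, with the Vertex struct, the DrawMesh name and the data arrays made up for illustration:

// Minimal GL 1.1 vertex array sketch - no extensions required.
// 'verts', 'colors' and 'indices' are hypothetical application arrays.
#include <GL/gl.h>

struct Vertex { float x, y, z; };

void DrawMesh(const Vertex* verts, const unsigned char* colors,
              const unsigned short* indices, int indexCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    // Core since GL 1.1: the driver pulls vertex data straight from client memory.
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}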

I’m aware of that, adrian - the point I was answering was one of speed of coding something. Nobody uses glVertex to draw things in GL, everybody uses vertex arrays - my point is:-
zero = coding a GL vertex array - coding a d3d vertex buffer;
But with vanilla d3d vertex buffers you get the optimisations only possible in GL with the VAR/VAO/CVA extensions, for free, with no additional effort. It’s an interface that enables all these optimisations to be controlled by the people writing the drivers, at the expense of a little flexibility.
I agree that there are pluses and minuses for both APIs, but there are people commenting in this thread that do not agree with that - something to do with MS trying to protect their business or something, but nobody criticises nvidia or ati for doing the same thing with their proprietary extensions.
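
For comparison, here’s roughly what the vanilla d3d vertex buffer path looks like - a sketch assuming DirectX 8, with the vertex format and the CreateAndFill/Draw names invented for illustration:

// Minimal D3D8 vertex buffer sketch. The driver decides where the buffer
// actually lives (system/AGP/video memory) - that's the optimisation you
// get "for free" compared to plain GL vertex arrays.
#include <d3d8.h>
#include <string.h>

struct Vertex { float x, y, z; DWORD color; };
#define VERTEX_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

IDirect3DVertexBuffer8* CreateAndFill(IDirect3DDevice8* device,
                                      const Vertex* src, UINT count)
{
    IDirect3DVertexBuffer8* vb = 0;
    if (FAILED(device->CreateVertexBuffer(count * sizeof(Vertex),
                                          D3DUSAGE_WRITEONLY, VERTEX_FVF,
                                          D3DPOOL_DEFAULT, &vb)))
        return 0;

    BYTE* data = 0;
    if (SUCCEEDED(vb->Lock(0, count * sizeof(Vertex), &data, 0)))
    {
        memcpy(data, src, count * sizeof(Vertex));
        vb->Unlock();
    }
    return vb;
}

void Draw(IDirect3DDevice8* device, IDirect3DVertexBuffer8* vb, UINT triCount)
{
    device->SetStreamSource(0, vb, sizeof(Vertex));
    device->SetVertexShader(VERTEX_FVF);   // in D3D8 an FVF code can serve as the "vertex shader"
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, triCount);
}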

I think it’s all about how both APIs add new features:

  • In D3D there is ONE company which decides what will be supported.
  • OpenGL can be extended by any hardware vendor. If these extensions get accepted by the programmers (= are used), they become ARB extensions (= get standardized) - see the sketch below for how an application picks them up.
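
A minimal sketch of how an application might test for such an extension at runtime (the HasExtension name and the extensions mentioned in the usage comment are just examples):

// Sketch: detecting an OpenGL extension at runtime via the extension string.
#include <GL/gl.h>
#include <string.h>

bool HasExtension(const char* name)
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (!ext)
        return false;

    // The extension string is a space-separated list; strstr is good enough
    // for a sketch (a robust check would match whole tokens only).
    return strstr(ext, name) != 0;
}

// Usage - pick the fast path when the vendor (or later ARB) extension exists:
//   bool hasVAR = HasExtension("GL_NV_vertex_array_range");
//   bool hasCVA = HasExtension("GL_EXT_compiled_vertex_array");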

That’s one of the reasons why many people don’t like d3d no matter how good or bad it is, and react so emotionally in this d3d debate: not everyone on this planet believes that m$ should decide which features we can use or not. And there are many ppl that don’t like how m$ tries to control everything… and this is the point where all discussions start to get emotional, no matter what you are arguing.

And I agree with you about DX8. With version 8, DirectX became more useful than in any previous version. I’m just not using it anymore, because I’m bored with learning a new D3D API every year… (and it costs me too much valuable time)

btw.: my first post in this thread was just a joke…

>>But with vanilla d3d vertex buffers you get the optimisations only possible in GL with VAR/VAO/CVA<<

test time
d3d vertex buffers compared to VAR/fence are?
A/ less powerful
B/ equally powerful
C/ more powerful

Sigh… I knew this was going to get into an API war of some kind. Come on guys, what the hell? Surely there are better and more productive things to talk about.

-SirKnight

I completely agree with AdrianD.

test time
d3d vertex buffers compared to VAR/fence are?
A/ less powerful
B/ equally powerful
C/ more powerful

The point is, VAR/fence is only on NV cards. With D3D vertex buffers, you get similar performance on all cards that support the functionality, through one single interface. To do the same in OpenGL, you need to implement VAR on nv cards, and ATI’s equivalent on ATI cards, etc etc…
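
Roughly what that branching looks like on the GL side - a sketch only, reusing the hypothetical HasExtension helper from earlier in the thread; the enum and the function name are invented, and the vendor-specific setup is only referenced in comments:

// Sketch: an engine picking one of several fast vertex-submission paths.
enum VertexPath { PATH_PLAIN_ARRAYS, PATH_NV_VAR, PATH_ATI_VAO };

VertexPath ChooseVertexPath()
{
    if (HasExtension("GL_NV_vertex_array_range"))
        return PATH_NV_VAR;     // wglAllocateMemoryNV + glVertexArrayRangeNV path
    if (HasExtension("GL_ATI_vertex_array_object"))
        return PATH_ATI_VAO;    // glNewObjectBufferATI / glArrayObjectATI path
    return PATH_PLAIN_ARRAYS;   // fall back to core GL 1.1 vertex arrays
}

// Each path then needs its own allocation, fill and draw code - that's the
// extra integration work being discussed; a D3D vertex buffer is one codepath.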

Although the results might be better, as it’s more native to the hardware, it complicates integrating different solutions into a single engine and takes more time.

Although the time taken to implement vendor-specific extensions is pretty trivial compared to the total time of a project.

I still prefer OpenGL, for all the reasons AdrianD stated, and others.

Nutty

Originally posted by SirKnight:
Sigh…I knew this was going to get into a api war of somekind.

Yup, those flamewars are quite boring. Especially when you know Glide rocks and GL and D3D suck ass… )

Julien.

Originally posted by zed:
test time
d3d vertex buffers compared to VAR/fence are?
A/ less powerful
B/ equally powerful
C/ more powerful

The answer is C, more powerful.

From the vertex data aspect, they are equally powerful, that is, assuming the nvidia D3D driver writers did their job correctly and you lock the vertex buffers with the appropriate combination of the NOOVERWRITE and DISCARD flags.

From the index buffer standpoint, they are better in that your index buffers can be stored in vidmem also, which could potentially increase performance (although I can’t recall for sure if the GeForce cards actually support this feature).

On the plus side, there is one simple mechanism for doing this all.
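
For anyone who hasn’t used it, the NOOVERWRITE/DISCARD locking pattern mentioned above goes roughly like this - a D3D8 sketch with invented names, assuming the buffer was created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY in D3DPOOL_DEFAULT:

// Sketch: appending dynamic vertices with D3DLOCK_NOOVERWRITE until the
// buffer fills up, then starting over with D3DLOCK_DISCARD so the driver
// can hand back a fresh buffer without stalling the card.
#include <d3d8.h>
#include <string.h>

struct DynamicVB
{
    IDirect3DVertexBuffer8* vb;
    UINT capacity;   // buffer size, in vertices
    UINT cursor;     // next free vertex
    UINT stride;     // bytes per vertex
};

// Returns the index of the first vertex written, for use with DrawPrimitive.
UINT AppendVertices(DynamicVB& dyn, const void* src, UINT count)
{
    DWORD flags = D3DLOCK_NOOVERWRITE;
    if (dyn.cursor + count > dyn.capacity)
    {
        // Buffer full: discard it and start writing from the beginning.
        flags = D3DLOCK_DISCARD;
        dyn.cursor = 0;
    }

    BYTE* data = 0;
    if (FAILED(dyn.vb->Lock(dyn.cursor * dyn.stride, count * dyn.stride,
                            &data, flags)))
        return 0;

    memcpy(data, src, count * dyn.stride);
    dyn.vb->Unlock();

    UINT first = dyn.cursor;
    dyn.cursor += count;
    return first;
}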

Potentially the most Bollox thread I’ve read in ages …

Come on - this is meant to be an OpenGL forum, at least in a loose sense.

Well don’t read it shag - do you have to read every single reply to every single thread?

Originally posted by deepmind:
Yup, those flamewars are quite boring. Especially when you know Glide rocks and GL and D3D suck ass… )

Julien.
Gawd, Glide is flexible …

You’d need at least ATi’s fragment shaders to do all of that stuff pixel perfect and single-pass.
