Busted!

Let me show you here how the case for Direct3D 10 falls apart next to OpenGL.

First, when I asked not too long ago how to do geometry instancing in OpenGL, the answer from the experts was simply that the feature is only needed in Direct3D, because D3D carries considerable overhead per rendering call, which is not the case in OpenGL. So instancing is not really necessary in GL; it mainly helps D3D performance.
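For what it's worth, on the GL side instanced drawing boils down to a single call once the entry point is available. A minimal sketch, assuming a loader such as GLEW and that the mesh's buffers and shader program are already bound:

```c
#include <GL/glew.h>   /* or any loader exposing the instancing entry point */

/* Sketch: draw 1000 instances of an indexed mesh in one call.
 * glDrawElementsInstanced is core in GL 3.1; on GL 2.x hardware the same
 * call is exposed as glDrawElementsInstancedEXT via GL_EXT_draw_instanced. */
static void draw_instanced_mesh(GLsizei index_count)
{
    /* The vertex shader reads gl_InstanceID (0..999) to pick a
     * per-instance transform from a uniform array or a texture. */
    glDrawElementsInstanced(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0, 1000);
}
```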

Second, geometry shaders are accessible via OpenGL extensions, and the IHVs whose hardware supports them can expose them perfectly well that way. Not to mention that the feature is still under development; let's say it is not yet standardized, or even 100% settled.
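For context, here is roughly what that exposure looks like on the application side. This is only a sketch, assuming GL_EXT_geometry_shader4, an already-created program object, and no error checking:

```c
#include <GL/glew.h>

/* Sketch: attach a geometry shader to an existing program using the
 * GL_EXT_geometry_shader4 entry points and enums.  'gs_source' is
 * whatever GLSL geometry shader you want to run. */
static void attach_geometry_shader(GLuint program, const char *gs_source)
{
    GLuint gs = glCreateShader(GL_GEOMETRY_SHADER_EXT);
    glShaderSource(gs, 1, &gs_source, NULL);
    glCompileShader(gs);
    glAttachShader(program, gs);

    /* With the EXT extension, the input/output primitive types and the
     * maximum emitted vertex count are program parameters set before
     * linking (later core GL moves these into layout qualifiers in the
     * shader itself). */
    glProgramParameteriEXT(program, GL_GEOMETRY_INPUT_TYPE_EXT, GL_POINTS);
    glProgramParameteriEXT(program, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_LINE_STRIP);
    glProgramParameteriEXT(program, GL_GEOMETRY_VERTICES_OUT_EXT, 2);

    glLinkProgram(program);
}
```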

These two features are now at the heart of the claims that D3D is more successful and more powerful than OpenGL 2.1. Now think about it: if you defended GL with those arguments, how can you now defend D3D with the opposite ones, namely that instancing is needed and that only D3D10 supports geometry shaders?

Thanks!

I have found instancing to be quite elegant in GL, and I have not had issues with D3D10 draw call overhead in my programs either.

I have found a couple of uses for the geometry shader, such as computing simple normals for data that would otherwise require processing on the CPU, and emitting lines from points. I like that I can access it at all from GL with an extension.
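As an illustration of the lines-from-points case, a minimal EXT-style geometry shader might look like the sketch below; the offset direction and length are arbitrary placeholders, and it pairs with the GL_POINTS / GL_LINE_STRIP / two-vertices-out program parameters shown earlier.

```c
/* Sketch: an EXT_geometry_shader4-style geometry shader that turns each
 * incoming point into a short line segment. */
static const char *point_to_line_gs =
    "#version 120\n"
    "#extension GL_EXT_geometry_shader4 : enable\n"
    "void main()\n"
    "{\n"
    "    gl_Position = gl_PositionIn[0];\n"                            /* start of the line */
    "    EmitVertex();\n"
    "    gl_Position = gl_PositionIn[0] + vec4(0.0, 0.1, 0.0, 0.0);\n" /* arbitrary end offset */
    "    EmitVertex();\n"
    "    EndPrimitive();\n"
    "}\n";
```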

Personally, I am looking forward to a clean GL 3.1, and D3D11’s tessellation and compute shaders. Perhaps it’s possible to like both APIs for what they are?

I am looking forward to a clean GL 3.1

Keep looking; they didn’t deprecate nearly enough stuff to make 3.1 clean.

D3D11’s tessellation and compute shaders

You can forget about “compute” shaders. Use OpenCL. OpenGL is a graphics API and will always remain so.

When I mentioned compute shaders, I was actually referring to D3D11 itself. I’m emphasizing that it really is OK to like both APIs.

I am looking forward to OpenCL as well, and I’m glad to see that they’re working hard to get it out. With OpenCL and compute shaders, we will have true cross-vendor methods for computation on the GPU.

Speaking of OpenCL, I am psyched at how it can target parallel architectures besides the GPU. :smiley:
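For anyone curious what that looks like in practice, the target architecture is just a flag at device-enumeration time. A minimal sketch, querying only the first platform and skipping error checking:

```c
#include <stdio.h>
#include <CL/cl.h>

/* Sketch: ask the first OpenCL platform for a CPU device instead of a GPU.
 * The same kernels can then run on either device type. */
int main(void)
{
    cl_platform_id platform;
    cl_device_id   device;
    char           name[256];

    clGetPlatformIDs(1, &platform, NULL);
    /* Swap CL_DEVICE_TYPE_CPU for CL_DEVICE_TYPE_GPU (or _ALL) as needed. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);

    printf("Running on: %s\n", name);
    return 0;
}
```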

What are everyone’s plans for this upcoming GPGPUness?