OpenGL 3 announced

Yes, the GF8 was out before DX10 was released to the public, but the API was designed well in advance, and MS basically said ‘you will expose this, this and this to be DX10 compliant’; the hardware had to obey or risk problems (note the initial DX9 spec, which favoured ATI more than NV and gave birth to the R300).

So, I would say the base feature set is very much driven by MS; anything outside of DX10 simply won’t be worth including on silicon (see the R600’s programmable tessellator, which, while great and all, is practically a waste of silicon as no one can use it).

Originally posted by jkolb:
Sorry, to clarify: what besides being a cross-platform architecture would be an incentive for someone to choose ogl over d3d? Will ogl3 feature hardware-accelerated capabilities which d3d9/10 do not offer?
I think one should ask this question the other way around. What possible reason would there be for choosing D3D9/10 over OpenGL 2.x/3.0, given that the latter exposes equal or greater hardware functionality, runs on all Windows platforms, and on all other major desktop/workstation platforms?

The portability issue isn’t just a nicety: if you are serious about graphics quality and performance, then the rest of the platform is important - how well does the OS handle multiple processors or multiple graphics cards? How well does the file system perform? The bottom line is that you really should be choosing the platform that provides the best overall capabilities for real-time graphics.

There are certainly better alternatives to Windows for multi-threaded, multi-GPU work and file-system-intensive applications (think database paging); neither Windows XP nor Vista is a real contender in this arena, so you are shooting yourself in the foot by choosing an API that only runs under Windows.

Robert.

Originally posted by bobvodka:
Yes, the GF8 was out before DX10 was released to the public, but the API was designed well in advance and MS basically said ‘you will expose this, this and this to be DX10 compliant’.
It’s the other way around, I think: MS has some knowledge of what future generations of hardware can do and what is currently planned, and they write the specs accordingly.
Geometry shaders are certainly one of those things; I know that long before DX10 ATI had something cooking with R2VB, and nvidia probably had similar plans. It was just one of those naturally evolving things (decided in a meeting held at an unusually long table within a mountaintop fortress in the Swiss Alps).

Often there are things that DX simply misses; there are certainly things the G80 can do that are exposed in OpenGL but not in DX (not that I can recall any at this moment).

So no, Microsoft does not exactly drive the hardware; it just kind of evolves in the direction graphics demands.

note the initial DX9 spec, which favoured ATI more than NV and gave birth to the R300
You’re looking at it from the wrong direction. ATi was 6 months ahead of nVidia with R300. So Microsoft either had to expose R300 pretty much as it was, or nobody could use ATi’s hardware except GL programmers.

The same is true, to a degree, of DX10. Microsoft probably asked nVidia, “So, this G80 thing… what’s it going to do?” and then made an API for it.

if you are serious about graphics quality and performance then the rest of the platform is important - how well does the OS handle multiple processors, or multiple graphics cards, how well does the file system perform?
If you’re programming for yourself, or someone whom you expect to purchase whatever hardware and OS you tell them to, sure.

This, however, is something that very few people can do. Dictating hardware and OS is not something that most people using graphics APIs can actually accomplish.

To me, the main thing is crossing the DX9/Vista gap without having to code to a new API.

The extension mechanism has given us GL programmers first dibs on many new hardware features over the years, leaving D3d to play catch-up. Even now, all dx10 features are available as vendor-specific GL extensions, plus some that simply don’t exist in dx10. It’s funny that someone’s under the impression it’s the other way round.
Also, you’ve got to remember that d3d is much slower for scenes of great complexity, such as engineering scene graphs full of DCSs (dynamic coordinate system nodes). To get reasonable performance from d3d, all your data has to be static and/or dramatically pre-processed into the most batch-optimal layout. In my business, that makes d3d a non-starter. I used to have two renderer implementations, GL and d3d9, but the d3d one performed so poorly with the same data as the GL implementation that it was never used, and I stopped maintaining it.

On the glslang in/out issue:

There is one case to be made for it: Geometry shaders.

When a geometry shader is bound, the semantic concept of “varying” is wrong for vertex shader outputs: the value is no longer going to vary (be interpolated) before the next shader in the pipeline sees it. Semantically, the better terms would be “output” from the vertex shader and “input” to the geometry shader.
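As a minimal sketch (the plain in/out keywords here are my assumption about what GL3’s glslang might adopt, not anything from a published spec), written as C source strings you could hand to glShaderSource():

  /* Hypothetical GL3-style shader sources, assuming plain in/out qualifiers. */
  static const char *vertex_src =
      "out vec3 v_normal;\n"              /* written by the vertex stage...  */
      "void main() { /* write gl_Position and v_normal */ }\n";

  static const char *geometry_src =
      "in vec3 v_normal[];\n"             /* ...read by the geometry stage,  */
      "void main() { /* one array element per input vertex */ }\n";

Nothing about v_normal “varies” between those two stages; one stage writes it and the next reads it.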

I’d be willing to bet that ATI is holding off on releasing a tessellator extension until they can write it against GL3. R600 was released in the spring, so why would they write an extension that would be out of date in less than six months? I expect we’ll see the tessellator before we see '08.

Originally posted by Korval:
Stated otherwise, anything you can do in 3.0 you could do in 2.1.

Originally posted by Lindley:
You can do render-to-VBO directly in 2.1?

Yes, this can be accomplished via the NV_transform_feedback extension if your card supports it (e.g., NVIDIA’s 8800).
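For the curious, a rough sketch of the NV_transform_feedback flow (entry points as named in the NV extension spec; the varying-selection step and all error handling are glossed over, and program, capture_vbo and vertex_count are assumed to exist already, so treat this as an outline rather than working code):

  /* Capture vertex shader results directly into a buffer object ("render to VBO"). */
  GLint pos_loc = glGetVaryingLocationNV(program, "gl_Position");
  glTransformFeedbackVaryingsNV(program, 1, &pos_loc, GL_INTERLEAVED_ATTRIBS_NV);

  glBindBufferBaseNV(GL_TRANSFORM_FEEDBACK_BUFFER_NV, 0, capture_vbo);
  glEnable(GL_RASTERIZER_DISCARD_NV);         /* skip rasterization, keep only the capture */
  glBeginTransformFeedbackNV(GL_POINTS);
  glDrawArrays(GL_POINTS, 0, vertex_count);   /* transformed vertices land in capture_vbo  */
  glEndTransformFeedbackNV();
  glDisable(GL_RASTERIZER_DISCARD_NV);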

-Seth

I don’t think the hw is driven by DX.

If MS dictates features, I can imagine vendors complaining: “But it would cost too much.” “But it would be too slow to be usable.”

It’s necessary to design hardware simulators, see how the design would behave, and estimate costs. Then doing an API is the easy part.

Originally posted by Robert Osfield:
What possible reason would there be for choosing D3D9/10 over OpenGL 2.x/3.0, given that the latter exposes equal or greater hardware functionality, runs on all Windows platforms, and on all other major desktop/workstation platforms?

For example, better development tools (e.g. PIX, NVIDIA PerfHUD, ATI PerfStudio & Shader Analyzer), or better quality drivers from “smaller” IHVs.

One problem with DX9 on XP vs. DX10 on Vista was the number of draw calls per second, since the driver is implemented in user space on Vista (no such limitation) vs. kernel space on XP (the CPU was stressed by even a low number of draw calls, forcing programmers to batch geometry very aggressively).

So, I was wondering whether OpenGL performance will be able to stay within the same order of magnitude on XP. While it seems a nifty feature to have XP support for DX10-class features, it would be really bad if, in the end, the driver architecture limited it (that’s why Microsoft was reluctant to port D3D10 to XP, I think).

However, I think it’s not a problem, since OpenGL batches commands and issues them infrequently in order to reduce context switches - but could someone confirm this?

However, I think it’s not a problem, since OpenGL batches commands and issues them infrequently in order to reduce context switches - but could someone confirm this?
GL runs in user space, and always has on Windows since Win95.
That doesn’t mean you shouldn’t care at all: keeping the number of GL calls and state changes low is always a good idea.

Using one context is best; that’s what GPUs were designed for. It’s the same issue with D3D: one context is best.
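A trivial illustration of what “fewer calls” buys you (first, count and num_meshes are placeholders, and this assumes the meshes share the same buffers, program and state):

  /* Many small draws: one call into the driver per mesh. */
  for (int i = 0; i < num_meshes; ++i)
      glDrawArrays(GL_TRIANGLES, first[i], count[i]);

  /* Same work submitted in a single call (plain GL 1.4). */
  glMultiDrawArrays(GL_TRIANGLES, first, count, num_meshes);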

I believe xen2 meant ‘context’ as in ‘user-to-kernel mode switch’, which is less a GPU problem and more a CPU/kernel ‘problem’.

From the end of the BOF.pdf presentation:

OpenGL Longs Peak Reloaded

  • might contain:
    • Attribute index offsetting
    • Compiled shader readback for caching
    • CopySubBuffer to copy data between buffer objects
    • Frequency dividers
    • Display-list-like functionality
  • 2-3 months after OpenGL 3

What’s attribute index offsetting?

How mighty is the emphasis on “might”? ;)

P.S. Reloaded: A peak enshrouded in clouds and mystery, located somewhere between Longs and Evans.

What’s attribute index offsetting?
It’s that thing that everybody’s been asking for but the ARB has been somewhat reluctant to implement for reasons that they have yet to explain. It allows you to apply an offset to each attribute index when using a DrawElements call.
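A sketch of why it keeps coming up (the offset-taking entry point is purely hypothetical, and rebased_indices, mesh_indices and base_vertex are placeholders):

  /* Several meshes packed into one shared VBO. Without index offsetting, each
   * mesh's indices must be rewritten on the CPU to point at wherever its
   * vertices landed in the shared buffer. */
  for (GLsizei i = 0; i < index_count; ++i)
      rebased_indices[i] = mesh_indices[i] + base_vertex;
  glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, rebased_indices);

  /* With attribute index offsetting, the index data stays untouched and the
   * offset is applied at draw time (hypothetical signature):
   *   glDrawElementsWithOffset(GL_TRIANGLES, index_count, GL_UNSIGNED_INT,
   *                            mesh_indices, base_vertex);                  */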

I may have missed this, or it may be blazingly obvious and I’ve just overlooked it…

But the transformation (matrix) stack that OpenGL implements - is this considered a legacy feature, and will it be removed from OpenGL LP/ME?

If it is to remain (which I’m assuming it won’t), how would this be exposed with the current object model?

There is no matrix stack in Longs Peak (GL3).

Originally posted by Smokey:
If it is to remain (which I’m assuming it won’t), how would this be exposed with the current object model?
The matrix stack has nothing to do with the object model.

You can simply implement your own matrix stack, which I think should be very easy…
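For example, a bare-bones stack like the one below (names and depth are arbitrary) already covers the push/pop part; the multiplies are a handful of extra lines of plain C:

  #include <string.h>   /* memcpy */

  #define STACK_DEPTH 32
  static float stack[STACK_DEPTH][16];   /* column-major 4x4 matrices */
  static int   top = 0;

  void push_matrix(void)      { memcpy(stack[top + 1], stack[top], sizeof stack[0]); ++top; }
  void pop_matrix(void)       { --top; }
  float *current_matrix(void) { return stack[top]; }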

If it is to remain (which I’m assuming it won’t), how would this be exposed with the current object model?
This is not a separate feature in the new object model; the matrices are just ordinary uniform variables in the shader.
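In other words, instead of glLoadMatrixf you end up with something along these lines (the uniform name “mvp” and the surrounding variables are made up for the example):

  /* Compute the modelview-projection matrix yourself, then hand it to the
   * shader as an ordinary uniform. */
  GLint mvp_loc = glGetUniformLocation(program, "mvp");
  glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, mvp_matrix);   /* 16 column-major floats */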

Maintains backward compatibility within the shader code while neatly shifting the responsibility to the developer… nice touch, and it’s one less thing for the driver guys to worry about.