GL Future?

Don’t worry, Glfreak, I’ve nothing but love and respect for you.

Anyway, on the brighter side of life…

  • GL3 headers have been updated in the registry
  • OpenGL 3.1 drivers have been released
  • OpenCL drivers have been released

And on the Linux front things have never looked better

  • Driver installation is a snap (with F10 anyway)
  • Creating a GL3 context with X is easy! (see the sketch below)
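For the curious, a minimal sketch of the GLX path (assuming GLX_ARB_create_context is advertised and `fbconfig` was picked earlier with glXChooseFBConfig; the typedef and tokens come from the registry’s glxext.h):

```c
#include <GL/glx.h>
#include <GL/glxext.h>

GLXContext create_gl3_context(Display *dpy, GLXFBConfig fbconfig)
{
    /* Fetch the entry point by name; it only exists if the driver
     * exports GLX_ARB_create_context. */
    PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
            (const GLubyte *)"glXCreateContextAttribsARB");

    const int attribs[] = {
        GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
        GLX_CONTEXT_MINOR_VERSION_ARB, 1,
        None
    };

    /* Returns NULL (or raises an X error) if the driver can't do 3.1. */
    return glXCreateContextAttribsARB(dpy, fbconfig, NULL, True, attribs);
}
```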

Heck it’s high time to throw a few shrimp on the barbie and pop open a cold brewski…

Which IHVs have released a full implementation of GL 3.1 so far, other than NVIDIA?

If ripping out 75% of the 2.1 features and tagging them as deprecated makes GL a streamlined API whose driver implementations are a lot easier and more reliable, where are the other IHVs on that?

I’m not trolling, Khronos :), it’s just that you fucked up the GL spec, and you still don’t have core GS, for God’s sake.

:smiley:

If ripping out 75% of the 2.1 features and tagging them as deprecated makes GL a streamlined API whose driver implementations are a lot easier and more reliable

Once the ARB abandoned Longs Peak, they also abandoned making GL implementations more reliable via the specification. You are expecting results that the ARB no longer intended to provide.

That’s interesting to know.

Again, I’m not trolling as some may say; I’m just trying to understand what’s going on, why they abandoned Longs Peak (or whatever they called it) and came up with an awkward spec that focuses on deprecation, with the excuse that implementations will be easier and more reliable, while we still have no GL 3.1 except NVIDIA’s, which has some bugs, as some have reported.

And now someone please tell me why on earth version 3.1 does not include geometry shaders as core functionality?

And now someone please tell me why on earth version 3.1 does not include geometry shaders as core functionality?

I’ll wager GS (or its equivalent) makes its core debut in 3.2.

Then maybe that’s just wishful thinking…

As I understand it, GS is still an uncertain feature that has yet to make it into core functionality?

If not, why?

thanks.

Do any of the IHVs like Geometry Shaders? Maybe they intend to remove Geometry Shaders in the future. OpenGL does not need yet another core feature that is not future-compatible.

I’m more curious to know why IHVs don’t like geometry shaders. And why did they create them in the first place?

Some find GS rarely useful for what they do. Others prefer to stick to DX9-class hardware. Others didn’t like the geometry-synthesis performance of the first cards with GS.
With time, this can change.

Btw, what’s up with depth clamp?

I was just dotting my i’s and crossing my t’s in a little 3.1 state preparation extravaganza and I noticed it’s not core yet. Could have sworn it was.

What gives?
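Meanwhile I’m falling back on the extension; a minimal sketch of the stopgap (the token still comes from glext.h rather than the core header):

```c
#include <GL/gl.h>
#include <GL/glext.h>

void enable_depth_clamp(void)
{
    /* Fragments beyond the near/far planes get a clamped depth value
     * instead of being clipped away, e.g. for shadow volume caps. */
    glEnable(GL_DEPTH_CLAMP_NV);
}
```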

:slight_smile:

And why did they create them in the first place?

ATI once created an extension specifically to do vertex weighting. NVIDIA once created a different extension to do a much more limited form of vertex weighting. Both of these were made obsolete by vertex shaders.

Why did they create these extensions, and the hardware behind them? Because it sounded like a good idea at the time.

Geometry shaders are a perfect extension. They expose hardware-based features. But when the day comes that there is a better way to get at similar features (tessellation shaders, say, much as pre-T&L paths gave way to post-T&L ones), the extension can simply be dropped. If it were core, it could not easily be dropped without going through the deprecation process.

Core features should be things that the ARB is certain about being future-proof.

A new generation of GPUs needed a new feature to give folks an incentive to upgrade? Problem: first-gen performance. If developers don’t code for it, users aren’t compelled to upgrade to get it.

Wonder if it’ll go the way of the dodo once the tessellator-capable GPUs take hold (starting ~4Q09-1Q10). That’s gonna be cool. Good precedent with consoles.

I think GS has a niche with point sprites, no? There are probably a few other techniques where it’s the preferred approach… A lot of developers were just expecting something different from what it really turned out to be.

Another technique that GS can boost is edge-related stuff, generally in NPR rendering.
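Point sprites, for instance, come out to just a few lines with today’s EXT_geometry_shader4; a rough sketch (`halfSize` is my placeholder, and the host-side primitive-type setup via glProgramParameteriEXT isn’t shown): each incoming point is expanded into a screen-aligned quad in clip space.

```c
/* Geometry shader source, embedded as a C string. */
const char *gs_src =
    "#version 120\n"
    "#extension GL_EXT_geometry_shader4 : enable\n"
    "uniform float halfSize;          // sprite half-extent in clip units\n"
    "varying out vec2 texCoord;\n"
    "void main() {\n"
    "    vec4 p = gl_PositionIn[0];\n"
    "    texCoord = vec2(0,0); gl_Position = p + vec4(-halfSize,-halfSize,0,0); EmitVertex();\n"
    "    texCoord = vec2(1,0); gl_Position = p + vec4( halfSize,-halfSize,0,0); EmitVertex();\n"
    "    texCoord = vec2(0,1); gl_Position = p + vec4(-halfSize, halfSize,0,0); EmitVertex();\n"
    "    texCoord = vec2(1,1); gl_Position = p + vec4( halfSize, halfSize,0,0); EmitVertex();\n"
    "    EndPrimitive();\n"
    "}\n";
```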

I think it will make it into core someday. If it’s replaced, it will probably be by a more generalized way of using shaders, perhaps a fully programmable pipeline or something.

It is useful though for point sprites, impostors, fins, leaves, vegetation, volumetric->vertex conversion, shadows and a bunch of other stuff we haven’t even thought of.

I think the reason it’s delayed is that they have to see what a possible future tessellation shader will look like, since the two might interact a bit.

Let’s not forget about render-to-cubemap, stencil shadow volume generation and of course the wonderful lines you can draw with a GS :slight_smile:

If future GL tessellation works the way it will in D3D11, it’ll be an addition to, not a replacement for, the geometry stage. The tessellator itself is actually still fixed-function in D3D11, sandwiched between two new programmable stages (hull and domain) responsible for transforming control points and evaluating patch surfaces. I don’t see this as a geometric amplification/decimation/manipulation/synthesis panacea per se, but it is mighty cool.

Of course there’s always compute interop and the many-core storm that’s brewing over the horizon…

Hmmm, so the ARB doesn’t see it as a worthy feature right now?

Or do they think it will be dropped by hardware in the future? Then it’s not a problem to add it to the core API now, since they’ve got the deprecation mechanism :wink:

Besides, I asked about geometry instancing a long time ago, and why it wasn’t part of core GL when Direct3D already had it, and the answer was emphatic that GL doesn’t need this feature, only D3D does, because the latter has draw-call overhead. And now we see it as part of the latest core GL spec. You may say that the ARB later realized its potential and decided to add it to the core. I’d agree, except that argument doesn’t hold anymore now that they have the deprecation model :slight_smile:
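For what it’s worth, here’s the now-core version of that feature, a minimal sketch (`vao`, `vertexCount`, and `instanceCount` are placeholder names; assumes headers or a loader exposing the 3.1 entry points):

```c
#include <GL3/gl3.h>   /* the updated GL3 header from the registry */

void draw_instanced(GLuint vao, GLsizei vertexCount, GLsizei instanceCount)
{
    glBindVertexArray(vao);
    /* One call draws instanceCount copies of the mesh; the vertex shader
     * distinguishes them via gl_InstanceID (core in 3.1, formerly
     * ARB_draw_instanced / EXT_draw_instanced). */
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);
}
```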

Don’t get me wrong, dudes, I’m just trying to reason things out…
GL has been my favorite API and still is, even though I had to switch to D3D because of some hardware’s lack of proper support. However, when one of the biggest companies in the CAD industry starts thinking about an alternative… I feel frustrated.

> Hmmm, so the ARB doesn’t see it as a worthy feature right now?

Right now it’s anybody’s guess what the ARB thinks. However, since ATI has seen fit to include GS among the new features of its 9.6b drivers, there’s a fair-to-middling chance that it’ll make it into the core in 3.2.

Lately I’m again wondering what’s on the plate for GLSL in the coming days. Cg has interfaces and DX11 has added classes, obvious portents of shading languages becoming a bit more object-oriented. Seems to me there are some welcome perf implications here in addition to a significant convenience, stuff beyond the mere syntactic sugar one might be tempted to thumb one’s nose at.

What are the “welcome” performance implications of object-oriented languages? Don’t they perform at the same speed as, or slower than, other languages?

The goodness with the object oriented approach is (at least) twofold.

  1. Combat the combinatorial explosion created by lots of e.g. light/material combos automagically. This is the “convenience” bit, which is considerable.

  2. An efficient linker can inline and optimize interface code (at the source level) on the fly. This is the perf bit (see the sketch below).
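To make (1) concrete, here’s roughly what Cg’s interfaces buy you today, as a minimal sketch (Light, PointLight, and illuminate are names I made up), written as source text the way you’d hand it to the Cg runtime:

```c
const char *cg_src =
    "// One abstract contract that many light types can implement:\n"
    "interface Light {\n"
    "    float3 illuminate(float3 P, float3 N);\n"
    "};\n"
    "\n"
    "struct PointLight : Light {\n"
    "    float3 pos, color;\n"
    "    float3 illuminate(float3 P, float3 N) {\n"
    "        float3 L = normalize(pos - P);\n"
    "        return color * max(dot(N, L), 0.0);\n"
    "    }\n"
    "};\n"
    "\n"
    "// The surface shader is written once against the interface; the\n"
    "// runtime binds a concrete struct per material.\n"
    "float4 main(float3 P : TEXCOORD0, float3 N : TEXCOORD1,\n"
    "            uniform Light light) : COLOR\n"
    "{\n"
    "    return float4(light.illuminate(P, normalize(N)), 1.0);\n"
    "}\n";
```

One surface shader, N light structs, and the compiler can specialize each pairing at bind time instead of someone maintaining N×M hand-written variants.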

Now couple these with true multithreaded operation (read: compilation) and you’ve got yourself a real barn burner IMHO.

Convenience aside, the underlying assumption here is that it’s still better perf-wise to special-case code where possible, despite recent improvements in the handling of ubershaders.