OpenGL#

Hi

I want to comment on the recent presentations from NVIDIA about the forthcoming OpenGL Longs Peak. Frankly, that sucks. Instead of an OpenGL 2.5 or 3.0 that integrates more awesome plugins from the giant IHVs, we are getting a more complicated set of API calls, far away from the traditional OpenGL we are used to. Alas. They should call it OpenGL#, like what MS did to C by creating its own ugly monster version of the standard programming language of electronic thinking machines. Anyway, let’s focus on a real OpenGL SDK and on integrating the very useful plugins, rather than imitating DirectX 10, which I guess is another “#” thing, so geeks will have no choice but to be forced to use the high-level stuff.

Thanks.

Do you have any better arguments besides “I don’t like it”?

What do you mean by “plugins”? I’ve never heard of such a thing in the context of OpenGL.

And what do you mean by “more complicated”? From what I’ve seen, the new API will be much simpler than the current one, where we have many things that are miles away from how the hardware actually works.

Better inform yourself, and try to contribute something constructive, instead of just shouting “that sucks” as loud as possible!

The only thing I’m trying to say is that the way OpenGL is approaching the new hardware demands may not be the best approach. It could be much better if we could forget about competition with other APIs, or rather with the only competitor. Now both APIs are converging to one point which may seem simple to novices, learners, hobbyists and students, but it’s not what the professional wants. Based on my deep experience in the domain I can judge the new path. Believe me on that: new hardware does not need more than what current OpenGL has to offer. Direct3D 10 has nothing more than geometry shaders, plus an extra abstraction/high-level layer, which is redundant. Changing function names and the way they are called has nothing to do with how the hardware works. The only thing that helps now is the plugins. Yeah, I wanted to trick you with that, and if you could not figure out what “plugins” means, then I will tell you: it’s the extension thing. You could only ask “what do you mean?” and could imagine me shouting “that sucks.” Please read more about the hardware, and how things really work. Here’s a good starting book:

Real-Time Rendering, 2nd Ed.

The new hardware only offers more memory, more GPU horsepower, more pixels per second, more triangles per second, and IHV extensions, plus shaders. The only thing that may affect how the hardware works lies in the shader language itself.

Good luck!

They should call it OpenGL#, like what MS did to C by creating its own ugly monster version of the standard programming language of electronic thinking machines.
Not to get too far off-topic here, but C and C# are programming languages. OpenGL is an API. Changing an API is not tantamount to changing a programming language.

Furthermore, calling C# an “ugly monster” is just flat-out ignorant. You may not like the .NET CLR, but characterizing C# as such simply betrays a lack of understanding of its purpose and what it can do.

Instead of an OpenGL 2.5 or 3.0 that integrates more awesome plugins from the giant IHVs,
Such as?

Please, enlighten us as to what would be a “proper” next iteration of OpenGL, and what these “awesome plugins” would be.

Also take into consideration the following very important design goals:

1: Ease of API implementation. (Note: GL 2.1 is a hideous beast to implement, so suggesting extensions to it is a non-starter for this goal.)
2: Keeping the user on the fast path. (Once again, GL 2.1 sucks badly at this. It has numerous ways of doing the same thing, many of them incredibly non-performant.)
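To make the “fast path” point concrete, here is a sketch of two ways valid GL 2.1 code can submit the same triangle. This is a fragment, not a standalone program: it assumes an active OpenGL 2.1 context and `<GL/gl.h>` already set up.

```c
/* Slow path: immediate mode. The driver must marshal one
 * function call per vertex, every frame. */
glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();

/* Fast path: the same triangle from a vertex buffer object,
 * uploaded to the GPU once and drawn with a single call. */
static const GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
```

Both versions are legal GL 2.1, which is exactly the problem: the API itself gives no hint about which path the hardware actually likes.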

Now both APIs are converging to one point which may seem simple to novices, learners, hobbyists and students, but it’s not what the professional wants.
Um, what? This statement makes absolutely no sense.

Longs Peak is moving farther away from what the people you describe want, and much closer to what actual professional graphics developers want.

OK then, good feedback from you all. So OpenGL 2.1 is a big beast that freaks out both implementors and developers. But does it freak out the 3D hardware? Here’s the question. Those APIs should not be targeted at average programmers, or at consumers… they are for the 3D brains and geeks. Sorry, but that’s the truth. OK, let me give an example: Java. It’s an implementation in the end, whether you consider it a language or an API. The main goal was to create a portable, simpler C++ that could be used by most programmers. It’s very successful and it’s a standard. I appreciate it myself, and would love to see it accelerated on the hardware. Never mind. The point is: does it perform? It “keeps the user on the fast path,” it’s implemented by only one developer, and it’s now open sourced. However, it serves its job as a non-time-critical application language mainly used for web things. Well, well: what about an API whose only reason for existing is to ACCELERATE time-critical applications?

I hope you all got my point. And instead of attacking my argument, you can suggest what can be done to enhance OpenGL 2.1 and leverage it into a 2.5. I can add some suggestions like:

1 - Buffer swap control
2 - More queries of hardware caps
3 - Integrate some existing extensions which are considered cool.

Stay away from the unified object model; it’s more beautiful but less flexible, and remember that every feature requires its own object model, both performance- and implementation-wise.

Enjoy!

Those APIs should not be targeted at average programmers, or at consumers…
Once again, you bring out this “average programmer” nonsense, as though it were a valid point or something.

Show me where Longs Peak appeals to the “average programmer” over the “3d brains and geeks.” More importantly, show me where Longs Peak actually inhibits the “3d brains and geeks” from doing their job.

Well, well: what about an API whose only reason for existing is to ACCELERATE time-critical applications?
What good is accelerating something if the implementation of it is total crap?

Like, say, ATi’s current implementation. It got so bad that they had to rewrite the whole thing.

The better your implementations, the more people are actually willing to trust the API enough to use it. This has long been something holding OpenGL back, and I’m glad that Longs Peak has put a priority on fixing it.

And instead of attacking my argument, you can suggest what can be done to enhance OpenGL 2.1 and leverage it into a 2.5.
OK, here’s my suggestion, since your “point” is so full of holes as to not even be close to viable: implement Longs Peak and kick 2.1 to the curb ASAP.

Stay away from the unified object model; it’s more beautiful but less flexible, and remember that every feature requires its own object model, both performance- and implementation-wise.
BS. Complete and total.

You’re going to have to offer at least some basic justification for that kind of nonsense. How is the object model “less flexible,” and how does it cause “every feature [to require] its own object model”?

Originally posted by glfreak:
Direct3D 10 has nothing more than geometry shaders, plus an extra abstraction/high-level layer, which is redundant.
Did you actually use D3D10 yet?

The initial statement, coming from an ARB member (the secretary, I think), was that GL was bloated and complicated.

It was time to redesign it. Time to make it a thin API. It would win back a few percent of performance.
It would be easier to write drivers and to debug them.

I want a better, faster API with solid drivers.

"rather than imitating DirectX 10"

So let it imitate. Big deal.

Based on my deep experience in the domain I can judge the new path.
Based on what you write here, your deep experience in the domain seems not so deep.

Do you actually know how the hardware works? Most concepts of OpenGL have to be emulated with shaders to make them work on current hardware. There’s no point in trying to accelerate something the hardware can’t do. Better to concentrate on the things the hardware can do, and implement them properly.
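As one illustration of that emulation: the classic GL_MODULATE texture environment has no dedicated silicon on a shader-based GPU, so the driver roughly compiles the fixed-function state into a fragment shader behind your back. A sketch of the equivalence (assuming one texture unit and per-vertex color; GLSL 1.10 shown as a C string literal, fragment only):

```c
/* What the application asks for, GL 2.1 fixed-function style: */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

/* Approximately what the driver runs on shader-only hardware: */
static const char *modulate_fs =
    "uniform sampler2D tex0;\n"
    "void main(void)\n"
    "{\n"
    "    /* MODULATE = interpolated color times texel */\n"
    "    gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st);\n"
    "}\n";
```

So “accelerating” the fixed-function enums doesn’t mean faster hardware paths; it means the driver maintaining a shader generator for state that shaders already express directly.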

Please read more about the hardware, and how things really work.
You might want to do that yourself. I know how modern GPUs work. Do you?

Yeah, I wanted to trick you with that, and if you could not figure out what “plugins” means, then I will tell you: it’s the extension thing.
An extension and a plugin are two completely different things. OpenGL has extensions, not plugins. Continuing to confuse these terms just shows your lack of basic understanding of the OpenGL API, and it certainly does not help your credibility.

People here are giving you a lot of arguments for why we need Longs Peak, and why the old OpenGL is not optimal. If your experience in the domain is really so deep, you should have no problem finding real arguments for your point (and no, I don’t consider a comparison with a programming language a real argument).

Good to have a village idiot to take your frustrations out on.
I’m jealous it’s not me.

just be patient, your time will come :slight_smile:

Originally posted by Korval:
Like, say, ATi’s current implementation. It got so bad that they had to rewrite the whole thing.

I only read that they rewrote it for Windows Vista starting with Catalyst 7.1, then improved it some more in 7.2.

Off-topic, and the main focus was to attack my argument or my person… never mind!

Why, then, has the traditional OpenGL been the optimal standard for 3D graphics, in terms of API, design, and as a thin hardware abstraction layer?

It has been only a few months, and now you have changed your views radically, compared to the minimal change in hardware. There is a big change of course, as I mentioned before, but it’s limited to resources, GPU speed, and shader caps. None of it really has to do with new hardware. The way we pass geometry to the pipeline is either pushing or buffering vertices… is there anything new that I’m not aware of, hardware geeks?
What about textures? Framebuffers? Is there any real change?

Why, then, has the traditional OpenGL been the optimal standard for 3D graphics, in terms of API, design, and as a thin hardware abstraction layer?
It has most certainly not.

It is the only cross-platform standard hardware abstraction graphics API. When you only have 1, you either use it or don’t do stuff with graphics.

Many people, including the stewards of the API, have pointed out the deficiencies of OpenGL as a “thin hardware abstraction layer.” OpenGL is not optimal for anything. It is functional, and it doesn’t even function very well if you’re, say, using an ATi implementation. Or an Intel one, for that matter.

It has been only a few months, and now you have changed your views radically, compared to the minimal change in hardware.
Who is this “you” you are talking to? I doubt most people here thought that OpenGL was a great API and didn’t need any changes.

glfreak, OpenGL was developed as an API for CAD, for workstations. That is why they built in such things as matrix stacks, matrix modes, vertex types, etc. All of it is meaningless now. Why do I need a matrix mode if I compute all my shading on a programmable GPU? And why do I need commands like glVertex and glColor, when all vertex data is interpreted by the vertex shader anyway? The structure of the pipeline remains almost the same (vertex → primitive → fragment), but it has become much more configurable.

Also, OpenGL has become very “fat,” with an overly complicated specification. For example, why the heck do we need the texture environment modes and combiners when we have shaders? Or the selection mode, which is a performance killer anyway? And do you like the current, absolutely counterintuitive texture/query/whatever creation model, with all that Gen*** and Bind*** horror? Can you imagine how difficult and messy it is to implement such a thing in the driver? When GL was designed, a workstation could have only one texture loaded at a time; later, when texture objects were introduced, the model remained the same, for whatever reason. It is just plain impractical.

The Longs Peak object model is very straightforward, easily extendable, and flexible. It will make life easier for both developers and driver writers. I don’t think that Longs Peak is a “DX10 clone”; of course the functionality is similar (the hardware is the same), but the object model of LP looks much nicer (understandable names, fewer API entries, reusable templates). All in all, I don’t get what you are complaining about.
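For anyone who hasn’t felt the Gen/Bind pain being described, here is the GL 2.1 way of creating a texture (a fragment, assuming an active context). Every edit goes through whatever object happens to be bound, so a stray glBindTexture anywhere in the call chain silently redirects the whole sequence:

```c
GLuint tex;
glGenTextures(1, &tex);             /* reserves a name; no object exists yet  */
glBindTexture(GL_TEXTURE_2D, tex);  /* object is actually created on first bind */

/* All of these implicitly target "whatever is bound", not 'tex': */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);  /* NULL: allocate storage only */

/* The object also stays mutable forever: a later glTexImage2D can change
 * its size or format at any time, so the driver must keep re-validating. */
```

By contrast, the attribute-template model sketched in the OpenGL Pipeline newsletters would gather these attributes up front and create a complete, immutable object in one call (the exact entry points were not final at the time of this thread).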

Thanks Zengar, for your informative, non-offensive reply. That makes sense. But one question: would it be easy to port existing CAD software and other applications and games to the new LP? Is it backward compatible? If so, then how is it easier to implement? We would still have the same old beast of calls around.

Thanks.

Originally posted by glfreak:
Frankly, that sucks.
Crying.

First of all, you don’t need to do any porting :slight_smile: Why would you? OpenGL 2.1 is and will be supported for quite some time. LP is not going to be backwards compatible; it is a completely new API. It only makes sense to use it in new applications that will ship in the coming years. Well, I hope the IHVs will soon have working drivers. Besides, the API designers mentioned the possibility of using both APIs at the same time (for example, rendering to a texture from LP and using it in regular GL). Look in the advanced forum; there were threads about it.

Originally posted by glfreak:
Thanks Zengar, for your informative, non-offensive reply. That makes sense. But one question: would it be easy to port existing CAD software and other applications and games to the new LP? Is it backward compatible? If so, then how is it easier to implement? We would still have the same old beast of calls around.
Do you have any idea how large and complicated such programs are? Sure, it will be done eventually, but it will take time, lots of time. And what do you do about applications that are at the end of their life cycle?
Even today I am preparing my current game engine (plus the new NeHe lessons) to be converted to LP, and I don’t know a whole lot more about LP than most people: mostly what is in the Pipeline newsletter, a few slide presentations, the discussions on this forum, and the logical conclusions from my highly intelligent but messed-up mind.

I don’t think LP will be particularly backwards compatible (and I wouldn’t want it to be), unless you already designed your code specifically for that; this might be why GL 2.x will continue to exist for some time.
I think Longs Peak is more for new software development than for upgrading old software.

It is easier to implement because it has fewer ways of doing things, and coincidentally fewer ways to write a bad implementation.
From my experience, both here and on the gamedev.net forums, the majority of problems people have with OpenGL are not single bugs or OpenGL commands in the wrong order; it’s that people designed the program in a bad way and have painted themselves into a corner. I do think LP will help with these problems.
(Of course, people will still ask why display lists don’t work the same way and how to do bump mapping with p-buffers.)

By backward compatibility, I mean that with a Longs Peak driver we can still run OpenGL 2.1 and older.

While we are exchanging info here about the future of GL, I would like to suggest that the Red, Orange, and Blue Books be open sourced along with the SDK, since GL is open. DX comes with a full guide for free. Let’s have that, please. Can we convince the authors?