OpenGL

I mean by backward compat. that with a Longs Peak driver, we can still run OpenGL 2.1 and older.
That’s entirely up to the implementer. If they wish to support 2.1 (and I see no reason why they wouldn’t, considering how much software runs on it), then they will. What will happen is that there won’t be any more 2.1 extensions beyond what already exists; 2.1 will thus stall exactly where it is, and any and all new features will be based on Longs Peak only.

You know, we’ve had a number of discussions on these subjects in the advanced forum for the past month or so. You should read them.

Originally posted by Zengar:
That is why they build such things in as matrix stacks, matrix modes, vertex types etc. All of it is meaningless now.
I disagree.

Originally posted by Zengar:
Why do I need a matrix mode if I compute all my shading with programmable GPU? And why do I need such commands like “glVertex, glColor”, when all vertex data is interpreted by vertex shader anyway?
Very easy, because:

  1. There are still many applications (probably not in gaming, but for instance in CAD/CAE) for which the fixed pipeline works pretty well, so why should developers spend time implementing shaders when they only need the fixed pipeline?

  2. I have to make my apps run on older (3+ years) hardware, which does not support shaders, as well as on new hardware, which does. If I use the fixed pipeline on all hardware, I have to maintain less code.

  1. Yes, I agree, it is a good point for CAD. Still, implementing a shader is trivial (and there will without doubt be layered libraries that emulate fixed-function; see the sketch after this list). Yet I don’t want such things in a graphics API; this belongs in a higher-level library. A graphics API should expose what the hardware can do, no more and no less.

  2. I assume that LP targets only newer hardware.
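To give a sense of how small such an emulation actually is, here is a minimal sketch, assuming GL 2.1-era GLSL with the built-in state uniforms, of a shader pair that reproduces the fixed-function transform plus one directional diffuse light. It is an illustration only, not a drop-in replacement for the whole fixed pipeline.

    // Minimal fixed-function-style shaders (GLSL 1.20 era), stored as C++ strings
    // the way they would be handed to glShaderSource. Uses the GL 2.1 built-in
    // state uniforms (gl_ModelViewProjectionMatrix, gl_LightSource, ...).
    const char* kFixedFuncVert =
        "varying vec4 v_color;\n"
        "void main()\n"
        "{\n"
        "    // same transform the fixed pipeline applies\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "    // one directional light, ambient + diffuse only\n"
        "    vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n"
        "    vec3 l = normalize(gl_LightSource[0].position.xyz);\n"
        "    float d = max(dot(n, l), 0.0);\n"
        "    v_color = gl_FrontMaterial.ambient * gl_LightSource[0].ambient\n"
        "            + gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * d;\n"
        "}\n";

    const char* kFixedFuncFrag =
        "varying vec4 v_color;\n"
        "void main() { gl_FragColor = v_color; }\n";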

I assume that LP targets only newer hardware.
It should support older hardware, too. But I assume this means only hardware capable of OpenGL 2.0 (or at least GLSL).

Personally, I do not care whether there is a glSetupFixedPipeline() in libGL or a gluSetupFixedPipeline() in libGLU. I just wanted to state that I still see a need for a fixed pipeline.

And concerning glBegin, glVertex, glColor etc.: these are nice helpers if you start writing code from scratch. If for some reason you need to draw only a few lines, circles, maybe a quad or a cylinder (which I really needed lately to cast a simple shadow), it is much more intuitive and faster to program (at least for me) to use glVertex than to set up a vertex array.
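For what it’s worth, the contrast being described looks roughly like this in GL 1.x-style C code (assuming an active context and <GL/gl.h>); neither snippet is LP code, it only shows why glVertex feels quicker for one-off geometry.

    // Immediate mode: a one-off quad in a handful of self-explanatory calls.
    glBegin(GL_QUADS);
        glColor3f(1.0f, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();

    // The vertex-array equivalent: better for large batches, more ceremony
    // for a quick throwaway quad.
    static const GLfloat quad[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         1.0f,  1.0f, 0.0f,
        -1.0f,  1.0f, 0.0f,
    };
    glColor3f(1.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, quad);
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);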

Maybe that is only my personal opinion, but when I start programming something new, the first thing I want is quick progress in coding.

There are still many applications (probably not in gaming, but for instance in CAD/CAE) for which the fixed pipeline works pretty well, so why should developers spend time implementing shaders when they only need the fixed pipeline?
Then they should spend the time to write an appropriate wrapper.

After all, game developers don’t have random bits of GL code wandering around in their engine; it is appropriately cordoned off in the section entitled “Rendering system”.

Yes, immediate mode is useful. But it is also something you could write on top of Longs Peak, so there’s no point in making every implementer write it for you.
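As a rough sketch of what “writing it on top” could look like: an immediate-mode shim that just buffers vertices on the CPU and flushes them with a single buffered draw. The names imBegin/imVertex3f/imEnd are invented for illustration, and the flush uses GL 1.5 VBO calls as a stand-in, since the actual Longs Peak object API isn’t final.

    #include <vector>
    // Assumes GL 1.5+ entry points are available (via a loader on Windows)
    // and that g_vbo was created once with glGenBuffers.
    static std::vector<float> g_verts;
    static GLenum g_mode;
    static GLuint g_vbo;

    void imBegin(GLenum mode) { g_mode = mode; g_verts.clear(); }

    void imVertex3f(float x, float y, float z)
    {
        g_verts.push_back(x); g_verts.push_back(y); g_verts.push_back(z);
    }

    void imEnd()
    {
        if (g_verts.empty()) return;
        // Upload the accumulated vertices and issue one draw call.
        glBindBuffer(GL_ARRAY_BUFFER, g_vbo);
        glBufferData(GL_ARRAY_BUFFER, g_verts.size() * sizeof(float),
                     &g_verts[0], GL_STREAM_DRAW);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);   // offset 0 into the bound VBO
        glDrawArrays(g_mode, 0, (GLsizei)(g_verts.size() / 3));
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }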

It should support older hardware, too. But I assume this means only hardware capable of OpenGL 2.0 (or at least GLSL).
Longs Peak should be able to run on any DX9-capable hardware.

Originally posted by Zengar:
1. Yes, I agree, it is a good point for CAD. Still, implementing a shader is trivial (and there will without doubt be layered libraries that emulate fixed-function). Yet I don’t want such things in a graphics API; this belongs in a higher-level library. A graphics API should expose what the hardware can do, no more and no less.

2. I assume that LP targets only newer hardware.
Not only is implementing shaders trivial, but the rendering system in CAD is simple. They just dump everything to screen and probably don’t even bother with culling.

CAD = Stagnating software that never gets updated?

Good feedback from all of you guys. Thanks.

LP should be called LP, and not OpenGL, if those drastic changes are really going to happen.

Don’t forget that one of the goals of OpenGL is to have a stable API design that is extendable but at the same time preserves its core and backward compatibility.

Regarding ATI’s implementation: it’s pretty obvious the implementors simply failed to provide decent drivers, and that has nothing to do with the API design, even if the API is a bit hard to implement.

Imagine if IHVs had to implement the Direct3D library. Instead, they only provide a thin, basic communication layer that D3D uses to access the hardware; hence D3D is layered.

LP should be called LP, and not OpenGL, if those drastic changes are really going to happen.
Why? Just because you change how the API works doesn’t mean that it’s something new. OpenGL ES isn’t backwards compatible with OpenGL, but they use the name.

Or is it merely your personal dislike of the LP API showing through?

Don’t forget that one of the goals of OpenGL is to have a stable API design that is extendable but at the same time preserves its core and backward compatibility.
The goals of OpenGL are exactly and only what the ARB decide they are. Furthermore, if potential users of the API do not value said goals, then an API that fulfills them is fairly worthless, and writing such a specification is a waste of everyone’s time.

Regarding ATI’s implementation: it’s pretty obvious the implementors simply failed to provide decent drivers, and that has nothing to do with the API design, even if the API is a bit hard to implement.
Well, let’s think about this.

There are, basically, 3 major hardware implementations of OpenGL: nVidia, ATi, and Intel. Of these, only one of them is something that someone should trust critical code with.

You could say that Intel and ATi coders just suck. However, these are very large companies, and many of their other software products don’t suck. Intel in particular has demonstrated the ability to implement far more complex specifications (C/C++ compilers), and to do so successfully. This suggests a more likely alternative: OpenGL 2.1 is a specification that is needlessly difficult to implement.

Furthermore, saying that it is an easy API to implement is false just from a study of the spec. It has a bunch of features (selection, display lists with non-geometry stuff, etc.) that few if any people ever actually use, but that require huge quantities of effort to implement. It has multiple overlapping and conflicting APIs for doing the exact same thing, which also require loads of effort to implement, particularly when the hardware only has one way to do something.

Originally posted by glfreak:
Don’t forget that one of the goals of OpenGL is to have a stable API design that is extendable but at the same time preserves its core and backward compatibility.
It has other goals too: high performance and being cross-platform. Yes, LP will be quite different.

Originally posted by glfreak:
Imagine if IHVs had to implement the Direct3D library. Instead, they only provide a thin, basic communication layer that D3D uses to access the hardware; hence D3D is layered.
LP will be the lower layer which I would prefer to use. It’s possible to write a GL 2.5 layer on top of this.

Bottom line: you don’t make much sense.
Most people would kill to get more performance and stability.
I don’t know what you are working on, but whether it’s a CAD app or a game, going from GL 2.1 to LP shouldn’t be painful.

If you have pre-VBO code or pre-FBO code, then it can be painful.

Originally posted by V-man:
Not only is implementing shaders trivial, but the rendering system in CAD is simple. They just dump everything to screen and probably don’t even bother with culling.

CAD = Stagnating software that never gets updated?
Not only is rearchitecting a fixed function application to use a shader NOT trivial, but modern CAD rendering is anything BUT simple. The number of things that are probably being dropped from GLSL for LP/ME will only make that transition that much more difficult.

Not only is rearchitecting a fixed function application to use a shader NOT trivial, but modern CAD rendering is anything BUT simple.
Not to sound too much like a jerk, but that’s “your” fault for not writing your code better. And by “your”, I mean whoever wrote the original code. And if that’s not you specifically, I’m sorry, but them’s the breaks. Sometimes you get a decently written codebase, and sometimes you get crap. But OpenGL shouldn’t stand still just because you got stuck with a crappy codebase.

Modern games have infinitely more complex rendering pipelines than CAD programs, yet they can handle rendering in either OpenGL or D3D at the flick of an option switch. Which is, I assure you, far more difficult than switching code from fixed-function to LP.

Plus, you don’t need to actually rearchitect anything. Just write a “simple” layer that performs the appropriate GL 2.1 stuff. That is, write a mini-GL 2.1 implementation that implements only the functions you need. Writing a shader to do the texture environment-style blending isn’t entirely trivial (for multiple textures), but it’s far from onerous.
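For instance, a sketch of the texenv-style blending mentioned above: GL_MODULATE across two texture units written as a GL 2.1 fragment shader string. It assumes the matching vertex shader writes gl_FrontColor and gl_TexCoord[0]/[1] (e.g. gl_TexCoord[0] = gl_MultiTexCoord0).

    // Two-unit GL_MODULATE emulation (GLSL 1.20 era).
    const char* kModulate2Frag =
        "uniform sampler2D tex0;\n"
        "uniform sampler2D tex1;\n"
        "void main()\n"
        "{\n"
        "    vec4 c = gl_Color;\n"                        // interpolated primary color
        "    c *= texture2D(tex0, gl_TexCoord[0].st);\n"  // unit 0: MODULATE
        "    c *= texture2D(tex1, gl_TexCoord[1].st);\n"  // unit 1: MODULATE
        "    gl_FragColor = c;\n"
        "}\n";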

There’s a word for this. It’s called “refactoring.”

The number of things that are probably being dropped from GLSL for LP/ME will only make that transition that much more difficult.
What is being “dropped” from glslang? Things are being dropped around glslang, yes, but no features of glslang are being removed.

tranders, you seem quite lazy to me. What graphics-related tasks are you doing on a day-to-day basis? You seem afraid of even touching your own code.
Personally, I rarely touch OpenGL. I have it encapsulated in a device class. To change over to any other drawing API would be, frankly, trivial.
Maybe you should hire me?
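Something like the following, as a sketch with invented names (not from anyone’s actual codebase): the application only ever talks to the abstract device, so retargeting to Longs Peak or D3D means writing one new subclass rather than touching the callers.

    // Abstract device the rest of the application codes against.
    class RenderDevice {
    public:
        virtual ~RenderDevice() {}
        virtual void setTransform(const float matrix[16]) = 0;
        virtual void drawTriangles(const float* positions, int vertexCount) = 0;
    };

    // GL 2.1 backend; a LongsPeakDevice or D3DDevice would implement the
    // same interface. (Projection is assumed to be folded into the matrix
    // or left as identity, to keep the sketch short.)
    class GL21Device : public RenderDevice {
    public:
        void setTransform(const float matrix[16]) {
            glMatrixMode(GL_MODELVIEW);
            glLoadMatrixf(matrix);
        }
        void drawTriangles(const float* positions, int vertexCount) {
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, positions);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            glDisableClientState(GL_VERTEX_ARRAY);
        }
    };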

We are getting off-topic with all this commentary and over-criticising of others.

Then would the PS3 and PSP, or any other console, be ready to re-create their OpenGL-based SDKs to meet LP?

Who suggested LP by the way? If not 3DLabs, then sorry I cannot trust that.

Your being considered a jerk for accusing someone of writing bad code without any idea of the kind of code that person writes speaks for itself.

We are getting off-topic with all this commentary and over-criticising of others.
Considering some of the more ludicrous and/or ignorant claims on this thread, criticism should be expected.

Then would the PS3 and PSP, or any other console, be ready to re-create their OpenGL-based SDKs to meet LP?
1: Why would they need to? It’s not like new GL features would matter to them. Hell, PSP can’t even support half of OpenGL as it is.

2: It’d probably be a good idea for them, as it’s more efficient for their hardware.

Who suggested LP by the way? If not 3DLabs, then sorry I cannot trust that.
3D Labs doesn’t matter anymore; they don’t have a say in OpenGL since they’re not making OpenGL parts. They exist, but they make OpenGL ES parts now.

Further, pretty much everyone suggested Longs Peak. Not necessarily the exact results, but the general idea of a revamped API that served everyone better. And pretty much everyone supports the “backwards compatibility be damned” aspects of this design.

Your being considered a jerk for accusing someone of writing bad code without any idea of the kind of code that person writes speaks for itself.
Well, let’s look at the situation.

In terms of the specific complaint, good code would be code where the rendering is properly abstracted from the rest of the CAD program, such that changing, perhaps radically, how the rendering works would not require substantial modifications to the code that uses the renderer. Bad code would be code that does not provide such an abstraction, or said abstraction is not sufficiently abstract.

(note: I’m not saying that this is the definition of “good code”. I’m only using this for the purposes of this discussion)

Now, suppose you’re complaining about Longs Peak having an API that makes fixed-function-style programming more difficult. If you’re working with good code, you have no right to complain. Just do your job, take a couple of weeks (an outside estimate) and write what you need to make everything happy with Longs Peak. If you’re working with bad code, someone wrote bad code. As I said, them’s the breaks; you’re having to maintain bad code, and that’s unfortunate. Believe me, I feel your pain, as I’m taking care of some hideous code at the moment myself.

However, instead of complaining about how you’re getting the short end of the stick, you could use this as an opportunity to turn that bad code into good code. Since you’re already going to have to change how rendering works drastically, you may as well make it into a proper abstraction. So that the next time the graphics rendering system changes substantially (say, when raytracing comes along), you (or whomever is maintaining the code) will be set.

Alternatively, you could simply not use Longs Peak. After all, nobody’s making you. The 2.1 implementation will still be there for many years (or, at least, 2-5 years). Just like a developer did not have to switch from D3D 5 to D3D 6 just because a new version of DirectX came out.

But if you find yourself needing new features, well, tough. You’re in the 1-5% of the OpenGL-using population for whom using Longs Peak offers few incentives. Somebody’s got to be, and it just happened to include you.

Korval, please use this library OpenRL, which means Open Respect Library. LoL.

Hey, ever yawn so wide you thought you might swallow your own head?

Originally posted by knackered:
tranders, you seem quite lazy to me.
If the desire to NOT support multiple APIs with a myriad of extensions is considered “lazy” then you hit the nail on the head – but that’s exactly what LP is – the desire to drop support for multiple APIs with its myriad of extensions. So in effect you are saying that the graphics card vendors (and the ARB by extension (pun intended)) are lazy – I’m sure they will appreciate that analysis.

If this inference is not accurate, then accept my apologies for agreeing with the OP.

Originally posted by tranders:
Originally posted by knackered:
tranders, you seem quite lazy to me.
If the desire to NOT support multiple APIs with a myriad of extensions is considered “lazy” then you hit the nail on the head – but that’s exactly what LP is – the desire to drop support for multiple APIs with its myriad of extensions. So in effect you are saying that the graphics card vendors (and the ARB by extension (pun intended)) are lazy – I’m sure they will appreciate that analysis.

If this inference is not accurate, then accept my apologies for agreeing with the OP.
I disagree. The ARB (which has now transferred power to the Khronos Group) is moving quicker.

Back in the day of DX8, MS quickly added support for shaders while the ARB was divided.
NV made their own register combiners and ATI made their own shaders. It was hell. The ARB was slow at making ARB extensions.

For the first time in history, the ARB is acting like a single entity. Even ATI did some major cleaning of their GLSL code.

“myriad of extensions”

I doubt it. You are probably coding against GL 1.1 and treating higher level functions as extensions. Are you using VBOs?

CAD is simple. No scene support. No lightmaps. No special lighting features. No culling (Octree or whatever), no character animation, no stencil shadows, no shadow maps, no bump maps, no glow, no lens flares, no HDR.
When you are designing an engine, a solar car, or a PCB, you don’t need those things.