OpenGL 1.4

What’s going on here? The news media are trumpeting the ‘availability of the OpenGL 1.4 spec’. I’d like to have a look. But when I look here: http://www.opengl.org/developers/documentation/specs.html

… well. Nothing!
Where is it?

It’s true that the specifications are ready, but that doesn’t mean they are published yet. It’s just a matter of time; please be patient. Anyway, you won’t be able to use those specifications immediately, since you will have to wait for manufacturers to write drivers for them.

Originally posted by vincoof:
It’s true that the specifications are ready, but that doesn’t mean they are published yet. It’s just a matter of time; please be patient. Anyway, you won’t be able to use those specifications immediately, since you will have to wait for manufacturers to write drivers for them.

Sure, I know how that works. I want to see it regardless, just to know what’s in store.

And what about people like Brian Paul, who’d want to start work on new implementations right away? Or even IHVs? Do these people get some kind of private access? I always thought the spec was public for everyone, once it’s final.

The ARB ( as you probably know ) consists of several of the leading IHVs. So, I would assume they all have the various specs ( otherwise, how can they vote on them ? ). I’m pretty sure most of it has already been implemented and just needs to be enabled.

I’m looking forward to the ARB_vertex_program spec. I’m guessing it’s as powerful as DX’s VS2.0 but in a way that allows it to be implemented on older generation hardware ( likely via an OPTION mechanism, like NV_vertex_program1_1 ). The rest of GL1.4 is pretty much available right now ( ARB_shadow, etc ).
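In case anyone hasn’t played with it, the NV_vertex_program1_1 option line looks roughly like this (written from memory, so treat it as a sketch), and my guess is the ARB version will expose optional features the same way:

!!VP1.1
OPTION NV_position_invariant;
# with this option the fixed-function transform generates the position,
# so the program only writes the outputs it actually changes
MOV o[COL0], v[COL0];
END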

Originally posted by PH:
The ARB ( as you probably know ) consists of several of the leading IHVs. So, I would assume they all have the various specs ( otherwise, how can they vote on them ? ). I’m pretty sure most of it has already been implemented and just needs to be enabled.
I’m pretty sure that SiS, Trident and the Mesa guys are not currently ARB members

I’m looking forward to the ARB_vertex_program spec. I’m guessing it’s as powerful as DX’s VS2.0 but in a way that allows it to be implemented on older generation hardware ( likely via an OPTION mechanism, like NV_vertex_program1_1 ). The rest of GL1.4 is pretty much available right now ( ARB_shadow, etc ).
And I’m looking forward to the whole 1.4 thing
I’d just like to know what exactly it is. So gimme the spec. Hurry!

Originally posted by zeckensack:
I’m pretty sure that SiS, Trident and the Mesa guys are not currently ARB members

I don’t know about those, but they are hardly the leading IHVs.

I don’t think the specs are big secrets and if you sign some sort of ARB agreement you’ll have access to a lot more ( ARB meetings, mailing lists, etc ).

Originally posted by pocketmoon:
and another new toy to go with it, hopefully OpenGL 1.4 compliant

http://www.anandtech.com/video/showdoc.html?i=1656


Wow, 110 million transistors on a .15 micron process. They don’t say how much power the card will consume, but it’s gonna blow the AGP port! Look at the section that does 2D operations. It is tiny compared to the 3D sections.

V-man

The ARB_vertex_program spec has been approved and posted:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_program.txt
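For the impatient, here is a rough sketch of driving it from C++ (untested; the entry points are assumed to have been fetched with wglGetProcAddress / glXGetProcAddressARB, and glext.h is assumed to provide the ARB_vertex_program tokens and typedefs):

#include <cstdio>
#include <cstring>
#include <GL/gl.h>
#include <GL/glext.h>

// Fetched elsewhere with wglGetProcAddress / glXGetProcAddressARB.
extern PFNGLGENPROGRAMSARBPROC   glGenProgramsARB;
extern PFNGLBINDPROGRAMARBPROC   glBindProgramARB;
extern PFNGLPROGRAMSTRINGARBPROC glProgramStringARB;

void setupTestVertexProgram()
{
    // About the smallest legal program: the OPTION hands position
    // transformation back to the fixed-function pipeline.
    static const char vp[] =
        "!!ARBvp1.0\n"
        "OPTION ARB_position_invariant;\n"
        "MOV result.color, vertex.color;\n"
        "END\n";

    GLuint id;
    glGenProgramsARB(1, &id);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, id);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(vp), vp);

    if (glGetError() != GL_NO_ERROR)
    {
        GLint errPos = 0;
        glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &errPos);
        fprintf(stderr, "vertex program error at %d: %s\n", errPos,
                (const char*)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        return;
    }
    glEnable(GL_VERTEX_PROGRAM_ARB);
}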

Hmm, quickly browsing the spec indicates that it’s similar to my own VP language ( that currently compiles to EXT_vertex_shader ).

My library will likely be open source when complete - it provides a unified vertex programming API and a unified interface to vertex arrays. Guess I’ve been wasting my time with the release of ARB_vertex_program.

My vertex programs look like this,

/*

Vertex Program Test

*/

//-------------------------------------------------------

//
// Varying data ( input to fragment program )
//

varying vec4 TangentSpaceLightVector(oTex0);
varying vec4 NormalmapCoords(oTex1);
varying vec4 AttenuationMapCoords(oTex2);
varying vec4 DiffusemapCoords(oTex3);
varying vec4 TangentSpaceEyeVector(oTex4);

//
// Per-vertex data ( input from application )
//

attribute vec4 VertexPosition(vPos);
attribute vec4 BaseTexCoords(vTex1);
attribute vec4 Tangent;
attribute vec4 Binormal;
attribute vec4 Normal;

//
// Per-primitive data
//

uniform vec4 LightPosition;
uniform vec4 EyePosition;
uniform vec4 RangeScale;

//
// Local Constants
//

const vec4 Constants = { 0.5 0.5 0.5 0.5 };

//
// Temps
//

vec4 LightVector;
vec4 EyeVector;

//-------------------------------------------------------

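//
// Compute vectors from the vertex to the light and to the eye
//
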
SUB LightVector.xyz, LightPosition, VertexPosition;
SUB EyeVector.xyz, EyePosition, VertexPosition;

//
// 3D Attenuation
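// (the light vector is scaled and biased around 0.5 for the 3D attenuation texture lookup)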
//

MAD AttenuationMapCoords.xyz, -LightVector, RangeScale.x, Constants.x;

MOV NormalmapCoords.xy, BaseTexCoords;
MOV DiffusemapCoords.xy, BaseTexCoords;

//
// Transform light vector into tangent space
//

DP3 TangentSpaceLightVector.x, Tangent, LightVector;
DP3 TangentSpaceLightVector.y, Binormal, LightVector;
DP3 TangentSpaceLightVector.z, Normal, LightVector;

//
// Transform eye vector into tangent space
//

DP3 TangentSpaceEyeVector.x, Tangent, EyeVector;
DP3 TangentSpaceEyeVector.y, Binormal, EyeVector;
DP3 TangentSpaceEyeVector.z, Normal, EyeVector;

//-------------------------------------------------------

Conceptually, what the extension defines is an application-defined program (admittedly limited by its sequential execution model) for processing vertices, so the term “vertex program” is the more accurate one.

actually it’s a user-defined, inlined vertex-processing callback function. not a program, not a shader. at least they finally got it together. happy

Man, that’s the biggest extension spec I’ve ever seen!

Christ on a bike!
How big’s the pixel shader spec gonna be?!

I’m glad to see the large number of people who have contributed to it. This is what I think the old spirit of OpenGL was: clever people working together, using their minds to do the right things.
Have you noticed that EXT_stencil_two_side appeared as well? Maybe IHVs have realized that what matters is not having different extensions for more or less the same thing, but implementing it efficiently in their HW.
It is good news that the spec exists. I hope the MS claims are nothing more than scaremongering and that we can see it implemented, soon, in current HW.
Personally, I have no problem if the fragment shader spec turns out to be long too. I just want to have it sooner rather than later.
And what about a common vertex array extension to replace the NV_VAR and ATI_VAO code?
I think the board (ARB) is doing a good job. It would be better if they were faster (vertex programs appeared nearly two years ago, but…)
Congratulations.

You may be right, Cab - since VAO isn’t compatible with ARB_vertex_program ( unless the spec is updated ), perhaps a common extension will follow.
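To make the motivation concrete, here is roughly what the two vendor paths look like today for the same “static vertices in fast memory” job (a sketch only: the extension entry points are assumed to be fetched already, error handling is omitted, and the Vertex layout is just an example):

#include <cstddef>      // offsetof
#include <cstring>      // memcpy
#include <GL/gl.h>
#include <GL/glext.h>   // plus the wgl extension header for wglAllocateMemoryNV

struct Vertex { float pos[3]; float tex[2]; };

// NVIDIA path: NV_vertex_array_range
void setupStaticNV(const Vertex* src, int count)
{
    GLsizei bytes = (GLsizei)(count * sizeof(Vertex));
    // priority ~0.5 requests AGP memory, ~1.0 video memory
    Vertex* mem = (Vertex*)wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);
    memcpy(mem, src, bytes);
    glVertexArrayRangeNV(bytes, mem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), mem->pos);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), mem->tex);
}

// ATI path: ATI_vertex_array_object
void setupStaticATI(const Vertex* src, int count)
{
    GLsizei bytes = (GLsizei)(count * sizeof(Vertex));
    GLuint buf = glNewObjectBufferATI(bytes, src, GL_STATIC_ATI);
    glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, sizeof(Vertex), buf, 0);
    glArrayObjectATI(GL_TEXTURE_COORD_ARRAY, 2, GL_FLOAT, sizeof(Vertex),
                     buf, (GLuint)offsetof(Vertex, tex));
}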

for the common vertex array extension i’d suggest diving directly into the GL_GL2 exts… i think there aren’t big problems in implementing those on today’s hw. it’s just that today’s vertex and pixel shaders are crap. but the r300 and nv30 vertex shaders could already do a great job for most of the GL_GL2_vertex_shaders, i think… possibly not complete support, but at least they could fit the interface… it would just be too sweet…

oh, and btw, i really like the named registers… much more handy than r1 //something, r2 //something else etc…

especially for big shaders… (and they are coming)

i just hope they expose fragment shaders only for r300/nv30+ hw, else the ext is dated from the start (sort of a ps1.3) and we have to wait for yet another ext for the new hw… (a ps2.0)…

oh, and yes, that thing is HUGE (107 pages to print out)

Originally posted by PH:
You may be right, Cab - since VAO isn’t compatible with ARB_vertex_program ( unless the spec is updated ), perhaps a common extension will follow.

I hope you are right. After several years with these GPUs on the market, offering functionality that is essentially the same across IHVs, and with OGL 1.4 defined, I think it is time to have one.
I like GL2 Vertex Array Objects with direct pointers (something the ATI extension should have had), and I think every IHV can implement them easily, since they are nearly the same as D3D vertex buffers, which everyone already has implemented in their drivers.
Personally, I think NV_VAR is more flexible, but GL2 VAO would be enough.
Maybe a poll on www.opengl.org could help
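For reference, this is the D3D8 vertex buffer model I’m comparing against (a sketch: ‘device’, ‘vertices’, ‘bytes’, ‘fvf’ and ‘stride’ are assumed to exist, and HRESULT checks are omitted):

IDirect3DVertexBuffer8* vb = 0;
device->CreateVertexBuffer(bytes, D3DUSAGE_WRITEONLY, fvf,
                           D3DPOOL_DEFAULT, &vb);

BYTE* data = 0;
vb->Lock(0, bytes, &data, 0);     // lock...
memcpy(data, vertices, bytes);    // ...copy the app data in...
vb->Unlock();                     // ...and let the driver place/convert it

device->SetStreamSource(0, vb, stride);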


Originally posted by Cab:

I agree with Matt who pointed out that direct pointers are a bad idea:

If the driver wants to hide implementation details (e.g. internal formats), the driver has to copy for a lock, in addition to the copy the application performs to write the data through the pointer -> one additional copy operation -> slower.

Originally posted by folker:
I agree with Matt who pointed out that direct pointers are a bad idea:

If the driver wants to hide implementation details (e.g. internal formats), the driver has to copy for a lock, in addition to the copy the application performs to write the data through the pointer -> one additional copy operation -> slower.

I don’t agree with that. The driver writer can implement it however he wants, and it will be more or less efficient depending on his skills and his hw’s possibilities. I like the way you can do it in D3D. I wrote an application to test it when D3D8 was released, and the speed was more or less the same as with VAR. But since then nVidia seem to have optimized their drivers, and it now runs at more than double the speed of the NV VAR version. Why? Because it was a simple demo where most of the geometry is static objects, and it seems the D3D driver now stores most of them in card memory and uses AGP for the dynamic VBs.
This is something you can’t do with VAR, as you can only have one buffer and you will probably allocate it in AGP memory (unless you don’t have any dynamic objects…).
It seems that Matt comments negatively on any idea that comes from OGL2. I suppose it is because it comes from 3DLabs. Maybe, as an attendee of the ARB meetings, he could propose a common approach to this problem instead of complaining about other people’s ideas.
Anyway, I don’t want to start a polemic. I just want a common extension for submitting vertex data efficiently to the GPU. Don’t you?

Originally posted by Cab:

When using pointer semantics, there is exactly one copy operation, and the driver can perform all required format conversions (including swizzling in the case of textures) during this copy. When using lock semantics, the application usually has to copy the data into the locked buffer first, and then the driver often has to copy it again due to format conversions.

For example, when implementing video textures with DirectShow, each video frame is copied twice in d3d and once in OpenGL.

And to really get the best performance, you should never touch your data again anyway. For special cases you cannot avoid dynamic data (e.g. video textures), but vertex data especially you shouldn’t touch. And then the pointer mechanism is very elegant and abstracts the hardware.
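To spell out the copy counting for the video-texture case, a sketch (names are illustrative only: ‘frame’ is a decoded BGRA frame in system memory, ‘texture’ an IDirect3DTexture8*, and pitch handling is simplified):

// OpenGL pointer semantics: the driver's single copy can convert /
// swizzle the data on the way in.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, frame);            // 1 copy

// D3D lock semantics: the app copies into the locked surface first,
// and the driver may still have to copy/swizzle again afterwards.
D3DLOCKED_RECT lr;
texture->LockRect(0, &lr, NULL, 0);
for (int y = 0; y < height; ++y)                              // copy 1 (app)
    memcpy((BYTE*)lr.pBits + y * lr.Pitch,
           frame + y * width * 4, width * 4);
texture->UnlockRect(0);                                       // copy 2 can happen here (driver)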

Originally posted by folker:
When using pointer semantics, there is exactly one copy operation, and the driver can perform all required format conversions (including swizzling in the case of textures) during this copy. When using lock semantics, the application usually has to copy the data into the locked buffer first, and then the driver often has to copy it again due to format conversions.

For example, when implementing video textures with DirectShow, each video frame is copied twice in d3d (except if you don’t use texture swizzling and the video frame format is supported directly by the hardware, in which case you also get only one copy) and always only once in OpenGL.

And to really get the best performance, you should never touch your data again anyway. For special cases you cannot avoid dynamic data (e.g. video textures), but vertex data especially you shouldn’t touch. And then the pointer mechanism is very elegant and abstracts the hardware.

Originally posted by folker:

That is the case when you read your data from disk and want OGL to transfer it to the card/AGP as it is. But when you are creating or dynamically modifying your geometry from the mesh you have read, the lack of an AGP pointer where you can store your data sequentially means that you have to store the modified data in main memory and then let the driver copy it to AGP memory. That means 1 copy (you transforming the data and storing it straight into AGP memory) vs. 2 copies (transforming it and storing it in sys memory, and then the driver copying it to AGP memory).
The problem you describe usually concerns static models that are loaded just once at the beginning of the program/game level/… so the double copy is not a big problem. The case I’m describing is dynamic geometry that is copied every frame.

Anyway, without the VAR or VAO extensions the current OGL model is the worst of all, as it keeps every model in sys mem and has to copy the vertices every time you want to draw one. You probably know this, and it is why the VAR and VAO extensions exist. So I think we need a common extension/mechanism to solve it, instead of different extensions for the same thing. Don’t you think so?
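To make the dynamic case concrete, a sketch (NV_VAR is used here only because it is the only shipping way to get a direct pointer today; Vertex, animate() and the buffers are illustrative):

// With a direct pointer into AGP memory, the per-frame transform writes
// straight to where the GPU will read:
for (int i = 0; i < count; ++i)
    agpVerts[i] = animate(srcVerts[i]);       // 1 copy, straight into AGP
glFlushVertexArrayRangeNV();                  // make the writes visible
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), agpVerts);
glDrawArrays(GL_TRIANGLES, 0, count);

// Without it, the app writes to system memory and the driver has to copy
// the data again when the arrays are drawn:
for (int i = 0; i < count; ++i)
    sysVerts[i] = animate(srcVerts[i]);       // copy 1 (app, sys mem)
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), sysVerts);
glDrawArrays(GL_TRIANGLES, 0, count);         // copy 2 (driver pulls the data)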