We’ve posted new versions of the OGL 2.0 white papers on our website.
Most papers now come in two flavors: one is just the text of the new version; the other shows the differences from the previous version. If you’re already very familiar with a paper, the difference version gives you a quick look at what changed.
These new versions of the white papers are the result of the tremendous amount of feedback we’ve received from many people, including folks who read this forum, ARB members, and ISVs. Thank you all!
“Programmable hardware revolution”
With NVIDIA, Intel, VIA, SiS and Matrox still releasing NEW 3D hardware with fixed functionality, it would be very useful to have a subset of OpenGL 2 that works on these cards. A lot of the OGL2 stuff can be implemented on them: the asynchronous OpenGL, the memory management, the objects, the vertex shaders (in software), the pack processor (in software, with limited input formats). For the fragment shaders a small subset of the original spec could be used, with functionality more like the Quake3 shader language, supporting the full OGL1.3 feature set. The unpack processor could be limited to a subset of predefined pixel formats and conversions. Many of the ARB extensions, like ARB_texture_env_combine, could also be expressed in the subset fragment shader language. There are many other limitations, but even such a crippled OGL2 would still be very useful. I doubt we’ll see many integrated chipsets that can support the full OGL2 functionality in the next 2-3 years - except for the nForce family.
Something I really like about OpenGL 2.0 is that it has objects for just about everything, from textures to shaders; that makes it easy to wrap them in my own classes. The API seems very uniform in every aspect, something that can’t be said about many extensions today. For instance, fragment shaders use an object model, while register combiners use a state-changing model. I had to objectify register combiners with a display list to get them to fit the object model I’m using.
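A uniform object model is easy to exploit on the application side. Here is a minimal sketch of the kind of wrapper class described above; the create/destroy callables are stand-ins for real GL entry points (e.g. glGenTextures/glDeleteTextures), and the class itself is hypothetical, not part of any spec:

```cpp
#include <cassert>
#include <functional>
#include <utility>

// Hypothetical RAII wrapper: because OGL2 exposes everything as an
// object with a handle, one small class can manage textures, shader
// objects, display lists, and so on uniformly.
class GLObject {
public:
    GLObject(std::function<unsigned()> create,
             std::function<void(unsigned)> destroy)
        : destroy_(std::move(destroy)), handle_(create()) {}
    ~GLObject() { destroy_(handle_); }          // release on scope exit
    GLObject(const GLObject&) = delete;         // handles are unique
    GLObject& operator=(const GLObject&) = delete;
    unsigned handle() const { return handle_; }
private:
    std::function<void(unsigned)> destroy_;
    unsigned handle_;
};
```

With real entry points plugged in, every object type gets identical lifetime handling, which is exactly what a uniform API makes easy to wrap.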
I agree 100% - the sooner we start adjusting our engines and approaching things the “new way”, the better. The docs say “code should be valid ten years from now”, which is the whole and core point of the programmability.
The good thing is that with these docs we can anticipate the future and adjust the design process accordingly.
i would like to have better frame-buffer access than now… meaning some (at least pseudo-)programmability to generate our own blending functions, to set up stencil stuff, depth buffer etc… so we can store whatever we want in the buffers…
that way we could do a signed add in the blending for example, or increment/decrement the stencil depending on front/backface (stored in the s-value, computable in the fragment shaders…)
i did not read the new ones, but as far as i know that part will stay fixed-function for sure… that’s bad… i know combining it with the fragment shaders is not optimal for speed, but at least a programmable blending func would be cool… (and that means our own alpha tests, own blending, own depth test, own stencil ops etc…)
Originally posted by davepermen:
i did not read the new ones, but as far as i know that part will stay fixed-function for sure…
The new papers introduce gl_FBColor, gl_FBDepth, gl_FBStencil (and other) variables that are available in the fragment shader.
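If the framebuffer really is readable through variables like gl_FBColor, the signed-add blend davepermen asks for becomes ordinary shader arithmetic. As a sanity check of the math, here is a CPU-side model of that per-fragment computation; the function and variable names are illustrative only, not taken from the papers:

```cpp
#include <algorithm>
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;

// Model of a programmable blend stage: the "shader" reads the existing
// framebuffer color (playing the role of gl_FBColor) and writes any
// function of it. Here: a signed add, where the incoming color is
// recentered around zero so it can darken as well as brighten.
Vec4 signedAddBlend(const Vec4& fbColor, const Vec4& srcColor) {
    Vec4 out{};
    for (int i = 0; i < 4; ++i) {
        float signedSrc = srcColor[i] - 0.5f;        // map [0,1] to [-0.5,0.5]
        float v = fbColor[i] + signedSrc;
        out[i] = std::min(1.0f, std::max(0.0f, v));  // clamp to [0,1]
    }
    return out;
}
```

A source value of 0.5 leaves the buffer untouched, values above brighten it, values below darken it - something a single fixed-function glBlendFunc pass cannot express.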