Might be Useful!

  1. What about having multiple matrix stacks, like in D3DX? For example, we could create a matrix stack and then select or activate it (see the sketch after this list).

  2. The ability to change texture addressing and filtering mode settings per texture environment rather than per texture object.
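
For illustration only, here is a minimal sketch in C of what such an API might look like, loosely modeled on D3DX’s ID3DXMatrixStack; glGenMatrixStacks and glBindMatrixStack are invented names, not part of any GL spec or extension:

    /* Hypothetical API sketch -- glGenMatrixStacks/glBindMatrixStack do not exist. */
    GLuint stacks[2];
    glGenMatrixStacks(2, stacks);     /* create two independent matrix stacks */

    glBindMatrixStack(stacks[0]);     /* select/activate the first stack */
    glPushMatrix();
    glTranslatef(0.0f, 1.0f, 0.0f);   /* classic stack ops target the bound stack */

    glBindMatrixStack(stacks[1]);     /* switch to the second stack */
    glLoadIdentity();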

  1. There are multiple matrix stacks in some deprecated extensions like ARB_vertex_blend, but nowadays one should use programmable vertex shaders instead, because they provide much more flexibility.

  2. Texture addressing and filtering modes belong to texture objects, not to texture environments, and I think that is the right way. Anyway, texture environments aren’t really used nowadays either; use programmable vertex/fragment shaders instead.

  1. Matrix stacks are simply deprecated, whatever extensions extend them.

  2. I don’t think glfreak was thinking of glTexEnv, but rather of what a lot of people ask for: separate image and sampler/filter objects. That would be a really good feature beyond “texture”.

Exactly. Separate sampler states from texture objects so that we can set TMU states without needing texture objects. Then we could get rid of texture objects and specify the texture image as a separate sampler state, if that makes sense.
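
A minimal sketch in C of how that separation could look; the calls shown are the ones GL 3.3’s ARB_sampler_objects eventually standardized, used here just to illustrate filter/addressing state living outside the texture object:

    /* Sketch only: filtering/addressing state lives in a sampler object,   */
    /* not in the texture.  These are the GL 3.3 ARB_sampler_objects calls. */
    GLuint sampler;
    glGenSamplers(1, &sampler);
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

    /* The same sampler can be bound to any texture unit, overriding the */
    /* per-texture state of whatever texture object is bound there.      */
    glBindSampler(0, sampler);   /* 0 = texture unit index */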

Deprecate matrix stacks UNLESS they are accelerated by HW somehow.

Then if everything is going to be shaders, that’s awesome! We’d just have a shading language and a mini-API to execute it.

Matrix stacks are never HW accelerated. That’s why they are in D3DX and not in D3D. Changing a matrix is not a heavy operation.
Even computing a matrix is not a heavy operation. That’s why there is no equivalent to glTranslate, glRotate, glScale, glOrtho, or glFrustum in D3D, yet they were added to D3DX.
These are the kinds of features that should be added to GLU.
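
To make the “not heavy” point concrete, here is a minimal sketch in C of what a glTranslate-style helper boils down to; the function name is just illustrative:

    /* Minimal sketch: a glTranslate-style matrix is sixteen stores, */
    /* column-major as GL expects -- nothing worth HW acceleration.  */
    static void translation_matrix(float m[16], float x, float y, float z)
    {
        for (int i = 0; i < 16; ++i)
            m[i] = (i % 5 == 0) ? 1.0f : 0.0f;   /* start from identity */
        m[12] = x;   /* the fourth column holds the translation */
        m[13] = y;
        m[14] = z;
    }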

I’d rather go directly toward texture sampling shaders than through some intermediate form first.

Well, this opens up another topic. If the graphics pipeline becomes fully shader oriented, then we don’t need an API. We could have the shader and the executable code in the same place, using special compiler directives that instruct the system which CPU/GPU or core to use.

Well, sort of. If we went down that path, we would only really need an API for loading and unloading buffers, shaders, and things like that.
Like you said, “using special compiler directives” is not far from having a slimline API where you can point to a block of data and state: this is a shader, this block is geometry, and so on; now run those together and put the result here.
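
As a rough sketch of that idea, here is what such a slimline API might reduce to in C; every name below is hypothetical, invented purely to illustrate the shape of the interface:

    #include <stddef.h>

    /* Purely hypothetical mini-API -- none of these names exist anywhere. */
    typedef struct blob blob;            /* opaque handle to a block of GPU data */

    blob *api_upload(const void *data, size_t size);  /* "this block is geometry" */
    blob *api_compile(const char *shader_source);     /* "this is a shader"       */
    blob *api_run(blob *program, blob *input);        /* "run those together and  */
                                                      /*  put the result here"    */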

Tim Sweeney suggested that this is the likely future: you have something like OpenCL and a thin layer of GL (basically for displaying the image) for all your graphics needs.

Not that it will appear all at once, but rather gradually; it starts with everything getting shaderized.

Speaking of the future…

There’s a new conference in town.

AMD, Intel and NVIDIA on board… exciting stuff.

My grandma could make more professional-looking websites.

No, it’s just the old Graphics Hardware conference rebranded, now married to interactive raytracing (???). See the link for details. Probably not enough folks were paying to come to GH…

Slightly OT, but could someone please tell me what SIGGRAPH is doing to tick these people off enough that they won’t just do their thing at SIGGRAPH, right across the street? This co-located conference (same time, same place) has very little going on that makes it a “must see”. I’ll be ponying up for SIGGRAPH, and if these few events were part of SIGGRAPH, I’d go view their presentations. But it hardly makes sense (to me) to shell out $300+ more for just a few paper sessions. Seems to me these guys are just reducing their audience and annoying other graphics folks who would otherwise check them out.