Some OpenGL improvement suggestions

Here are some OpenGL improvement suggestions I thought of. Many come from RenderMan. I know that OpenGL is not designed to be a real-time RenderMan, but some of these ideas could drive OpenGL development towards very interesting concepts. In any case, many people already try to use OpenGL as a kind of real-time RenderMan.

New shader types

Shaders are great because they let us redefine what the fixed function pipeline used to do. But for me, the current way of programming them is not the most convenient. Different BRDF? Different shader. That is OK. But the same BRDF with a different number of textures, number of lights, light type, fog,… all need a different shader (or one big and slow shader). For that, the fixed function pipeline was more convenient.

I suggest creating new types of shaders to subdivide the processing and allow a more modular shader design. Examples of new types:

Light source shaders:
Defines how a light illuminates surfaces: e.g. directional light, spot light, point light, area light, projected texture,…

Surface shaders:
Defines how a surface reacts to incoming light: e.g. Phong, Blinn, cel shading, anisotropic,… Several shaders could be combined on a single surface to define multi-layered materials.

Filtering shaders:
Defines how texture pixels are used when a texture lookup is done: e.g. nearest neighbor, linear, cubic, Catmull-Rom, Lanczos,…

Rasterization shaders:
Defines how pixels are drawn to the frame buffer: e.g. eye-space fog, full-screen effects,…

These are just examples of how we could modularize shaders. By the way, this does not yet solve the problem of surfaces with different numbers of textures.

I am not suggesting at all that hardware should be modified to reflect this modularization. This can be handled in software by combining the different pieces to create hardware-oriented shaders. Instead of defining and compiling all possible shaders, OpenGL should assemble those little parts dynamically. Current hardware-oriented shaders should still be available, so that OpenGL remains open to types of applications other than 3D rendering.
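
As a rough illustration, here is a minimal GLSL sketch of how such pieces might be assembled (the function names lightContribution and surfaceShade are made up; nothing here is an existing mechanism): a light source part and a surface part are written as separate functions, and the generated main() simply chains them.

// Hypothetical "light source shader" piece: a simple point light.
vec3 lightContribution(vec3 position, vec3 normal)
{
    vec3 toLight = normalize(gl_LightSource[0].position.xyz - position);
    return gl_LightSource[0].diffuse.rgb * max(dot(normal, toLight), 0.0);
}

// Hypothetical "surface shader" piece: plain diffuse material.
vec3 surfaceShade(vec3 incomingLight)
{
    return gl_FrontMaterial.diffuse.rgb * incomingLight;
}

varying vec3 eyePosition;
varying vec3 eyeNormal;

// The main() that OpenGL (or a software layer above it) would generate
// by combining the selected pieces into one hardware-oriented shader.
void main()
{
    vec3 light = lightContribution(eyePosition, normalize(eyeNormal));
    gl_FragColor = vec4(surfaceShade(light), 1.0);
}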

Shader types for programming what is not yet programmable:

Unpack shaders:
Defines how the texture's memory representation is converted to color input values for shaders: e.g. handling chroma subsampling (for 4:2:2 or 4:2:0 YUV pictures)

Blending shaders:
Defines how pixels are combined with those already in the frame buffer: e.g. using alpha for opacity combined with color multiplication for a color filter

Varying interpolation shaders:
Defines how a varying variable is interpolated from one vertex to the next: e.g. linear, flat-shading-like,…
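
To make the blending shader idea concrete, here is a purely illustrative sketch in GLSL style; gl_SrcColor, gl_DstColor and gl_BlendResult are invented names for a stage that does not exist today, and the math is just one possible reading of the example above.

// Not valid GLSL -- an imagined "blending shader" stage.
void main()
{
    // Alpha is used for opacity, and the source color also acts as a
    // multiplicative filter on what is already in the framebuffer.
    vec3 filtered = gl_DstColor.rgb * gl_SrcColor.rgb;
    gl_BlendResult = vec4(mix(filtered, gl_SrcColor.rgb, gl_SrcColor.a), 1.0);
}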

Frame buffer pixel format

I don’t know if this is already possible (with framebuffer objects and multiple render buffers). What I suggest is a way to define how pixels are stored in the framebuffer. For example, I would like each pixel to have a color and a colorized alpha: RGBArAgAb. This could then be used in the blending shaders to simulate colorized glass with a reflection on it, for example. The color is the reflection (an environment map) and the alpha is the glass transmission (final color = destination * alpha + source, where each term is a vec3).
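
In GLSL terms, the blend described above would be roughly the following (reflectionColor, glassTransmission and pixelBehindTheGlass are placeholder names for the environment map lookup, the per-channel alpha and the existing framebuffer pixel):

vec3 source      = reflectionColor;       // reflection, e.g. an environment map lookup
vec3 sourceAlpha = glassTransmission;     // per-channel "colorized alpha"
vec3 destination = pixelBehindTheGlass;   // what is already in the framebuffer

// Colorized-glass blend: each channel is filtered by the transmission
// and the reflection is added on top.
vec3 finalColor = destination * sourceAlpha + source;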

Shading rate

Currently, fragment shaders run for every fragment that is drawn. In RenderMan, a shading rate can be defined to specify how often a surface shader should be run, and values are then interpolated between the computed samples. In the extreme case, values are computed only at the vertices, which gives Gouraud shading. This could be great for performance.
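
The per-vertex end of that spectrum can already be expressed today by doing the lighting in the vertex shader and letting the hardware interpolate the result; a minimal sketch:

// Vertex shader: lighting evaluated once per vertex (Gouraud style).
varying vec3 shadedColor;

void main()
{
    vec3 normal  = normalize(gl_NormalMatrix * gl_Normal);
    vec3 eyePos  = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 toLight = normalize(gl_LightSource[0].position.xyz - eyePos);

    float diffuse = max(dot(normal, toLight), 0.0);
    shadedColor = gl_FrontMaterial.diffuse.rgb * gl_LightSource[0].diffuse.rgb * diffuse;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: only uses the interpolated value.
varying vec3 shadedColor;

void main()
{
    gl_FragColor = vec4(shadedColor, 1.0);
}

A generalized shading rate would sit somewhere between this and full per-fragment shading.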

Time dimension

OpenGL has no knowledge of time. Would a time dimension be useful for computing effects such as motion blur in hardware?

Adaptive subdivision

It would be great to allow a surface to be subdivided adaptively at rendering time, depending on its size on the screen. This would make it possible to render real curved surfaces that appear smooth regardless of how close to the viewer they are shown. It would also allow real displacement mapping. Subdivision should be programmable with subdivision shaders.

Non-linear projection

How do you set up a fisheye projection? With a vertex shader? Vertex positions would be correct, but lines would still be straight instead of curved. With a cube map? It works, but rendering has to be done to a texture and sampling precision is lost. Should projection be done at the fragment level? Does adaptive subdivision solve our problem?
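
For reference, the vertex shader attempt might look like the sketch below (an angular fisheye; nearFar is a made-up uniform holding the near and far clip distances). It also shows the problem: only the vertices are bent, and the edges between them are still rasterized as straight lines.

// Angular (equidistant) fisheye applied per vertex -- illustrative only.
uniform vec2 nearFar;   // made-up uniform: near and far clip distances

void main()
{
    vec4 eye = gl_ModelViewMatrix * gl_Vertex;

    float dist  = length(eye.xyz);
    float theta = acos(-eye.z / dist);      // angle from the view axis
    float r     = theta / radians(90.0);    // maps a 180-degree field of view to [0, 1]

    vec2 dir    = normalize(eye.xy + vec2(1e-6, 0.0));
    float depth = (dist - nearFar.x) / (nearFar.y - nearFar.x);

    // Only the vertices get warped; long edges between them stay straight.
    gl_Position = vec4(dir * r, depth * 2.0 - 1.0, 1.0);
}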

The shader types you described can be done either easily or with some fiddling in GLSL, so there is no real need for all of that.
But I would like a function to read the current pixel in the framebuffer.
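
In the meantime, the usual workaround is to copy the framebuffer into a texture (for example with glCopyTexSubImage2D) and sample that copy at gl_FragCoord; a minimal sketch of the shader side, with made-up uniform names:

uniform sampler2D frameCopy;    // texture holding a copy of the framebuffer
uniform vec2 viewportSize;      // viewport size in pixels

void main()
{
    // gl_FragCoord.xy is in window pixels; divide to get texture coordinates.
    vec3 behind = texture2D(frameCopy, gl_FragCoord.xy / viewportSize).rgb;

    // Example use: darken whatever is already there.
    gl_FragColor = vec4(behind * 0.5, 1.0);
}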

Colorized alpha is something I like too. It's possible to do with a clever hack, but it's not yet supported in hardware.

Shading rate: this is not a problem, and definitely not an OpenGL issue.

Time dimension: no, it's not needed; the CPU keeps track of that.
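
For what it's worth, feeding time to a shader is just a uniform that the application updates every frame (e.g. with glUniform1f); a small sketch with a made-up uniform name:

// elapsedTime is a made-up uniform, updated once per frame by the application.
uniform float elapsedTime;

void main()
{
    vec4 v = gl_Vertex;
    // Simple time-based wobble along the normal.
    v.xyz += gl_Normal * 0.05 * sin(elapsedTime * 3.0 + v.x * 10.0);
    gl_Position = gl_ModelViewProjectionMatrix * v;
}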

Adaptive subdivision:
It should be possible to some extent when the new geometry shaders arrive, but today's hardware can push lots and lots of polygons, so it's not that badly needed.

Non-linear projection is a bit hard to do, but it can be done if you don't mind crappy FPS.

Originally posted by Hugues De Keyzer:
Shaders are great because they let us redefine what the fixed function pipeline used to do. But for me, the current way of programming them is not the most convenient. Different BRDF? Different shader. That is OK. But the same BRDF with a different number of textures, number of lights, light type, fog,… all need a different shader (or one big and slow shader). For that, the fixed function pipeline was more convenient.
For this you have static branching. Just specify a uniform bool and add an if-else statement to your shader; then you can use the same shader for both cases. It's as easy as setting a fixed-function state.
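
For example, a single fragment shader that can run with or without fog, toggled by a uniform (the fog math is simplified, and useFog / fogColor are just example names):

uniform bool useFog;     // set from the application, much like a state
uniform vec3 fogColor;

varying vec3 surfaceColor;
varying float eyeDistance;

void main()
{
    vec3 color = surfaceColor;

    if (useFog)
    {
        // Simple linear fog between 10 and 100 units.
        float f = clamp((eyeDistance - 10.0) / 90.0, 0.0, 1.0);
        color = mix(color, fogColor, f);
    }

    gl_FragColor = vec4(color, 1.0);
}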

DX10-class hardware will enable you to have custom interpolation.