Here are some OpenGL improvement suggestions I have been thinking about. Many come from RenderMan. I know that OpenGL is not designed to be a real-time RenderMan, but some of these ideas could drive OpenGL development towards very interesting concepts. In any case, many people are already trying to use OpenGL as a kind of real-time RenderMan.
New shader types
Shaders are great because they let us redefine what the fixed function pipeline previously did. But to me, the current way of programming them is not the most convenient. A different BRDF needs a different shader: that is fine. But the same BRDF with a different number of textures, a different number of lights, a different light type, fog on or off,… each combination needs a different shader (or one big and slow shader). In that respect, the fixed function pipeline was more convenient.
I suggest creating new types of shaders to subdivide the processing and allow a more modular shader design. Some examples of new types:
Light source shaders:
Defines how a light illuminates surfaces: e.g. directional light, spot light, point light, area light, projected texture,…
Surface shaders:
Defines how a surface reacts to incoming light: e.g. Phong, Blinn, cel shading, anisotropic,… Several shaders could be combined on a single surface to define multi-layered materials.
Texture filtering shaders:
Defines how texels are used when a texture lookup is done: e.g. nearest neighbor, linear, cubic, Catmull-Rom, Lanczos,…
Frame buffer shaders:
Defines how pixels are drawn to the frame buffer: e.g. eye-space fog, full-screen effects,…
These are just examples of how we could modularize shaders. Note that this still does not solve the problem of surfaces with different numbers of textures.
I am not suggesting at all that hardware should be modified to reflect this modularization. It can be handled in software by combining the different pieces into hardware-oriented shaders. Instead of the application defining and compiling all possible shaders, OpenGL would assemble these little parts dynamically. The current hardware-oriented shaders should still be available, so that OpenGL remains open to applications other than 3D rendering.
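A rough sketch of this driver-side assembly, in Python rather than GLSL and with all names hypothetical: small light-source and surface pieces are composed into one monolithic shading function, so the application never writes every combination by hand.

```python
# Sketch of driver-side shader assembly (hypothetical names throughout).
# Each "piece" is a small function; the driver composes them into one
# monolithic shader, so the application never writes the combinations.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def point_light(light_pos, light_color):
    """Light source shader: returns (direction-to-light, color) at a point."""
    def shade(surface_pos):
        direction = tuple(l - s for l, s in zip(light_pos, surface_pos))
        return normalize(direction), light_color
    return shade

def lambert_surface(albedo):
    """Surface shader: diffuse response to one incoming light."""
    def shade(normal, light_dir, light_color):
        n_dot_l = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
        return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))
    return shade

def assemble(lights, surface):
    """Combine any number of light shaders with one surface shader."""
    def shader(surface_pos, normal):
        color = (0.0, 0.0, 0.0)
        for light in lights:
            light_dir, light_color = light(surface_pos)
            contrib = surface(normal, light_dir, light_color)
            color = tuple(c + x for c, x in zip(color, contrib))
        return color
    return shader

# One red Lambertian surface lit by one white point light straight above.
shader = assemble([point_light((0.0, 0.0, 1.0), (1.0, 1.0, 1.0))],
                  lambert_surface((1.0, 0.0, 0.0)))
print(shader((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # (1.0, 0.0, 0.0)
```

Adding a second light or swapping the surface piece only changes the arguments to `assemble`, which is exactly the kind of combination work the driver could do behind the scenes.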
Shader types for programming what is not yet programmable:
Texture format shaders:
Defines how the texture memory representation is converted to color input values for shaders: e.g. chroma subsampling (for 4:2:2 or 4:2:0 YUV pictures)
Blending shaders:
Defines how fragments are combined with the pixels already in the frame buffer: e.g. using alpha for opacity combined with a color multiplication for a color filter
Varying interpolation shaders:
Defines how a varying variable is interpolated from one vertex to the next: e.g. linear, flat-shading-like,…
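The texture format case can be sketched in Python: the 4:2:0 plane layout and the BT.601 conversion constants are the standard ones, but the function and its signature are hypothetical, just to show what such a shader would compute per fetch.

```python
# Sketch of a "texture format shader": converts a texel fetch on planar
# 4:2:0 YUV data into an RGB value. Y is stored at full resolution;
# U and V are stored at half resolution in both directions.

def fetch_rgb(y_plane, u_plane, v_plane, x, y):
    luma = y_plane[y][x]
    cb = u_plane[y // 2][x // 2]   # chroma planes are subsampled 2x2
    cr = v_plane[y // 2][x // 2]
    # BT.601 YCbCr -> RGB conversion, chroma centered on 128
    r = luma + 1.402 * (cr - 128)
    g = luma - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = luma + 1.772 * (cb - 128)
    return (r, g, b)

# A 2x2 mid-gray image: one shared chroma sample for four luma samples.
y_plane = [[128, 128], [128, 128]]
u_plane = [[128]]
v_plane = [[128]]
print(fetch_rgb(y_plane, u_plane, v_plane, 1, 1))  # (128.0, 128.0, 128.0)
```

Today this conversion has to be done either on the CPU or by hand inside every fragment shader that samples such a texture; a dedicated shader stage would factor it out.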
Frame buffer pixel format
I don’t know if this is already possible (with framebuffer objects and multiple render buffers). What I suggest is a way to define how pixels are stored in the frame buffer. For example, I would like each pixel to have a color and a colorized alpha: RGBArAgAb. This could then be used in the blending shaders to simulate, for example, colorized glass with a reflection on it: the color is the reflection (an environment map) and the alpha is the glass transmission (final color = destination * alpha + source, where each is a vec3).
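A minimal sketch of that blend rule in Python (names hypothetical), applying the per-channel alpha exactly as in the formula above:

```python
# Colorized-glass blend: each channel has its own alpha (RGBArAgAb).
# final = destination * alpha + source, all computed per channel.

def blend_colorized(dst, src, alpha):
    return tuple(d * a + s for d, s, a in zip(dst, src, alpha))

dst = (0.5, 0.5, 0.5)      # scene already in the frame buffer
src = (0.2, 0.2, 0.2)      # reflection (environment map) on the glass
alpha = (0.0, 1.0, 0.0)    # green glass: only green is transmitted

result = blend_colorized(dst, src, alpha)
# the background shows through in green only; the reflection is added on top
```

A single scalar alpha cannot express this, because the transmission differs per channel while the reflection is added uniformly.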
Shading rate
Currently, fragment shaders are run for each fragment that is drawn. In RenderMan, a shading rate can be defined to specify how often a surface shader should be run; values are then interpolated between the computed samples. When the shading rate is 0, values are computed only at the vertices (Gouraud shading). Such a mechanism could greatly improve performance.
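A minimal Python sketch of the idea (names hypothetical): run the expensive shading function only at every few samples of a span and linearly interpolate the fragments in between.

```python
# Sketch of a shading rate: evaluate an expensive shading function only
# at every `rate`-th sample of a span, then linearly interpolate between
# the computed samples for the fragments in between.

def shade_span(shade, width, rate):
    anchors = list(range(0, width, rate))
    if anchors[-1] != width - 1:
        anchors.append(width - 1)          # always shade the last fragment
    values = [shade(x) for x in anchors]
    out = [0.0] * width
    for i in range(len(anchors) - 1):
        x0, x1 = anchors[i], anchors[i + 1]
        v0, v1 = values[i], values[i + 1]
        for x in range(x0, x1 + 1):
            t = (x - x0) / (x1 - x0)
            out[x] = v0 + t * (v1 - v0)    # cheap lerp instead of shading
    return out

calls = []
def expensive(x):                          # stand-in for a costly shader
    calls.append(x)
    return x * x

result = shade_span(expensive, 9, 4)
print(len(calls), result[2])               # 3 8.0: 3 shader runs, 9 fragments
```

Here 9 fragments cost only 3 shader evaluations; the interpolated value at x = 2 is 8.0 instead of the exact 4.0, which is precisely the quality/performance trade-off a shading rate exposes.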
Time
OpenGL has no knowledge of time. Would a time dimension be useful to compute effects such as motion blur in hardware?
Adaptive subdivision
It would be great to allow a surface to be subdivided adaptively at render time depending on its size on the screen. This would make it possible to render true curved surfaces that appear smooth no matter how close to the viewer they are shown. It would also allow real displacement mapping. Subdivision should be programmable with subdivision shaders.
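A sketch of the view-dependent criterion in Python (the names and the one-pixel threshold are assumptions): keep splitting a parametric curve until every chord projects to at most one pixel, so the same geometry automatically gets more segments when it is close to the viewer.

```python
import math

# Sketch of view-dependent adaptive subdivision: split a curve until each
# chord is at most one pixel long on screen, so nearby geometry gets more
# segments than distant geometry automatically.

FOCAL = 100.0  # hypothetical focal length, in pixels

def project(p):
    x, y, z = p
    return (FOCAL * x / z, FOCAL * y / z)

def screen_dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def subdivide(curve, t0, t1, max_px=1.0):
    """Return parameter intervals whose chords project to <= max_px."""
    if screen_dist(project(curve(t0)), project(curve(t1))) <= max_px:
        return [(t0, t1)]
    tm = 0.5 * (t0 + t1)
    return subdivide(curve, t0, tm, max_px) + subdivide(curve, tm, t1, max_px)

def half_circle(depth):
    # semicircle of radius 1 at a given distance in front of the camera
    return lambda t: (math.cos(math.pi * t), math.sin(math.pi * t), depth)

near = subdivide(half_circle(10.0), 0.0, 1.0)
far = subdivide(half_circle(100.0), 0.0, 1.0)
print(len(near), len(far))  # the near curve gets many more segments
```

A subdivision shader would replace the fixed midpoint-split rule here, letting the application decide where to put the new vertices (e.g. for displacement mapping).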
Fisheye projection
How do you set up a fisheye projection? With a vertex shader? Vertex positions would be correct, but lines would still be straight instead of curved. With a cube map? It works, but rendering has to be done to a texture first and sampling precision is lost. Should the projection be done at the fragment level? Would adaptive subdivision solve the problem?
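For reference, the mapping itself is simple: an equidistant fisheye places a view direction at screen radius r = f·θ, where θ is its angle from the optical axis. The difficulty is that this is not a projective transform, so straight edges must become curves, which a per-vertex transform alone cannot produce. A Python sketch (camera looking down -z, names hypothetical):

```python
import math

# Equidistant fisheye: a view-space direction maps to screen radius
# r = f * theta, where theta is the angle from the optical axis (-z here).
# The problem for OpenGL: this is not a projective map, so straight edges
# should render as curves, which per-vertex transformation cannot do.

def fisheye(direction, focal=1.0):
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(-z / length)      # angle from the view axis
    radial = math.hypot(x, y)
    if radial == 0.0:
        return (0.0, 0.0)               # looking straight ahead: image center
    r = focal * theta
    return (r * x / radial, r * y / radial)

print(fisheye((0.0, 0.0, -1.0)))        # (0.0, 0.0): image center
print(fisheye((1.0, 0.0, -1.0))[0])     # ~ pi/4: 45 degrees off axis
```

Adaptive subdivision would indeed help here: with enough vertices per edge, applying this mapping per vertex approximates the curved lines arbitrarily well.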