I'm trying to improve my OVSM shader. Instead of rendering the scene once for each cubemap face, I want to render to three faces at once, so I pass three view matrices to the vertex shader (vs), which passes three light vectors on to the fragment shader (fs).

vs:

uniform float u_LightRadius;
uniform mat4  u_ProjectionMatrix;
uniform mat4  u_ViewMatrices[3];
varying vec3  v_LightVectors[3];

void main()
{
    // transform vertex - required, so just use one of the matrices
    gl_Position = u_ProjectionMatrix * ( u_ViewMatrices[0] * gl_Vertex );

    // pass on light vectors
    // (note: "unsigned int" is not a valid loop type in legacy GLSL - use int)
    for( int i = 0; i < 3; i++ )
    {
        v_LightVectors[i] = u_LightRadius * ( u_ViewMatrices[i] * gl_Vertex ).xyz;
    }
}

However, only two different views are generated, each repeated three times - namely the views whose matrix I use in the gl_Position line. But shouldn't the computation of the light vectors be independent of gl_Position?

The answer is simple: what you're trying to do is impossible.
There is only one rasterization stage in the pipeline, and it cannot produce three completely different images at once. Only gl_Position determines where a polygon's fragments land; the varyings are merely interpolated across that one footprint.
These are three completely different views, so the same polygon takes a different shape and position in each of them: in some it will be occluded, in some culled, in some outside the frustum.
Even using a geometry shader you would produce three copies of the polygon, which are rasterized one after another, not all at once.
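For completeness, this is what the geometry-shader route looks like: each triangle is replicated once per face and routed to a layer of a layered FBO via gl_Layer. The hardware still rasterizes each copy separately, as noted above. A minimal sketch for three faces (GLSL 1.50; it reuses the uniform names from the question, and assumes the vertex shader passes the untransformed vertex through):

#version 150

layout( triangles ) in;
layout( triangle_strip, max_vertices = 9 ) out;

uniform mat4 u_ProjectionMatrix;
uniform mat4 u_ViewMatrices[3];

void main()
{
    // emit one copy of the incoming triangle per cubemap face / layer
    for( int face = 0; face < 3; face++ )
    {
        gl_Layer = face;    // route this copy to layer 'face' of the FBO
        for( int v = 0; v < 3; v++ )
        {
            // gl_in[v].gl_Position must be the untransformed vertex here
            gl_Position = u_ProjectionMatrix
                        * ( u_ViewMatrices[face] * gl_in[v].gl_Position );
            EmitVertex();
        }
        EndPrimitive();
    }
}

This saves the per-face draw calls and scene traversal, but not the per-face rasterization work.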

I have another idea to render 3 faces at once. Split your cubemaps into 2 strips, each containing 3 faces:

Strip 1: X+, Z-, X-

Strip 2: Y+, Z+, Y-
These are 2 strips in the shape of the letter "U" - combined they give a full cubemap. In this case you don't use MRTs at all - you just render 2 "panoramic" 270-degree * 90-degree views.

Note that this approach still requires breaking polygons that would appear on edges between cubemap faces if you’re using the standard approach of 6 independent frustums.

A solution for this is simple and can be implemented in the vertex shader. You just need to assume that this is no longer a cubemap but a sphere - there are no 6 flat near planes; each of the 6 faces represents 1/6 of a sphere's surface.
For example, if you look straight at a wall, normally all pixels would have equal depth. In the spherical case, the pixel at the center is the nearest one, and toward the corners the wall appears farther away.
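A rough vertex-shader sketch of that idea for one 270x90 strip - uniform names are my own, and the angular mapping shown is just one possible choice, not a finished projection:

// vertex shader for one "U" strip - spherical instead of planar projection
uniform mat4  u_ViewMatrix;     // central view of the strip
uniform float u_LightRadius;    // far range of the light
varying float v_Distance;

void main()
{
    vec3  p    = ( u_ViewMatrix * gl_Vertex ).xyz;
    float dist = length( p );
    vec3  dir  = p / dist;

    // angular mapping instead of a planar frustum:
    // azimuth covers 270 degrees (+-135), elevation covers 90 degrees (+-45)
    float azimuth   = atan( dir.x, -dir.z );
    float elevation = asin( dir.y );

    // depth is distance from the light, not planar z - the "sphere" depth
    v_Distance  = dist / u_LightRadius;
    gl_Position = vec4( azimuth   / radians( 135.0 ),
                        elevation / radians( 45.0 ),
                        2.0 * v_Distance - 1.0,
                        1.0 );
}

Because positions are warped per-vertex, straight edges become curves - which is exactly why the geometry must be finely tessellated, as discussed below.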

There is still a problem with polygons that wrap around from one side to the other. In this case you would have to split them in two, using a geometry shader or the CPU. For example, if you look away from the wall, then on strip #1 you would still see it on the X+ and X- faces, but not on the Z- face.

And of course nothing you render can have large polygons - your geometry must be tessellated finely enough to bend itself to the spherical view.

An even simpler solution is to use "dual-paraboloid" shadow maps, or something like that. Two half-spheres are simpler to implement than two U-shaped views, but I think it requires even more tessellated geometry.
If I were you, I'd go for the "dual-paraboloid" ones first, and if you insist you can still go for cubemaps with 3 faces rendered at once.
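For reference, the dual-paraboloid projection is only a few lines in the vertex shader. A sketch for one hemisphere (uniform and varying names are my own):

// vertex shader for one paraboloid (one hemisphere of the light)
uniform mat4  u_ViewMatrix;    // looks down the hemisphere's +z axis
uniform float u_NearPlane;
uniform float u_FarPlane;
varying float v_ClipZ;         // lets the fragment shader kill back-hemisphere fragments

void main()
{
    vec3  p    = ( u_ViewMatrix * gl_Vertex ).xyz;
    float dist = length( p );
    vec3  dir  = p / dist;

    // project the direction onto the paraboloid: uv = xy / (1 + z)
    vec2 uv = dir.xy / ( 1.0 + dir.z );

    v_ClipZ = dir.z;   // < 0 means the vertex belongs to the other hemisphere
    gl_Position = vec4( uv,
                        ( dist - u_NearPlane )
                            / ( u_FarPlane - u_NearPlane ) * 2.0 - 1.0,
                        1.0 );
}

The same tessellation caveat applies: the mapping is non-linear, so long edges must be subdivided or they will visibly cut across the paraboloid.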