Optimal XrFovf -> GLSL projection transform?

Do any of you know of code for converting the XrFovf vec4 into a projection transformation in GLSL?

EDITED: At Optimal XrFovf -> GLSL projection transform? - #5 by Mick_P below there’s shader code to show what I mean.

More thoughts: The examples I’ve seen do nz*tanf(x) on the CPU side (is there an argument for passing the angles themselves into GLSL?). Inside GLSL I can see building a mat4 and just multiplying the vector (edited: this is what my code does so far), but AFAIK GLSL compilers don’t optimize the way D3D’s does. I could sit down, work out the transform longhand, and eliminate the zero cells, but I suspect the result would not be very parallel, so the next step would be combining steps by arcane vec4 manipulations. I figure either someone has already done this or it’s not seen as a bottleneck.

I do have a bonus question: how does passing the per-view parameters as instanced vertex attributes compare to updating shader registers? Using attributes seems intuitive, but I don’t have even second-hand knowledge of how slow register updates are by comparison. I just assume they’re slow.

Sorry to ask, but search engines seem unable to produce relevant results for programming topics any longer.

You can look to see how hello_xr does it: it has code for OpenGL, OpenGL ES, Vulkan, D3D11, D3D12…

I’m sorry to be a baby, I’m having trouble finding a link by myself. Would this (OpenXR-SDK-Source/vert.glsl at main · KhronosGroup/OpenXR-SDK-Source · GitHub) be it? What I’m doing is passing the 4 fov values (lightly processed) as a vec4 to the shader as an instanced vertex attribute.

I figure that’s better than trying to do an indexed lookup; plus, for an arbitrary (quad?) number of views, storing multiple mat4 objects in constant buffers seems impractical.

Edited: My app needs to render dynamic geometry side-by-side and it would be too much implementation logic to store the geometry in dedicated buffers long enough to render the whole scene twice (the buffers revolve round-robin style.)

Ah, they are actually turned into a projection matrix before being stuffed into a buffer in hello_xr: maybe clearer in the OpenGL shader here OpenXR-SDK-Source/graphicsplugin_opengl.cpp at main · KhronosGroup/OpenXR-SDK-Source · GitHub

The actual conversion is done in the utility function XrMatrix4x4f_CreateProjectionFov and friends, referenced from this code setting up the MVP matrix: OpenXR-SDK-Source/graphicsplugin_opengl.cpp at main · KhronosGroup/OpenXR-SDK-Source · GitHub The implementation of those functions is in this header, which is C, not glsl, but should not be too hard to convert: OpenXR-SDK-Source/xr_linear.h at main · KhronosGroup/OpenXR-SDK-Source · GitHub

Yes, maybe this will help explain what I mean. This is the code I currently have:

WARNING: This is GLSL code, but written to be compatible with HLSL. I might rewrite it, but please think of it as pseudo-code: float4 = vec4, and so on.

	float4 openxr_fov(float4 v)
	{
		//source: XMMatrixPerspectiveOffCenterLH
		//Xr_fov holds the four fov tangents: (left, right, top, bottom)

		//rh/rw might be equal to Xr_vpR.xy?
		float4 f = Xr_fov;
		float rw = 1.0/(f.y-f.x); //Xr_vpR.x?
		float rh = 1.0/(f.z-f.w); //-Xr_vpR.y?
		float nz = Xr_vpS.x; //near plane
		float fz = Xr_vpS.y; //far plane
		float range = fz/(fz-nz);

		//TODO: hand optimize this once it works?
		//NOTE: GLSL doesn't optimize anything
		//transpose so the rows below read in row-major order
		return mul(transpose(float4x4(
			float4(2.0*rw, 0.0, -(f.x+f.y)*rw, 0.0), //left+right
			float4(0.0, 2.0*rh, -(f.z+f.w)*rh, 0.0), //top+bottom
			float4(0.0, 0.0, range, -range*nz),
			float4(0.0, 0.0, 1.0, 0.0))), v);
	}

I don’t know whether glCompileShader will optimize out that “transpose” or not; it’s just what I had to do to get it to work (I could transpose it by hand). But what I’m asking is: can this be done more efficiently without even using a mat4? And does anyone in the OpenXR or prior VR communities have stock code for doing so? (Of course I could develop it myself up to a point, but it seemed better to ask, and if so this discussion would be a good resource for others, one that might be indexed by www-spiders.)
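For what it’s worth, the mat4 can be avoided entirely: since the matrix is almost all zeros, the clip position collapses to two multiply-adds on swizzles. In GLSL that would be `clip = v.xyzz*S + v.zzwz*C;` with two per-view vec4 constants. Here is a C sketch of that idea, checked against the longhand off-center formula; the names S and C and the packing are my own, under the same LH 0..1 depth convention as the XMMatrixPerspectiveOffCenterLH-style snippet above:

```c
typedef struct { float x, y, z, w; } vec4;

/* Per-view constants, computed once on the CPU from the fov tangents
 * l, r, t, b and near/far planes nz, fz, with range = fz/(fz-nz):
 *   S = ( 2/(r-l),       2/(t-b),       range,     1 )
 *   C = ( -(l+r)/(r-l),  -(t+b)/(t-b),  -range*nz, 0 )
 * Then: clip = v.xyzz*S + v.zzwz*C (two MADs, no matrix). */
static vec4 project_mad(vec4 v, vec4 S, vec4 C)
{
	/* GLSL equivalent: return v.xyzz*S + v.zzwz*C; */
	vec4 clip;
	clip.x = v.x * S.x + v.z * C.x;
	clip.y = v.y * S.y + v.z * C.y;
	clip.z = v.z * S.z + v.w * C.z;
	clip.w = v.z * S.w + v.z * C.w; /* = v.z: w keeps the view-space depth */
	return clip;
}
```

Whether two vec4 MADs actually beat a mat4 multiply on a given GPU is another question, but it does halve the per-view data to upload (two vec4s instead of a mat4), which fits the instanced-attribute scheme nicely.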

EDITED: Note, in HLSL I’d probably trust the compiler to aggressively optimize this, but I don’t think GLSL takes that view. I just thought this might be an interesting question in the abstract. Perhaps it’s not.