Is it possible to attach a texture array of arbitrary depth to a color buffer attachment point? If so, how does one write to it in a fragment shader? For instance, since the “n” in gl_FragData[n] refers to the attachment point, how would I refer to a specific layer of a texture array attached to a particular attachment point? That is, I’m looking for something used the way gl_Layer is used in geometry shaders, but for selecting a texture array layer from within a fragment shader. If this isn’t possible, is it true that the best one can do in a fragment shader is write up to four values to each of the textures attached to a framebuffer? In that case, if the maximum number of color buffer attachment points is, say, eight, then the maximum number of values one could write out in a fragment shader would be 32.
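For comparison, this is roughly how gl_Layer works in a geometry shader when a whole texture array is attached with glFramebufferTexture (a minimal sketch; the uniform name is illustrative, and note the layer is chosen per-primitive, not per-fragment):

```glsl
// Sketch: a pass-through geometry shader that routes each triangle to a
// chosen layer of the attached texture array via gl_Layer (GLSL 1.50+).
#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform int targetLayer; // illustrative: the destination array layer

void main()
{
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        gl_Layer = targetLayer; // set per-primitive; no per-fragment analogue
        EmitVertex();
    }
    EndPrimitive();
}
```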
Specifically, I’m rendering in a spectral domain, preferably with each fragment represented by an arbitrary number of spectral samples that are calculated in a fragment shader, after which a different fragment shader converts the samples to RGB values in a texture applied to a full-screen quad. Basically, I’d like to be able to output an arbitrary number of values in a fragment shader.
So, when rendering to multiple targets, one is limited to a single 2D texture per color buffer attachment point? If that’s the case, then a fragment can’t output more than 4N color values, where N is the number of available attachment points, right?
If you choose the GL_RGBA32F format for color and GL_DEPTH_COMPONENT32F for depth, you’ll have (4M + 1) float values output per fragment, where M is the maximum number of MRT attachments. If 32-bit float precision is more than you need, you can pack several values into each channel.
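As a sketch of what those 4M color outputs look like (assuming M = 2 GL_RGBA32F attachments and precomputed spectral samples passed in a varying array; the names are illustrative, not from a real implementation):

```glsl
// Sketch: fragment shader writing 4 spectral samples to each of two
// GL_RGBA32F color attachments (8 float outputs per fragment); depth is
// written implicitly as the (4M + 1)th value.
#version 120
#extension GL_ARB_draw_buffers : enable

varying float samples[8]; // illustrative: spectral samples from earlier stages

void main()
{
    gl_FragData[0] = vec4(samples[0], samples[1], samples[2], samples[3]);
    gl_FragData[1] = vec4(samples[4], samples[5], samples[6], samples[7]);
}
```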
Also, since you’re doing non-graphics calculations, consider using OpenCL (or another GPGPU technology): it may give you more flexibility in the output.
Thanks for the info and advice. Being a GLSL and OpenGL newbie, I didn’t know that the current OpenGL model was so limited. Now that I know a fragment shader can’t output more than 4M + 1 values, I can quit banging my head against the wall looking for a way to output more than that. My application has already been implemented in CUDA, but I’m now trying to port it to GLSL, if possible. Unfortunately, I do need 32 bits of precision.