OpenGL ES 3.0: writing to multiple buffers

One section of the OpenGL ES 3.0 spec is not completely clear to me.

https://www.khronos.org/registry/OpenGL/specs/es/3.0/es_spec_3.0.pdf, page 185:

If an OpenGL ES Shading Language 1.00 fragment shader writes to gl_FragColor or gl_FragData, DrawBuffers specifies the draw buffer, if any, into which the single fragment color defined by gl_FragColor or gl_FragData[0] is written. If an OpenGL ES Shading Language 3.00 fragment shader writes a user-defined varying out variable, DrawBuffers specifies a set of draw buffers into which each of the multiple output colors defined by these variables are separately written.

I understand this the following way:

  1. If I use OpenGL ES 3.0 and write shaders using GLSL 1.0, then the only way I can write to two buffers at once (COLOR0 and COLOR1) is to manually specify what gets written to gl_FragData[0] and gl_FragData[1] in my fragment shader. If I then want to go back to writing only to COLOR0, I must switch glPrograms.
  2. If, on the other hand, I use OpenGL ES 3.0 and write my shaders using GLSL 3.0, then I can simply define my outputs using user-defined out variables, and dynamically switch writing to COLOR1 on and off with calls to DrawBuffers(), with no need to swap glPrograms.
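To make case 2 concrete, here is a minimal sketch (variable names are illustrative, not from any spec) of a GLSL ES 3.00 fragment shader declaring two user-defined outputs, whose layout locations map onto the draw buffers set with glDrawBuffers():

```glsl
#version 300 es
precision mediump float;

// Two user-defined fragment outputs; location N maps to
// the Nth entry passed to glDrawBuffers().
layout(location = 0) out vec4 outColor0;
layout(location = 1) out vec4 outColor1;

void main() {
    outColor0 = vec4(1.0, 0.0, 0.0, 1.0);
    outColor1 = vec4(0.0, 1.0, 0.0, 1.0);
}
```

With that shader bound, the second output could then be enabled or discarded purely through draw-buffer state, without swapping programs:

```c
/* Write to both COLOR0 and COLOR1: */
const GLenum both[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, both);

/* Later, discard the second output without changing programs: */
const GLenum first_only[] = { GL_COLOR_ATTACHMENT0, GL_NONE };
glDrawBuffers(2, first_only);
```

Note that in ES 3.0, glDrawBuffers requires entry i to be either GL_NONE or GL_COLOR_ATTACHMENTi, so GL_NONE is the mechanism for switching an output off rather than reordering attachments.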

Is the above correct?

A GLSL ES 1.0 program cannot write to multiple colour buffers.

This isn’t a limitation of GLSL ES 1.0 itself, but of the various OpenGL ES versions which might use such a program.

While the gl_FragData array was included in the GLSL ES 1.0 specification:

  1. OpenGL ES 2.x doesn’t have glDrawBuffers() (or any framebuffer colour attachment other than GL_COLOR_ATTACHMENT0), so there wasn’t really much point in having gl_FragData. Nothing in the ES 2.x specification says what would happen to a value written to gl_FragData[1].

  2. The OpenGL ES 3.x specification basically says that GLSL ES 1.0 shaders are limited to a single colour output (gl_FragColor or gl_FragData[0]) and any other elements of gl_FragData are ignored. This provides consistency with such shaders being run under ES 2.x.
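For reference, this is the single-output form a GLSL ES 1.00 fragment shader is limited to; under ES 3.x, writes to gl_FragData[1] and above are simply ignored:

```glsl
#version 100
precision mediump float;

void main() {
    // The one colour output a GLSL ES 1.00 shader can produce.
    // Writing gl_FragColor is equivalent to writing gl_FragData[0].
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```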

Related to MRTs on OpenGL ES…

I don’t know what GPU(s) you’re targeting, but since you’re developing for GL-ES, it’s likely that you’re targeting tile-based GPUs. If so, you should consider whether “pixel local storage” and “shader framebuffer fetch” would be useful to you (you may already have done so). They can save a lot of bandwidth (and memory) when using MRTs on a mobile GPU. A web search for those terms will turn up several good tutorials.
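As a rough sketch of what framebuffer fetch looks like: if the driver exposes GL_EXT_shader_framebuffer_fetch, a GLSL ES 1.00 shader can read the framebuffer’s current colour on-chip via the gl_LastFragData built-in, avoiding a round trip to memory on tile-based GPUs. (The blend factor below is just an illustrative value.)

```glsl
#version 100
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

void main() {
    // gl_LastFragData[0] is the colour currently in the framebuffer
    // at this fragment's position, read from tile memory.
    gl_FragColor = mix(gl_LastFragData[0], vec4(1.0, 0.0, 0.0, 1.0), 0.5);
}
```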

You can also go straight to the 3 relevant extension specs on the OpenGL ES Registry. It’s probably worth checking whether your driver supports these extensions. If not, ask your GPU vendor(s) how you can be sure to get the bandwidth and memory savings when using MRTs on their GPUs in the absence of those extensions.
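Checking for such an extension at runtime is straightforward in an ES 3.0 context, which lets you enumerate extensions individually with glGetStringi:

```c
#include <string.h>

/* ES 3.0+: enumerate extensions one by one. */
GLint n = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &n);
for (GLint i = 0; i < n; ++i) {
    const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
    if (strcmp(ext, "GL_EXT_shader_framebuffer_fetch") == 0) {
        /* Extension is available; safe to #extension-require it
           in shader source. */
    }
}
```

(On ES 2.x you would instead search the single space-separated string returned by glGetString(GL_EXTENSIONS).)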