I’m having a bit of trouble deciphering the glReadPixels documentation in figuring out what format/type pairs are guaranteed to be available on all systems.
In the “Notes” section, the documentation seems to imply that the only valid pairs are:
GL_RGBA / GL_UNSIGNED_BYTE
GL_RGBA_INTEGER / GL_INT
GL_RGBA_INTEGER / GL_UNSIGNED_INT
I am currently unable to read the GL_IMPLEMENTATION_* values on my device, as I am somehow receiving a GL_INVALID_OPERATION error from glGetIntegerv, but in any case, I am confused by the results I do see.
On my Android phone, GL_RED/GL_FLOAT works fine, as does GL_RGBA/GL_FLOAT: two additional pairs not mentioned in the Notes section as guaranteed to be valid. At the same time, the pairing GL_RED/GL_UNSIGNED_BYTE results in GL_INVALID_OPERATION, so I think it is safe to say that GL_RED and GL_FLOAT, even if they are the GL_IMPLEMENTATION_* values, do not simply pair with every other valid format or type.
In light of this, I am not quite sure how to interpret what I’m seeing. My two hypotheses are:

1. GL_RGBA/GL_FLOAT is a valid pairing on all platforms, and GL_RED/GL_FLOAT is the single GL_IMPLEMENTATION_* pairing that’s available.
2. Multiple implementation-specific pairings may exist depending on the device, but only one is “discoverable” by reading the GL_IMPLEMENTATION_* values.
Of course, it’s quite possible that neither of the above are true, and I’m completely misinterpreting things. I would like for 1. or something similar to be true, as it would allow me to safely read floats on all platforms, but I’m not confident in assuming that it’s the case. How should I be interpreting which pairs are valid?
If I get the glGetIntegerv call working, I’ll update this post with the values that I receive.
The values returned from these queries may depend upon the current read framebuffer. In order to query these values, there needs to be a valid (framebuffer-complete) framebuffer (either the default framebuffer or a framebuffer object) bound to GL_READ_FRAMEBUFFER; in the case of a FBO, the selected read buffer (as in glReadBuffer) must have a valid attachment. Otherwise, glGetIntegerv generates GL_INVALID_OPERATION.
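For instance, the queries have to come after the attachment is established, along these lines (just a sketch, not runnable on its own: it assumes a current ES 3.0 context and that `tex` is an already-allocated colour texture):

```c
/* Sketch: query GL_IMPLEMENTATION_COLOR_READ_FORMAT/TYPE only once the
 * read framebuffer is complete. Assumes an ES 3.0 context is current and
 * `tex` is an allocated colour texture (e.g. GL_R32F). */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);

if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    GLint fmt = 0, type = 0;
    /* These queries describe the framebuffer bound to GL_READ_FRAMEBUFFER,
     * so issuing them before the attachment exists is what produces
     * GL_INVALID_OPERATION. */
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &fmt);
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
}
```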
An implementation must support at least one format/type pair for any internal format and must describe that via the GL_IMPLEMENTATION_* queries. It’s free to support additional formats, but there’s no mechanism for querying that (other than executing a glReadPixels command and checking for errors).
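So probing an unadvertised pair comes down to something like the following (a sketch; it assumes a complete framebuffer is bound to GL_READ_FRAMEBUFFER and `scratch` is large enough for the requested read, and `pair_is_supported` is just an illustrative name):

```c
/* Sketch: there is no query for extra pairs, so probe by attempting the
 * read and checking whether GL_INVALID_OPERATION was generated. */
int pair_is_supported(GLenum format, GLenum type,
                      GLsizei w, GLsizei h, void *scratch)
{
    while (glGetError() != GL_NO_ERROR) { /* drain any stale errors first */ }
    glReadPixels(0, 0, w, h, format, type, scratch);
    return glGetError() == GL_NO_ERROR;
}
```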
The specification itself offers a bit more clarity (§4.3.1): it implies that you can read pixels using some format/type pair which avoids loss of information. Where an internal format has multiple entries in the table, the implementation is only required to support one of them. E.g. GL_RGB16F appears for both GL_RGB/GL_HALF_FLOAT and GL_RGB/GL_FLOAT, so if you have a framebuffer with a GL_RGB16F colour attachment, querying the implementation-dependent format/type should return one of those two pairings. The implementation may support either, or it may only support the reported combination.
Sorry for the delayed reply. I wanted to make sure I tested things relatively thoroughly before writing a response, but once I had, some other things came up that distracted me.
Ah, that would explain it. I was calling it before my call to glFramebufferTexture2D(), thinking there was a single “static” value for those two queries. Did not realize that the value they return depends on the current colour attachment. I can see now that I missed reading the line from the documentation that says, “The implementation chosen format may also vary depending on the format of the currently bound rendering surface.” Sorry about that.
So I can be guaranteed that, for whichever internalFormat I pass into glTexImage2D (other than depth/stencil ones), GL_IMPLEMENTATION_* will hold info on a valid way to read it? I.e., it’s not possible to create a valid framebuffer that cannot be read in some way via glReadPixels()?
Or, specifically: I can have a GL_R32F attachment and be assured that, on all ES 3.0-supporting smartphones, the call glReadPixels(…, GL_RED, GL_FLOAT, …) will be able to read it?
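Concretely, what I’m hoping is portable on every ES 3.0 device is something like this (just a sketch; `width`/`height` and a complete GL_R32F read framebuffer are assumed from context):

```c
/* Sketch: read back a width x height GL_R32F colour attachment.
 * Assumes the framebuffer is complete and bound to GL_READ_FRAMEBUFFER.
 * GL_PACK_ALIGNMENT defaults to 4, which float rows already satisfy. */
float *pixels = malloc((size_t)width * (size_t)height * sizeof *pixels);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, pixels);
if (glGetError() != GL_NO_ERROR) {
    /* the pairing was not accepted on this implementation */
}
```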
Thank you very much for your response! It cleared up a lot of my confusion.
That’s my interpretation of the spec. Table 3.2 only has one entry with R32F in the “Internal Format” column, which is:
Format | Type  | Bytes | Internal Format
RED    | FLOAT | 4     | R32F, R16F
I interpret “from among those defined in table 3.2” to mean that it has to choose an entry with the correct internal format. Most internal formats (including R32F) have only one matching entry. The main exceptions are: normalised formats with fewer than 8 bits per component, which can be returned using either the matching packed type or bytes; R11F_G11F_B10F, which can be returned using the matching packed type, half-float, or float; and *_16F formats, which can use half-float or float. AFAICT, every entry’s external format/type pair has at least as much precision as the internal format, meaning that extraction should be lossless.
Also, the ES 3.0 specification says there are two possible combinations: the first can be inferred, the second must be queried. But the first case doesn’t mention floating-point formats. The ES 3.2 specification adds a sentence covering this case.
I suspect that the lack of such language in the 3.0 and 3.1 specs was an oversight rather than an intentional decision to leave it unspecified. The 3.2 spec doesn’t mention this change in the changelog.