unexpected GL_INVALID_OPERATION on glGetBooleanv (while using QtQuick)


I’m using OpenGL 2.1 in combination with QtQuick (Qt 5.4) on a Mac (OS X 10.10.3, ATI Radeon HD 4670, Xcode 6.2). I’ve created a custom QtQuick control that manages a QSGSimpleTextureNode and a QSGTexture (the latter wraps a GL texture name I provide; it is not created/managed by Qt). In the control’s updatePaintNode method I redraw the texture so Qt can show it. Because my code and QtQuick share the same OpenGL context, I have to save and restore the state whenever control passes from Qt to my code or back. On Windows all of this works fine. When I run the same code on the Mac, I get weird errors at a very specific point in the code that saves the current OpenGL state:

void Ogl2TextureState::SaveTextureUnits()
{
  for (size_t unit = 0; unit < m_textureUnits.size(); ++unit)
  {
    glActiveTexture(GLenum(GL_TEXTURE0 + unit));

    for (size_t i = 0; i < TextureIds.size(); ++i)
    {
      GLboolean isEnabled = GL_FALSE;
      glGetBooleanv(TextureIds[i].name, &isEnabled); // <-- invalid operation when (unit == 8) and (i == 0)
      m_textureUnits[unit][i].enabled = (isEnabled == GL_TRUE);

      GLint boundTexture = 0;
      glGetIntegerv(TextureIds[i].bindingName, &boundTexture);
      m_textureUnits[unit][i].boundTexture = GLuint(boundTexture);
    }
  }
}

Where TextureIds is defined like this:

  struct TextureId
  {
    GLenum name;
    GLenum bindingName;
  };

  const std::array<TextureId, 4> TextureIds =
    { {
      { GL_TEXTURE_1D,       GL_TEXTURE_BINDING_1D },
      { GL_TEXTURE_2D,       GL_TEXTURE_BINDING_2D },
      { GL_TEXTURE_3D,       GL_TEXTURE_BINDING_3D },
      { GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BINDING_CUBE_MAP },
    } };

Don’t mind the m_textureUnits variable: it’s simply a vector containing a struct for each texture unit (my GPU has 16, so that’s the size of the vector).
The point is that when this code reaches the glGetBooleanv(GL_TEXTURE_1D) query for GL_TEXTURE8, an invalid-operation error occurs. Since this call really should not fail, I suspect there is a memory error or a threading issue (QtQuick uses a main GUI thread and a rendering thread; I think I’m doing everything as it should be done, but with these threading things you never know :wink: ). Unfortunately I didn’t manage to pinpoint the problem.
When I use Apple’s OpenGL Profiler to break on errors, it indeed breaks on this glGetBooleanv call. Inspecting the texturing state doesn’t show anything unusual. GL_ACTIVE_TEXTURE is GL_TEXTURE8 and GL_TEXTURE_BINDING_1D is 0 (as it should be: I don’t use that many textures, and I guess neither does Qt). I’ve also put some breakpoints on CGL[Create][Update][SetCurrent]Context, but nothing special there.
In an attempt to influence the timing of the calls (remember: QtQuick uses two threads), I’ve also tried writing some output to the console, both at the beginning of the SaveTextureUnits() method and right before the offending call to glGetBooleanv. That didn’t change anything: it’s always this specific glGetBooleanv call that fails. (I also find it suspicious that the first 8 (2^3) texture units don’t produce errors.)
I have no idea how to proceed from here, so if anyone has any bright ideas - or basically any suggestion - that would be awesome! Thanks for reading this far :slight_smile:

You are querying the fixed-function enable state on unit 8, which is beyond GL_MAX_TEXTURE_UNITS (commonly 8). Those enables only exist on units below that limit, so the query is expected to raise GL_INVALID_OPERATION.
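The fix can be sketched as a guard on the enable queries. The helper below is hypothetical (not from the original code), and the GL calls are shown as comments so the sketch stays self-contained without a GL context:

```cpp
#include <cstddef>

// Hypothetical helper: the fixed-function texture enables (GL_TEXTURE_1D,
// GL_TEXTURE_2D, ...) only exist on units below GL_MAX_TEXTURE_UNITS;
// querying them on a higher unit raises GL_INVALID_OPERATION.
bool HasFixedFunctionState(std::size_t unit, int maxTextureUnits)
{
    return maxTextureUnits > 0 && unit < static_cast<std::size_t>(maxTextureUnits);
}

// Inside SaveTextureUnits() the guard would be used roughly like this:
//
//   GLint maxTextureUnits = 0;
//   glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxTextureUnits);
//   ...
//   if (HasFixedFunctionState(unit, maxTextureUnits))
//       glGetBooleanv(TextureIds[i].name, &isEnabled);  // no longer errors
//   glGetIntegerv(TextureIds[i].bindingName, &boundTexture);
```

The binding queries need no guard here, since they did not error on the higher units in the original code.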

Read Nvidia’s FAQ summarizing which limits apply to which state.

This entry in the wiki states that GL_MAX_TEXTURE_UNITS should not be used any more in OpenGL 2.0 and higher. Following your suggestion resolves the issue, though. Thanks a lot!
(I also had a typo in my code: the docs for glActiveTexture state that the number of texture units is the maximum of GL_MAX_TEXTURE_COORDS and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS. Instead of GL_MAX_TEXTURE_COORDS, I was using GL_MAX_TEXTURE_IMAGE_UNITS. That might also add to the explanation :tired:)
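For reference, the unit count from that part of the docs can be computed as below; a minimal sketch, with the two limits passed in as parameters rather than queried from a real context:

```cpp
#include <algorithm>

// Per the glActiveTexture docs: in GL 2.x the number of usable texture
// units is the larger of GL_MAX_TEXTURE_COORDS and
// GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS (both queried via glGetIntegerv).
// The fixed-function enables remain capped by the separate, usually
// smaller, GL_MAX_TEXTURE_UNITS limit.
int UsableTextureUnits(int maxTextureCoords, int maxCombinedImageUnits)
{
    return std::max(maxTextureCoords, maxCombinedImageUnits);
}
```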