glGetTextureImage causing an invalid operation error?

Why would this code cause an INVALID_OPERATION error after the call to glGetTextureImage? GPU is an AMD 6600 on Windows 10.
https://registry.khronos.org/OpenGL-Refpages/gl4/html/glGetTexImage.xhtml

shared_ptr<Pixmap> RenderBuffer::Capture(const int index)
{
	if (index < 0 or index >= CountColorTextures()) return nullptr;
	auto tex = colorattachments[index];
	auto pixmap = CreatePixmap(tex->size.x, tex->size.y, tex->format);
	glCheckError();
	if (not glIsTexture(tex->id))
	{
		GraphicsEngine::instance->Error("Error: Texture " + String(tex->id) + " is invalid");
		return nullptr;
	}
	glPixelStorei(GL_PACK_ROW_LENGTH, tex->size.x);
	glPixelStorei(GL_UNPACK_ROW_LENGTH, tex->size.x);
	glPixelStorei(GL_PACK_ALIGNMENT, 1);
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	glCheckError();
	
	Assert(tex->glformat == GL_RGBA);
	Assert(tex->gltype == GL_HALF_FLOAT);
	Assert(pixmap->pixels->GetSize() == tex->size.x * tex->size.y * 8);
	Assert(glGetInteger(GL_PIXEL_PACK_BUFFER_BINDING) == 0);

	glGetTextureImage(tex->id, 0, tex->glformat, tex->gltype, pixmap->pixels->GetSize(), pixmap->pixels->Data());
	glCheckError();
	return pixmap;
}

There are many potential causes for this listed in the glGetTextureImage() reference page you linked.

You’re just going to have to go down them and refute each one until you find the cause.
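One thing that can speed that up (just a suggestion, not something from your code) is to register a GL debug-output callback; on most drivers the message names the exact condition behind an INVALID_OPERATION, which is more informative than glGetError() alone. A minimal sketch, assuming a GL 4.3+ context (the helper name is mine, not part of your engine):

    #include <cstdio>

    // Hypothetical helper: prints every message the driver emits, including the
    // reason for generated errors.
    static void APIENTRY DebugCallback(GLenum source, GLenum type, GLuint id,
                                       GLenum severity, GLsizei length,
                                       const GLchar* message, const void* userParam)
    {
        fprintf(stderr, "GL debug: %s\n", message);
    }

    void EnableGLDebugOutput() // call once after context creation
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report at the offending call
        glDebugMessageCallback(DebugCallback, nullptr);
    }

Note that the context generally needs to be created with the debug flag for this to be guaranteed to produce output.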

Also, I’m not familiar with AMD OpenGL driver behavior. But just to make sure that your GL error checks reflect not only errors found while queuing the previous commands but also errors from executing those commands, you might (for testing only!) insert a glFinish() and an extra error check right before the glGetTextureImage() call. Something like this:

    Assert(glGetInteger(GL_PIXEL_PACK_BUFFER_BINDING) == 0);

    glFinish();       // <--- Added!
    glCheckError();   // <--- Added!

    glGetTextureImage(tex->id, 0, tex->glformat, tex->gltype, pixmap->pixels->GetSize(), pixmap->pixels->Data());
    glCheckError();

What made me think of this is that glGetTextureImage() is potentially going to trigger a full pipeline flush to perform the readback (unless it reads into a PBO), so it may effectively do a glFinish() internally. That may have something to do with why it appears to be the call discovering the errors: they could be errors instigated by commands queued before the glGetTextureImage() call.

From the OpenGL 4.6 (Compatibility Profile) specification, section 8.11 (Texture Queries):

An INVALID_OPERATION error is generated by GetTextureImage if the effective target is not one of TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, TEXTURE_1D_ARRAY, TEXTURE_2D_ARRAY, TEXTURE_CUBE_MAP_ARRAY, TEXTURE_RECTANGLE, or TEXTURE_CUBE_MAP (for GetTextureImage only).

So it looks like you cannot retrieve the contents of a 2D multisample texture?
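One quick way to confirm that (a sketch only, assuming a GL 4.5+ context and using the tex->id from the code above) is to ask the driver for the texture’s effective target:

    // Suggested diagnostic, not part of the original code.
    GLint target = 0;
    glGetTextureParameteriv(tex->id, GL_TEXTURE_TARGET, &target);
    if (target == GL_TEXTURE_2D_MULTISAMPLE || target == GL_TEXTURE_2D_MULTISAMPLE_ARRAY)
    {
        // glGetTextureImage() is not allowed on multisample textures,
        // which would explain the INVALID_OPERATION.
    }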

There’s no mechanism in OpenGL for the CPU to read the contents of a multisample texture directly. The best you can do is copy it (with shader reads) into a non-multisampled texture whose resolution is the original’s multiplied by the sample count, writing each sample to a different texel in the output.
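For illustration, a rough sketch of what such a per-sample copy shader might look like (the uniform names and the side-by-side sample layout are assumptions, not anything prescribed by OpenGL):

    // Fragment shader held in a C++ string. Rendered as a full-screen pass into a
    // single-sample target that is sampleCount times wider than the source.
    const char* perSampleCopyFrag = R"GLSL(
        #version 450 core
        uniform sampler2DMS msaaColor;   // multisample source texture
        uniform int sampleCount;         // samples per pixel, e.g. 4
        out vec4 fragColor;

        void main()
        {
            // Each horizontal group of sampleCount output texels holds the
            // individual samples of one source pixel.
            ivec2 dst = ivec2(gl_FragCoord.xy);
            ivec2 src = ivec2(dst.x / sampleCount, dst.y);
            int sampleIndex = dst.x % sampleCount;
            fragColor = texelFetch(msaaColor, src, sampleIndex);
        }
    )GLSL";

The result of that pass is an ordinary 2D texture, so it can be read back with glGetTextureImage() as usual.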


Alternatively, do a glBlitFramebuffer() to downsample the texture from MSAA to 1x, and then glGetTexImage() on the result.
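If a single resolved value per pixel is enough, a minimal sketch of that path (msaaFbo, width and height are assumed to exist and are not from the code above; error checking omitted) could look like:

    // Needs <vector> and <cstdint>.
    GLuint resolveTex = 0, resolveFbo = 0;
    glCreateTextures(GL_TEXTURE_2D, 1, &resolveTex);
    glTextureStorage2D(resolveTex, 1, GL_RGBA16F, width, height);
    glCreateFramebuffers(1, &resolveFbo);
    glNamedFramebufferTexture(resolveFbo, GL_COLOR_ATTACHMENT0, resolveTex, 0);

    // Resolve the multisample color attachment into the single-sample texture.
    glBlitNamedFramebuffer(msaaFbo, resolveFbo,
                           0, 0, width, height,
                           0, 0, width, height,
                           GL_COLOR_BUFFER_BIT, GL_NEAREST);

    // The single-sample copy can now be read back like any other 2D texture.
    std::vector<uint16_t> pixels(size_t(width) * height * 4);   // RGBA16F: 2 bytes/channel
    glGetTextureImage(resolveTex, 0, GL_RGBA, GL_HALF_FLOAT,
                      GLsizei(pixels.size() * sizeof(uint16_t)), pixels.data());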

But if you really do need the values of the individual subsamples… what Alfonse said: read them with texelFetch() in a shader.
