[solved] textureSize with GL_LINEAR_MIPMAP_LINEAR

This is just for information. I created a non-mipmap texture with

glBindTexture(GL_TEXTURE_2D, c_Textures[RESOLVE]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, GLsizei(m_Width), GLsizei(m_Height), 0, GL_RGBA, GL_UNSIGNED_INT, 0);

when I tried to get the size in the fragment shader

  ivec2 sz = textureSize(u_ResolvedColourTexture, 0);

the returned size was (0, 0). Replacing GL_LINEAR_MIPMAP_LINEAR with GL_LINEAR as the GL_TEXTURE_MIN_FILTER returned the correct size.
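For reference, the usual ways to resolve this are to either stop requiring mipmaps or actually supply them. A minimal sketch, assuming the texture from the snippet above is still bound to GL_TEXTURE_2D in a current GL 3.0+ context (this fragment is not standalone-runnable):

```
/* Option 1: keep the texture single-level - use a minification filter that
   does not require mipmaps, and tell GL the chain stops at level 0. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);

/* Option 2: keep GL_LINEAR_MIPMAP_LINEAR, but make the texture
   mipmap-complete by generating the full chain after uploading level 0. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
```

Either option makes the texture complete, at which point textureSize behaves as expected.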

That is probably an unfortunate combination of an application error (providing an incomplete texture: only level 0 was uploaded, but a mipmapping minification filter was set, so the texture is mipmap-incomplete) and a driver bug (not correctly returning the size of an incomplete texture).

My general feeling with GL would be that this is actually intended behavior, but as you mention, I can't find any text that would allow an implementation to return garbage (or zero) from textureSize when the texture is incomplete. This seems counter-intuitive to me, as it essentially requires the driver to pass an array of uniforms to the shader (i.e. this information probably can't be taken from the surface setup that the hardware needs).

If this is a bug, then I'd be willing to bet it's a spec bug instead.

If a texture is incomplete, you cannot get any information out of it. This applies to both texels, size and other stuff.

Well, that’s just your assumption. The hardware is free to get the size information from anywhere. Yes, it might be an internal array of uniforms, but in practice, it’s very likely that the hardware actually has this information from the internal data structure that corresponds to a sampler uniform.

The main reason I posted this was that the texture creation logic did not generate any error from either glCheckFramebufferStatus or glGetError.
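That is expected, unfortunately: an incomplete texture is not a GL error (sampling it simply yields (0, 0, 0, 1)), and glCheckFramebufferStatus only validates the framebuffer's attachments, not the sampler state of textures you read from. If you want a CPU-side sanity check, a hypothetical helper could compare the minification filter against the allocated levels. A sketch, assuming the texture is bound to GL_TEXTURE_2D in a current context (not standalone-runnable):

```
/* Warn when a mipmapping min filter is set on a texture that only has
   level 0 allocated - the classic mipmap-incompleteness case. */
GLint minFilter = 0, level1Width = 0;
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, &minFilter);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 1, GL_TEXTURE_WIDTH, &level1Width);
if (minFilter != GL_NEAREST && minFilter != GL_LINEAR && level1Width == 0)
    fprintf(stderr, "warning: mipmapping filter set, but level 1 is empty "
                    "(texture is mipmap-incomplete)\n");
```

This only catches the common case; a full completeness check would also walk every level and compare formats and dimensions.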

If a texture is incomplete, you cannot get any information out of it. This applies to both texels, size and other stuff.

Can you cite the corresponding paragraph from the specs?
There is language talking about sampling from incomplete textures, but querying the size of such a texture has not been deemed undefined.

Yes, this is all based on the assumption that textureSize has to work with incomplete textures, and frankly I would be very surprised if anyone implemented it that way.

It's highly improbable that hardware would support specifying misbehaved textures (as allowed by GL). Say, set up a texture whose first level is 1x1 RGBA8, whose second level is empty, and whose third level is 100x1 DEPTH_COMPONENT. Hardware could be made to 'support' textureSize on such an abomination, but it would probably not be a very good idea - I assume that serious hardware vendors don't waste silicon supporting idiotic cases permitted by the GL spec. Given this assumption, pretty much the only way to do it would be with uniform arrays, but that is wasteful and again not very bright.

By what wording? I see only this in core spec:

Incomplete textures (see section 3.9.14) are considered to return a texture source color of (0, 0, 0, 1) for all four source texels.

Which covers all the sampling functions. There are two other functions, though, that take samplers: textureSize and textureQueryLod. textureQueryLod is explicitly allowed to return garbage on incomplete textures (per the GLSL spec); textureSize is not.
There is nothing more regarding incomplete textures in either spec, AFAIK. Do you see anything else relevant to this case?

Then I think it’s most probably a specification bug. Actually, this is not the only specification bug in the “Texture Size Query” section of chapter 2.11.7. If you look closely, it says in the last sentence that the “layer index” is returned for array textures, but it should say “number of layers” instead.

I’m just saying, I wouldn’t expect any proper behavior from the shader’s point of view if you try to do anything with a sampler uniform that references an incomplete texture. It may be a specification bug, but I don’t think it’s a driver bug.