Hello,
I’m trying to implement a simple bilinear filter at an arbitrary mipmap level in a vertex shader.
At the moment I pass in the dimensions of the texture, and of an individual texel, as uniforms. However, the texture and texel sizes differ at each mipmap level, depending on the level I’m sampling.
Is there any (fast) way I can calculate the real dimensions of the texture (and texel) for an arbitrary mipmap level, in a (GLSL) vertex shader? Or will I have to calculate all levels on the CPU and pass them in as a uniform array of sizes?

Here’s my function at the moment…

vec4 texture2DLod_bilinear(in sampler2D tex,
                           in vec2 t,
                           in float lod)
{
    int l = int(lod);                   // ????
    vec2 textureSize = uni_textureSize; // ????
    vec2 texelSize   = uni_texelSize;   // ????
    vec2 f   = fract(t * textureSize);
    vec4 t00 = texture2DLod(tex, t, lod);
    vec4 t10 = texture2DLod(tex, t + vec2(texelSize.x, 0.0), lod);
    vec4 tA  = mix(t00, t10, f.x);
    vec4 t01 = texture2DLod(tex, t + vec2(0.0, texelSize.y), lod);
    vec4 t11 = texture2DLod(tex, t + vec2(texelSize.x, texelSize.y), lod);
    vec4 tB  = mix(t01, t11, f.x);
    return mix(tA, tB, f.y);
}
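For completeness, the uniform-array fallback I mention would look something like this (just a sketch — uni_textureSizes, uni_texelSizes and MAX_LEVELS are made-up names, with the per-level values computed once on the CPU and uploaded):

const int MAX_LEVELS = 16;
uniform vec2 uni_textureSizes[MAX_LEVELS]; // per-level texture size, filled in on the CPU
uniform vec2 uni_texelSizes[MAX_LEVELS];   // per-level texel size (1.0 / texture size)

// ...then the ???? lines above become:
int  l           = int(lod);
vec2 textureSize = uni_textureSizes[l];
vec2 texelSize   = uni_texelSizes[l];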

I really don’t know the answer to your question; I suspect there’s no real solution.

However, I wonder if CPU-side calculations could help you find the real size of your mipmap(s). That will only hold when you explicitly define the image for each LOD of the texture (so using gluBuild2DMipmapLevels). And even then, how can you ensure that one particular image is used rather than another? As far as I know, only GL knows which level it will use, based on the screen area the texture is covering. So even using texture2DLod, this can’t help you.
Can you give me some hints on this point? (I’m still a beginner, you know.)

Yes, I suppose in the general case it’s not safe to assume that a lower mipmap is half the size of the current one, but in my case that assumption is safe, as it’s always true.
I think you’re confusing my question with one about fragment shaders. I’m talking about sampling a texture in a vertex shader: at the vertex level you have to calculate yourself which mipmap (LOD) to sample from, because the hardware has no slope/derivative information there. So I already know the mipmap level I’m sampling from.
I’ve temporarily put this while loop in my vertex shader…I know, I know, horrible.

vec4 texture2DLod_bilinear(in sampler2D tex,
                           in vec2 t,
                           in float lod)
{
    int l = int(lod);
    vec2 textureSize = uni_textureSize; // base-level size
    vec2 texelSize   = uni_texelSize;   // base-level texel size
    while (l > 0)
    {
        textureSize *= 0.5; // each level halves the size...
        texelSize   *= 2.0; // ...and doubles the normalized texel size
        --l;
    }
    vec2 f   = fract(t * textureSize);
    vec4 t00 = texture2DLod(tex, t, lod);
    vec4 t10 = texture2DLod(tex, t + vec2(texelSize.x, 0.0), lod);
    vec4 tA  = mix(t00, t10, f.x);
    vec4 t01 = texture2DLod(tex, t + vec2(0.0, texelSize.y), lod);
    vec4 t11 = texture2DLod(tex, t + vec2(texelSize.x, texelSize.y), lod);
    vec4 tB  = mix(t01, t11, f.x);
    return mix(tA, tB, f.y);
}
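That said, the loop has a closed form, since each level just scales the base size by a power of two. A sketch, assuming exp2 is available in your target GLSL profile:

float scale      = exp2(floor(lod));        // 2^level
vec2 textureSize = uni_textureSize / scale; // level size = base size / 2^level
vec2 texelSize   = uni_texelSize  * scale;  // texel size grows by the same factor

This only works under my power-of-two assumption above, of course.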

I think built-in functions returning the texture/texel size for each mipmap level would be very useful.
Example:
vec2 textureSize2D(sampler2D tex, int level)
vec2 texelSize2D(sampler2D tex, int level)

It’s useful info if you’re doing any kind of filtering etc.
Oh, and while we’re at it, a built-in for the number of mipmap levels would also be useful:
int numMipmapLevels(sampler2D tex)
Granted, all this is information the user can calculate and pass in as uniforms, but (especially with vertex texturing) it’s so useful that I think it belongs in the language itself; otherwise every vertex shader that accesses textures gets bogged down with uniforms holding basic GL state.
Of course, if there’s a quick way I can derive this info from the texture size uniform, that’s not so bad - but as you say, a mipmap doesn’t have to be half the size of its parent.
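With queries like these, the per-level sizes in the filter above would fall out directly. A sketch using the hypothetical textureSize2D proposed above (for what it’s worth, GLSL 1.30 did later add a built-in textureSize(sampler, lod) query along these lines):

int  l           = int(lod);
vec2 textureSize = textureSize2D(tex, l); // hypothetical per-level query
vec2 texelSize   = 1.0 / textureSize;     // texel size is just the reciprocal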