I’m just wondering how to do that. I’ve got a first depth map, say 1024x1024 (created with the ARB_depth_texture extension), and I want to “downsample” it, say to a 512x512 destination depth map.
My first idea was simply to switch to ortho mode and display a 512x512 quad using the 1024x1024 depth map as input, but I suspect it isn’t as easy as that.
Any ideas?
Re-rendering the geometry that generated the original depth map is completely out of the question, then?
You could render a quad as you suggest, and enable a fragment program that sets the fragment’s Z to the value read from the depth texture. Unfortunately, GeForce FX cards only return an 8-bit value when reading a depth texture in a shader. Not sure if this is also the case for ATI.
Alternatively, you could use a floating point texture instead of a depth texture, or pack/unpack Z into RGB.
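To illustrate the pack/unpack route: the idea is to split the depth value across the three 8-bit color channels, so you keep 24 bits of precision through an ordinary RGB texture. Here is a minimal sketch of the arithmetic in plain C (the function names are mine; in practice this math would live in the fragment program):

```c
#include <assert.h>
#include <math.h>

/* Pack a depth value in [0,1) into three 8-bit channels (R,G,B),
 * giving 24 bits of precision instead of the 8 you would get from
 * a single channel. Plain C here, just to show the idea. */
static void pack_depth(float z, unsigned char rgb[3])
{
    unsigned int i = (unsigned int)(z * 16777215.0f); /* scale to 24-bit range */
    rgb[0] = (unsigned char)((i >> 16) & 0xFF);       /* high byte   */
    rgb[1] = (unsigned char)((i >> 8)  & 0xFF);       /* middle byte */
    rgb[2] = (unsigned char)( i        & 0xFF);       /* low byte    */
}

/* Reassemble the 24-bit integer and rescale back to [0,1). */
static float unpack_depth(const unsigned char rgb[3])
{
    unsigned int i = ((unsigned int)rgb[0] << 16)
                   | ((unsigned int)rgb[1] << 8)
                   |  (unsigned int)rgb[2];
    return (float)i / 16777215.0f;
}
```

The round trip loses at most one part in 2^24, which is the same precision as a 24-bit depth buffer.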
Hmm… Actually, isn’t there some way you can exploit automatic mipmap generation?
I can regenerate the depth map from the original geometry, yes… but this is for an adaptive shadow maps algorithm, so the goal is to avoid re-rendering the SM if possible. When the SM resolution is reduced, I thought one optimization would be to just downsample the original SM.
I don’t think automatic mipmap generation would work, as the goal is really to generate a smaller SM to free the video memory taken by the bigger one. Shadow maps in my scene can take up to a hundred MB.
I’m not very keen on using pixel shaders. I could always do that, but I’m looking for a simpler solution first (if there is one)…
You can specify base and max mip levels (GL_TEXTURE_BASE_LEVEL / GL_TEXTURE_MAX_LEVEL) when using SGIS_generate_mipmap. That’s supposed to be the fastest way to downsample a texture. Actually, you’d be better off rendering your map in ortho at 512*512 and grabbing it again (and you can enable shaders during this rendering to soften it a bit, or to antialias).
Note that nVidia drivers have an issue with this extension, and that they are fixing it (“some day”).
Originally posted by SeskaPeel:
actually you’d better render your map in ortho at 512*512 and grab it again
Well that’s the thing, you see – he’s got a depth map, so you can’t just render that to the color buffer and grab it. You have to render it to the Z buffer, which AFAIK you can only do by using a shader.
But then, reading a depth map in a shader will only return an 8-bit value on NVidia cards (or at least, that’s what Simon Green says in one of his articles in “GPU Gems”). I wonder why this is the case, though.
Ah OK, I get the point…
With automatic mipmap generation (which, I insist, is poorly implemented in current drivers), you can ask to generate only the second mip level (mip level 0 = 1024, mip level 1 = 512), so you won’t generate the whole mip hierarchy, and then CopyTex it into another 512*512 texture. This could be accelerated with PBO.
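For reference, the state setup being described might look roughly like this. It is an untested sketch: `depthTex` is a placeholder name, and whether automatic generation plays nicely with depth textures at all is exactly what is in doubt in this thread.

```c
/* Sketch: restrict the texture to mip level 1 (512x512) of a 1024x1024
 * depth texture, with automatic mipmap generation enabled. Assumes a
 * current GL context and an existing texture object `depthTex`. */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1);
/* ...then copy level 1 out into the new 512x512 texture... */
```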
As I understand it, glCopyTexXXX with a destination depth texture copies the frame buffer’s depth into the texture data. So the problem is to render an input depth map at a smaller resolution into the depth buffer.
But how do you do that? If you just draw a quad with the input texture, it will render the depth data into the color buffer… it will NOT render the depth data into the depth buffer…
I couldn’t find anything in the specs about how automatic mipmap generation and depth textures interact with each other… I can already smell the bugs…
You need to use PBO to do the transfer.
I haven’t played with it enough to be sure, so I might be talking nonsense.
The problem will be to set the read pointer to your 1024x1024 texture, and request a transfer from the second mip level of this texture to the first one of your 512x512 texture.
To some extent, you can try TexImage2D or CopyTexSubImage2D.
Again, I’m no expert with PBO, and I may be leading you down the wrong track.
The method has to work on both ATI and NVidia cards… PBO is out of the question, then.
Automatic mipmap generation won’t get you anywhere, actually. The mipmaps will only be recalculated when you update the base image, which you want to avoid.
The only remaining solutions I can think of are (a) re-rendering the geometry, (b) using a depth-replace pixel shader, or (c) reading the texture back with glGetTexImage() to system memory and downsampling on the CPU.
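If the CPU route in (c) turns out to be acceptable, the downsample itself is trivial. A sketch in C, assuming the depth data has already been read back with glGetTexImage() into a float array (the function name and min-filter choice are mine; taking the min of each 2x2 block is one conservative option for a shadow map, whereas an average would blend occluder depths):

```c
#include <assert.h>

/* Downsample a square float depth map by 2x, keeping the minimum of
 * each 2x2 block (nearest-to-light depth). `src` holds size*size
 * floats; `dst` must hold (size/2)*(size/2) floats. */
static void downsample_depth(const float *src, float *dst, int size)
{
    int half = size / 2;
    for (int y = 0; y < half; ++y) {
        for (int x = 0; x < half; ++x) {
            const float *p = src + (2 * y) * size + 2 * x;
            float m = p[0];                       /* top-left     */
            if (p[1] < m)        m = p[1];        /* top-right    */
            if (p[size] < m)     m = p[size];     /* bottom-left  */
            if (p[size + 1] < m) m = p[size + 1]; /* bottom-right */
            dst[y * half + x] = m;
        }
    }
}
```

The result can then be uploaded into a fresh 512x512 depth texture with glTexImage2D, after which the 1024x1024 one can be deleted.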