MipMapping and Dynamic Cube Maps

I have a question about dynamic cube maps and mipmapping. I am generating the cube map dynamically with a pbuffer and everything works great. I am using some NVIDIA SDK code to do reflection and refraction of the cube map on a simple sphere, and the SGIS extension to generate mipmaps on the fly. I was wondering what kind of mipmapping people do here.

The problem is that the image in the pbuffer is already filtered and mipmapped. That image is then copied, mipmapped again (SGIS), and filtered once more when it is drawn on the sphere, so the result tends to be quite blurry. This is largely because only a small portion of the cube map is actually drawn on the sphere; the refraction and reflection tend to magnify that region, and it's a shame that such a small percentage of the cube map is ever sampled. At pbuffer/cube map sizes of 256 and 512 both looked blurry. Out of curiosity I raised the size to 1024x1024 and it looks great, but the 14 fps is painful, which I obviously expected. Increasing the resolution of the pbuffer/cube map increases the resolution of the small area of the cube map that actually gets drawn.
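For reference, the per-face update I do looks roughly like this. It is only a minimal sketch, assuming ARB_texture_cube_map and SGIS_generate_mipmap are available; updateCubeFace, cubeTex and CUBE_SIZE are placeholder names, and each face is assumed to have been allocated with glTexImage2D at CUBE_SIZE beforehand.

/* Sketch: copy the current pbuffer contents into one cube-map face and let
 * SGIS_generate_mipmap rebuild the mip chain automatically. */
#include <GL/gl.h>
#include <GL/glext.h>

#define CUBE_SIZE 256   /* placeholder face resolution */

void updateCubeFace(GLuint cubeTex, int face)   /* face in [0,5] */
{
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);

    /* ask the driver to regenerate the mip levels whenever the base level changes */
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);

    /* the pbuffer is assumed to be the current read drawable here; copy its
     * color buffer into the base level of the selected face */
    glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face,
                        0,                      /* base mip level */
                        0, 0,                   /* destination offset */
                        0, 0,                   /* source origin in the pbuffer */
                        CUBE_SIZE, CUBE_SIZE);
}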

I am sure this is all quite confusing and I apologize. I am just curious what size pbuffer/cube maps people use, and what type of filtering and mipmaps they use both when generating the cube map and when rendering it on an object.

Thanks.

There is a texture LOD extension with which you can say how quickly sampling drops down into the smaller mipmap levels. If you set it toward the base level, it will always try to take one of the most high-res images.

That's very interesting; I hadn't even thought of that. It isn't quite the ideal solution though: why generate mipmaps and then force them not to be used? Although I do understand that all mipmap levels must be generated for mipmapped texturing to work.

Thanks.

I said one of the most high-res images. You can control how quickly it drops into the low-res levels, which means it will still use the mipmaps, it just won't fall back to the low-res ones as readily as it does now.
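Something along these lines, for example. This is only a sketch, under the assumption that EXT_texture_lod_bias and SGIS_texture_lod are available; preferSharpMipLevels is just an illustrative name and the bias/clamp values are arbitrary.

/* Sketch: two ways to keep the sampler near the sharp base level.
 * EXT_texture_lod_bias nudges the computed LOD; SGIS_texture_lod clamps it. */
#include <GL/gl.h>
#include <GL/glext.h>

void preferSharpMipLevels(GLuint cubeTex)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);

    /* a negative bias (per texture unit) pushes sampling toward the
     * higher-resolution levels */
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, -1.0f);

    /* or clamp the LOD range on this cube map so the blurriest levels
     * are never selected at all */
    glTexParameterf(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MIN_LOD_SGIS, 0.0f);
    glTexParameterf(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAX_LOD_SGIS, 2.0f);
}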

Hi Devulon,

This is sort of a fundamental limitation of render-to-texture for dynamic textures. I’ve thought about this problem a lot, and it’s pretty hard. If you had some way of rendering only the samples that you needed, and only at the frequencies at which they are needed (a significant complication when you consider general dependent texture reads), then you could efficiently render-to-texture with a very large effective “base level” size (like 1024x1024).

Thanks -
Cass

This is all good information. The only thing I am still curious about is the type of filtering used with cube maps.

What filtering do you guys use when building the cube maps and when drawing them?
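For reference, what I have at the moment looks roughly like this. It is only a sketch of one common setup, not a claim that it is the right one; setCubeMapFiltering and cubeTex are placeholder names.

/* Sketch: trilinear minification so the generated mip chain is used smoothly,
 * plain bilinear magnification, and clamp-to-edge to reduce face seams. */
#include <GL/gl.h>
#include <GL/glext.h>

void setCubeMapFiltering(GLuint cubeTex)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);

    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}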

Thanks Guys.