I had a question about dynamic cubemaps and mipmapping and the like. I am using a pbuffer and generating the cube maps dynamically, and everything works great. I am using some NVIDIA SDK code to do reflection and refraction of the cube map onto a simple sphere, and the SGIS extension to generate mipmaps on the fly. I was wondering what type of mipmapping people do.

The problem is that the image in the pbuffer is already filtered and mipmapped. That image is then copied, mipmapped again (SGIS), and filtered once more when it is drawn on the sphere, so it tends to come out quite blurry. This is largely because only a small portion of the cube map is actually drawn on the sphere: the refraction and reflection tend to magnify the image. It's a shame that such a small percentage of the cubemap ends up on the sphere. The pbuffer/cubemaps were at 256 and 512, and both looked blurry. Out of curiosity I raised the size to 1024x1024 and wow, it looks great, but the 14 fps sucks. Obviously I expected that, since increasing the resolution of the pbuffer/cubemap increases the resolution of the small area of the cubemap that actually gets drawn.
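For reference, here is roughly the kind of setup I mean. This is just a sketch, not my actual code: it assumes GL_SGIS_generate_mipmap and GL_ARB_texture_cube_map are available, and that each frame the pbuffer contents get copied into the faces with glCopyTexSubImage2D. The function name and the 512 size are illustrative.

```c
#include <GL/gl.h>
#include <GL/glext.h>

#define CUBE_SIZE 512  /* matches a 512x512 pbuffer; illustrative value */

/* Hypothetical helper: one-time cube map setup with automatic
 * SGIS mipmap regeneration and trilinear filtering. */
void setup_dynamic_cubemap(GLuint tex)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, tex);

    /* Ask the driver to rebuild the mip chain whenever a face changes,
     * e.g. after a glCopyTexSubImage2D from the pbuffer. */
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);

    /* Trilinear filtering when the cube map is sampled on the sphere. */
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Clamp to edge to avoid seams at the face borders. */
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Allocate the six faces once; per-frame updates then go through
     * glCopyTexSubImage2D so the mipmaps are regenerated automatically. */
    for (int i = 0; i < 6; ++i)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + i, 0, GL_RGB8,
                     CUBE_SIZE, CUBE_SIZE, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
}
```

Since this is GL state setup, it only does anything inside a valid rendering context; I'm curious whether people skip the SGIS regeneration entirely given that the magnified region mostly samples the top-level mip anyway.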
I am sure this is all quite confusing, and I apologize. I am just curious what size pbuffer/cubemaps people use, and what type of filtering and mipmaps they use both when generating the cubemap and when rendering it on an object.