I’m sort of stuck trying to improve the visual quality I can generate with my engine. Nvidia of course doesn’t do float + multisample framebuffers… ATI does, but doesn’t do floating-point blending, and I have Nvidia hardware anyway… so I thought OK, I’ll just supersample my framebuffer.
I’m using FBOs and have a float variable the user can set that defines how much to supersample by. For speed I thought I’d try supersampling at between 1x and 2x (e.g. 1.5x). This looks alright, but it’s nothing like 2x! I’ve also tried supersampling at 3x, and it actually doesn’t look as good as 2x, presumably because of the way the downscaling works. 3x is very slow anyway, and at 4x I run out of memory (128 MB card).
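For anyone wondering why 4x blows past 128 MB: here’s a back-of-the-envelope sketch. The base resolution (1024×768) and the per-texel sizes (RGBA16F colour = 8 bytes, depth padded to 4 bytes) are assumptions on my part; real drivers may pad further.

```cpp
#include <cstddef>

// Rough memory cost of a supersampled float colour buffer plus a depth
// buffer. Assumed sizes: RGBA16F = 8 bytes/texel, depth = 4 bytes/texel.
std::size_t supersampledBytes(int width, int height, float factor)
{
    const std::size_t w = static_cast<std::size_t>(width  * factor);
    const std::size_t h = static_cast<std::size_t>(height * factor);
    return w * h * (8 + 4);   // colour + depth per supersampled texel
}
```

At 1024×768 and 4x that comes to 4096×3072×12 bytes, which is 144 MB for the FBO alone, before counting any textures or the front/back buffers.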
I was just wondering if anyone out there has tried something similar and maybe came up with a trick or two to force the hardware to read more samples as it downscales. Obviously I’m using GL_LINEAR for the min filter; without it the whole exercise is pointless.
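Incidentally, I think this explains why 3x looks worse than 2x: GL_LINEAR only ever interpolates a 2×2 footprint, and at an exact 3x ratio the sample point for each output pixel lands dead on a texel centre (output pixel i samples at source coordinate 3i + 1.5, which is exactly the centre of texel 3i + 1), so the bilinear weights degenerate and only one of the nine covered texels contributes. At exactly 2x the sample point falls halfway between texel centres in both axes, so all four texels get equal weight, which is the perfect box filter. A CPU sketch of the 3x case on a single 3×3 block:

```cpp
// Why GL_LINEAR breaks down past 2x. When one output pixel covers a 3x3
// block of supersampled texels, the bilinear sample point coincides with
// the centre texel, so GL_LINEAR effectively returns just that one value:
float bilinearAtCentre(const float block[3][3])
{
    return block[1][1];   // degenerate bilinear weights: (0,1,0) each axis
}

// What a correct 3x downscale would return: the average of all nine texels.
float boxAverage(const float block[3][3])
{
    float sum = 0.0f;
    for (int y = 0; y < 3; ++y)
        for (int x = 0; x < 3; ++x)
            sum += block[y][x];
    return sum / 9.0f;
}
```

So at 3x, eight of your nine expensively rendered samples per pixel are simply thrown away at downscale time.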
I tried different anisotropic filtering settings (up to 16x) but it didn’t seem to do anything. I’d assumed it would read more samples, but of course anisotropic filtering depends on the angle you view the surface at in perspective, so it does nothing for a screen-aligned downscale.
Maybe there’s a way of jittering… something…? I’ve thought about blurring the 2x (or say 2.5–3x) supersampled frame first, then scaling THAT down, which would indirectly pull in more samples and might not look blurred at the lower resolution, but I have yet to try this out. Of course, this all hits you hard on speed too.
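To convince myself the blur-then-downscale idea is sound, here’s the 1D version on the CPU (illustrative only, not the shader you’d actually write): a 3-tap box blur followed by taking every third sample is exactly a 3x box downscale, i.e. every supersampled value contributes once instead of eight in nine being dropped.

```cpp
#include <vector>
#include <cstddef>

// Pass 1: 3-tap box blur with clamped edges.
std::vector<float> boxBlur3(const std::vector<float>& src)
{
    std::vector<float> out(src.size());
    for (std::size_t i = 0; i < src.size(); ++i) {
        const float l = src[i == 0 ? 0 : i - 1];
        const float r = src[i + 1 < src.size() ? i + 1 : i];
        out[i] = (l + src[i] + r) / 3.0f;
    }
    return out;
}

// Pass 2: point-sample the centre of every group of three blurred texels.
std::vector<float> downscale3(const std::vector<float>& src)
{
    std::vector<float> out;
    for (std::size_t i = 1; i < src.size(); i += 3)
        out.push_back(src[i]);
    return out;
}
```

Running `downscale3(boxBlur3(src))` gives exactly the per-triple averages, so on the GPU a cheap separable blur pass before the GL_LINEAR downscale should recover the samples that plain bilinear minification misses.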
Any advice would be much appreciated.