6800, float FBO and supersampling!

Hi All,

I’m somewhat stuck trying to improve the visual quality my engine can generate. Nvidia of course doesn’t do float + multisample framebuffers… ATI does, but doesn’t do floating-point blending, and I have Nvidia hardware anyway… so I thought, OK, I’ll just supersample my framebuffer.

I’m using FBOs and have a float variable the user can set that defines how much to supersample by. For speed I thought I’d try supersampling at between 1x and 2x (e.g. 1.5x). Now, this looks alright, but it’s nothing like 2x! I’ve tried supersampling at 3x and it actually doesn’t look as good as 2x, I’m guessing because of the way the downscaling works. 3x is very slow anyway, and at 4x I run out of memory (128 MB card).
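For anyone wondering why 4x blows past 128 MB, here’s a rough back-of-envelope budget. The window size (1024×768), RGBA8 colour and a 24/8 depth-stencil buffer are all my assumptions, since the post doesn’t say what the engine actually uses:

```python
# Rough memory budget for a supersampled FBO. Resolution and formats are
# assumed (1024x768, 4 bytes colour + 4 bytes depth-stencil per pixel).
def fbo_bytes(w, h, scale, bytes_per_pixel=4 + 4):
    return int(w * scale) * int(h * scale) * bytes_per_pixel

for s in (1.0, 1.5, 2.0, 3.0, 4.0):
    print(f"{s}x: {fbo_bytes(1024, 768, s) / 2**20:.0f} MB")
```

At 4x per axis that works out to 96 MB for the FBO alone, which on a 128 MB card leaves very little for textures and the window’s own buffers.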

I was just wondering if anyone out there has tried something similar and maybe came up with a trick or two to force the hardware to read more samples as it downscales. Obviously I’m using GL_LINEAR for min filtering; without it the whole exercise is pointless :)

I tried different anisotropic filtering settings (up to 16x) but it didn’t seem to do anything. I would’ve thought it’d read more samples, but then remembered that anisotropic filtering depends on the angle you view the surface at in perspective, so it wouldn’t do anything here.

Maybe there’s a way of jittering… something…? :) I thought of possibly blurring the 2x (or say 2.5–3x) supersampled frame, then scaling THAT down, which would indirectly introduce more samples, and might not look blurred at the lower resolution, but I have yet to try this out. Of course, this all hits you hard on speed too.
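For what it’s worth, the blur-then-downsample idea can be prototyped offline in numpy before touching the shader. This is purely my own sketch; the 3×3 box blur is just a stand-in for whatever blur kernel you’d actually use:

```python
import numpy as np

def box_blur3(img):
    # 3x3 box blur with edge clamping (like CLAMP_TO_EDGE).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def down2(img):
    # 2:1 box downsample -- what one centred GL_LINEAR tap gives you.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

hi = np.random.default_rng(0).random((8, 8))
plain = down2(hi)             # each output pixel averages a 2x2 footprint
wide  = down2(box_blur3(hi))  # blur first: roughly a 4x4 footprint per pixel
```

The blur pass widens the footprint each low-res pixel effectively averages over, which is exactly the “indirectly introduce more samples” effect, at the cost of an extra full-resolution pass.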

Any advice would be much appreciated :D

And just by the way… I tried subtracting a 1x sampled image from a 2x sampled image. It’s funny how much bloody effort we go to for about 100 pixels’ difference :) (OK, it depends on the image you diff with, but still! Drawing 4x as many pixels for this? And it’s visibly SO much better!)

OK, I tried changing the shader that reads from the supersampled framebuffer, so that I average neighbouring pixels into each downsampled pixel myself, since pure texture filtering from one sample wasn’t doing much… it seems to have improved a lot! What I was wondering is: does anyone know EXACTLY what GL_LINEAR does on min filtering when you’re halving the size? Does it just sample in a + pattern?

It’s not a “+” pattern, it’s a 2×2 box filter (bilinear).
The OpenGL spec explains it in Figure 3.10, and the math is in equation 3.26 on page 174.
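To make that concrete, here’s a small numpy model of the bilinear fetch (my own sketch, not driver or spec code). For an exact 2:1 downsample, one GL_LINEAR tap at the centre of each 2×2 source block reproduces exactly the 2×2 box average; at non-integer ratios like 1.5x or 3x the four weights stop being equal, which fits the observation that those look worse:

```python
import numpy as np

def bilinear(img, u, v):
    # GL_LINEAR-style fetch at unnormalized coords; texel centres at i + 0.5.
    x, y = u - 0.5, v - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    xs = np.clip([x0, x0 + 1], 0, img.shape[1] - 1)  # CLAMP_TO_EDGE
    ys = np.clip([y0, y0 + 1], 0, img.shape[0] - 1)
    return ((1 - fx) * (1 - fy) * img[ys[0], xs[0]] +
            fx       * (1 - fy) * img[ys[0], xs[1]] +
            (1 - fx) * fy       * img[ys[1], xs[0]] +
            fx       * fy       * img[ys[1], xs[1]])

img = np.arange(16.0).reshape(4, 4)
# One tap at the centre of each 2x2 block == a 2x2 box average:
taps = np.array([[bilinear(img, 2 * j + 1.0, 2 * i + 1.0) for j in range(2)]
                 for i in range(2)])
box = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))
assert np.allclose(taps, box)
```

Note this only holds when the sample point lands exactly on the corner shared by the four source texels; GL_LINEAR never reads more than 2×2 texels, so beyond 2x downsampling it simply skips texels.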


I’m currently looking into a variant of the same problem. I use an FBO with a regular RGBA8 texture rectangle attached, and specify MIN_FILTER as GL_LINEAR (and MAG_FILTER as GL_NEAREST, for testing). When downsampling to exactly half the size four times, the result is quite bad. If I do the filtering manually (taking four samples for each output pixel, corresponding to the four source pixels in the input texture), the result is what I would expect.

[Screenshot: ugly downsampled image]


struct v2f {
  float4 position  : POSITION;
  float2 texcoords : TEXCOORD0;
};

void main( in v2f IN, in uniform samplerRECT difftex : TEXUNIT0, out float4 colOut : COLOR0 ) {
  // Manual 2x2 box filter: four equally weighted taps around the pixel centre.
  float4 sum = 0;
  sum += 0.25 * texRECT( difftex, IN.texcoords + float2(  0.25,  0.25 ) );
  sum += 0.25 * texRECT( difftex, IN.texcoords + float2( -0.25,  0.25 ) );
  sum += 0.25 * texRECT( difftex, IN.texcoords + float2( -0.25, -0.25 ) );
  sum += 0.25 * texRECT( difftex, IN.texcoords + float2(  0.25, -0.25 ) );
  colOut = sum;
}


glTexParameteri( m_textarget, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( m_textarget, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( m_textarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( m_textarget, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage2D( m_textarget, 0, GL_RGB8, width, height, 0, GL_BGRA, GL_FLOAT, 0 );

glGenFramebuffersEXT(1, &m_color_rb);
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, m_color_rb );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, m_textarget, m_texno, 0 );

Does anyone have any idea about this? The extension specs didn’t really help me here, and it doesn’t seem to be a driver issue (tested on an NV 7800 GTX and an ATI X1900).

Kind regards,

Looking at the texture-rectangle spec, it would seem that GL_LINEAR min filtering for texture rectangles really means floor. I haven’t gone into this thoroughly yet (still reading the referenced GL2 spec section), but can anyone confirm this is so?

(Section 3.8.8, “Texture Minification”, p. 171):
“For non-rectangular textures, let u(x,y) = wt * s(x,y), v(x,y) = ht * t(x,y), and w(x,y) = dt * r(x,y), where wt, ht, and dt are as defined by equations 3.15. However, for rectangular textures let u(x,y) = s(x,y), v(x,y) = t(x,y) and w(x,y) = r(x,y).”
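Since rectangle-texture coordinates are unnormalized per that quote, whether LINEAR ends up looking like floor/NEAREST depends on where your fragment’s texcoords land relative to texel centres. A quick 1D numpy sketch of the filtering model (my own illustration, not spec code):

```python
import numpy as np

def linear_1d(tex, u):
    # GL_LINEAR along one axis with unnormalized (rect-style) coords;
    # texel centres sit at i + 0.5.
    x = u - 0.5
    i0 = int(np.floor(x))
    f = x - i0
    a = tex[min(max(i0, 0), len(tex) - 1)]       # CLAMP_TO_EDGE
    b = tex[min(max(i0 + 1, 0), len(tex) - 1)]
    return (1 - f) * a + f * b

tex = np.array([10.0, 20.0, 30.0, 40.0])
print(linear_1d(tex, 1.5))  # texel centre: returns tex[1] exactly (20.0)
print(linear_1d(tex, 2.0))  # texel boundary: 50/50 blend of tex[1] and tex[2] (25.0)
```

So if the downsampling pass emits integer texcoords, an implementation that snaps there might well behave like floor; passing coords through the centres of the source 2×2 blocks avoids the ambiguity.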
Kind regards,