I have a multisample OpenGL viewport (when creating the viewport I request a multisample pixel format via wglChoosePixelFormatARB) and my geometry is displayed antialiased.
Now, I read the backbuffer into a texture that I later use for a quick repaint of my scene in some situations.
The texture is created with NEAREST min and mag filters and has the same size as the viewport, so no magnification or minification is involved.
On ATI cards the texture is exactly equal to the scene on screen (so when I use it for the repaint I don’t notice any change on screen), but on NVidia cards they are not the same: the line colors become a bit more intense in the texture.
What could be the problem and how could I fix it?
If I use a pixel format without multisampling there’s no difference from the on-screen image on either ATI or NVidia cards.
I have noticed this. It seems the NVidia driver isn’t using the same downsample filter for the ReadPixels path as the GPU uses for the SwapBuffers path.
You could try doing a manual resolve/downsample via glBlitFramebuffer to a single sample texture (which should ideally happen on the GPU), and read back the texels from that. That might produce the desired results.
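For reference, a manual resolve along those lines might look like this. This is only a sketch: `width`, `height`, `resolveFbo`, and `resolveTex` are placeholder names, and it assumes the multisampled image lives in the system framebuffer (FBO 0):

```c
/* Sketch: resolve the MSAA system framebuffer into a single-sample,
   texture-backed FBO, so the GPU performs the downsample. */
GLuint resolveFbo, resolveTex;

glGenTextures(1, &resolveTex);
glBindTexture(GL_TEXTURE_2D, resolveTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &resolveFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, resolveTex, 0);

/* Source: the multisampled system framebuffer (FBO 0). */
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glReadBuffer(GL_BACK);

/* Same-size blit from MSAA to single-sample triggers the resolve. */
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* resolveTex now holds the GPU-resolved image; use it directly for
   the repaint, or read it back with glReadPixels/glGetTexImage. */
```

Since the resolve happens in the same place (the GPU), the readback should match what ends up on screen.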
Just a thought,
why use RenderbufferStorage. Render to an offscreen texture buffer that you can blit any time you like to the back buffer. Now even if there is a difference between AMD and nVidia versions, at least it is consistent on a card whether it is the first render or the “fake” blit.
glCopyTexSubImage2D reads from the READ_BUFFER of the READ_FRAMEBUFFER. It doesn’t look like you’re reading from the framebuffer you intended to (see glBindFramebuffer() and glReadBuffer()). It looks like you’re still copying from the MSAA framebuffer here.
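To make the copy source explicit, something along these lines (a sketch; `fbo` and `destTex` are placeholders for your own objects) selects the framebuffer you actually mean to read from before the copy:

```c
/* Bind the intended source before glCopyTexSubImage2D.
   Pass 0 for the system framebuffer, or a resolved FBO's id. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(fbo != 0 ? GL_COLOR_ATTACHMENT0 : GL_BACK);

glBindTexture(GL_TEXTURE_2D, destTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```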
And agree with tonyo_au. Just blit to an FBO bound to a texture. Then you don’t need the CopyTexSubImage to get it into another texture.
Also yes, binding FBO 0 reverts to the system framebuffer. I’m presuming that’s the one you’ve got that’s multisampled.
If you still have issues, you could see if the same happens when rendering to an off-screen MSAA FBO. There may be something about your system framebuffer that’s “special”.
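An off-screen MSAA FBO for that test could be set up roughly like this (a sketch; 4 samples and RGBA8 are assumptions to match a typical system framebuffer):

```c
/* Sketch: an off-screen multisampled FBO to render into, for
   comparison against the (possibly "special") system framebuffer. */
GLuint msaaFbo, msaaRb;

glGenRenderbuffers(1, &msaaRb);
glBindRenderbuffer(GL_RENDERBUFFER, msaaRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8,
                                 width, height);

glGenFramebuffers(1, &msaaFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaRb);

/* Render the scene here, then blit/resolve this FBO and compare the
   result against what the system framebuffer produces. */
```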
So is there no way to get a pixel perfect capture of a Multisampled framebuffer on NVidia cards?
If I did the antialiasing myself (through the accumulation buffer), would I still get this problem? (I’m afraid of the penalty of having to draw the scene multiple times, though…).
I could also always draw into the multisampled FBO and copy it to the backbuffer at every frame (instead of drawing directly to the backbuffer for standard frames and using the draw-to-FBO + copy-to-backbuffer path only for the quick repaints at the end of a movement of the scene).
This way I would always pay the cost of the copy from FBO to backbuffer (is it expensive?), but the frames would be consistent because I would always be drawing them the same way.
To your question, obtaining exactly the same output relies on either 1) you replicating exactly the downsample filter that NVidia is using, or 2) just using NVidia’s downsampling. #2 is probably the safest since #1 is potentially vendor-specific magic.
As far as obtaining the actual framebuffer samples, you can read back the actual subsamples from an MSAA FBO (possibly with the help of a tiny frag shader). You can also read back the downsampled pixel values of course. So getting the actual data pre- and post-resolve isn’t a problem. You can also query sample positions IIRC. But you still come back to needing to replicate the downsample filter being used (or just using the one that’s built-in).
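For the record, the "tiny frag shader" approach could look something like this (a sketch, assuming GL 3.2+ with the MSAA color buffer attached as a multisample texture; `uSampleIndex` is a made-up uniform name):

```c
/* GLSL fragment shader (as a C string) that fetches one raw subsample
   from an MSAA color texture; uSampleIndex selects which sample. */
const char *fetchSampleFS =
    "#version 150\n"
    "uniform sampler2DMS uColor;\n"
    "uniform int uSampleIndex;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = texelFetch(uColor, ivec2(gl_FragCoord.xy),\n"
    "                           uSampleIndex);\n"
    "}\n";

/* And the sample positions can be queried per index (GL 3.2+): */
GLfloat pos[2];
glGetMultisamplefv(GL_SAMPLE_POSITION, 0, pos);
/* pos[0], pos[1] are the sample's offset within the pixel, in [0,1). */
```

Render one full-screen pass per sample index into a single-sample FBO and you have every subsample available on the CPU side.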
I could also always draw in the multisampled FBO and copy it to the backbuffer at every frame … This way I would always pay the cost of the copy from FBO to BackBuffer (is it much?), but the frames would be consistent because I would be drawing them always in the same way.
Should be fairly cheap. I mean, the GPU has to downsample the image to render it anyway, and if you do timing in an MSAA video mode you’ll see that this is rolled into your SwapBuffer time. So forcing your own downsample via glBlitFramebuffer and blitting from an MSAA FBO to a single-sample system FB should just be making this downsample explicit in your code.