framebuffer object - height reversed

Hi, I am building a game with an abstracted rendering API, i.e. I can use both Direct3D and OpenGL. So far everything works fine - except render-to-texture. Some shots follow:
http://dv.dword.org/stuff/rtt_d3d9.png
http://dv.dword.org/stuff/rtt_opengl.png

Note that the triangle is normally oriented exactly as in the D3D version (i.e. the blue corner touches the upper frame border), so if I render that triangle directly to the screen, it looks like the D3D shot. I can rule out both the projection matrix and the UV coords, because normal textures work fine. This only happens with FBOs. Does anyone have an idea?

no.

I mean, can you give more details?

Well, it seems that 0,0 is the top left corner of an FBO, even though it is the bottom left corner when rendering the triangle directly to the screen; rendering the triangle directly to the screen works correctly in both OpenGL and D3D (i.e. the blue corner points upwards). I use the same projection matrix I use for rendering the quad. The viewport is set to 0,0,texwidth,texheight, then the triangle gets rendered, then it is reset to 0,0,displaywidth,displayheight. I make sure that no texture is bound before rendering the triangle, and I unbind both the framebuffer and the renderbuffer after rendering it. The world/view matrix is identity when rendering the triangle, and a translation when rendering the quad. There are zero changes between the D3D and the OpenGL version; all I do is load a different render system (ogldrv instead of d3d9drv).

It’s probably your texture coordinates. 0,0 is the bottom left of the texture in OpenGL, whereas in D3D it’s the top left. That’s my guess.

  • Kevin B

Then why can I render entire Quake3 maps with textures rendered correctly? I cannot rule out the texcoord issue, but it’s very strange that it only appears with FBOs.

This is normal behavior. OpenGL addresses textures bottom-up, while D3D addresses them top-down. This doesn’t matter for glTexImage2D() though, because glTexImage2D() also assumes you’re passing the bottom line first. So essentially it becomes the same thing (double negation, if you will): the texel line addressed with t=0 is the first line in both OpenGL and D3D. But for render targets that you fill with rendering commands you’re not as fortunate, and the difference has to be taken care of by the application. A good way to handle that is to flip it in the projection matrix. In OpenGL you also have the glFrontFace() command, which lets you set which winding is front-facing; that makes it easier to adjust things on the GL side if you want to support both APIs.
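To illustrate the idea, here is a minimal sketch of such a render-to-texture pass, assuming fixed-function matrices and the EXT framebuffer entry points; fbo, texWidth, texHeight, displayWidth, displayHeight and drawScene() are placeholder names, not anything from the original code:

    #include <GL/glew.h>                        // or however the EXT framebuffer functions are loaded

    extern void drawScene();                    // placeholder: whatever normally draws the triangle
    extern int displayWidth, displayHeight;     // placeholder: the window size

    void renderToTexture(GLuint fbo, int texWidth, int texHeight)
    {
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glViewport(0, 0, texWidth, texHeight);

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glScalef(1.0f, -1.0f, 1.0f);            // flip the Y axis of the current projection
        glFrontFace(GL_CW);                     // the flip mirrors the triangle winding

        drawScene();

        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glFrontFace(GL_CCW);                    // restore the defaults for on-screen rendering
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        glViewport(0, 0, displayWidth, displayHeight);
    }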

Ah, I see. I guess simply multiplying the 2,2 value of the projection matrix by -1 should do, right? (Counting the matrix’s top-left member as 1,1.)
With this, things now work with 2D framebuffers, but I still can’t get the cube shadow maps to work - I get weird artifacts. (Cube shadow mapping is the real application I’m dealing with; the shots above are from a test app.) Still looking into it, but is there something more that’s necessary when dealing with 3D texcoords?
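For reference, with a column-major float[16] laid out the way glLoadMatrixf expects, that row-2/column-2 entry is index 5, so the flip amounts to something like the sketch below; buildProjection() is a made-up helper, not part of the original code:

    float proj[16];
    buildProjection(proj);     // made-up helper that fills a column-major projection matrix
    proj[5] = -proj[5];        // negate the Y-scale term: the rendered image is flipped vertically
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(proj);
    glFrontFace(GL_CW);        // the flip also reverses the winding, as noted above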

EDIT: I’m using the ARGB distance packing you used in ShadowsThatRocks (i.e. decomposing the distance into four color components, which makes it possible to use the widely supported 32-bit ARGB pixel format). The artifacts look like precision errors - they only appear when flipping the projection matrix.

EDIT2: Using a slightly greater zbias eliminates most artifacts, but it’s not 100% perfect. Humus, do you have any suggestions regarding the ARGB packing?
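For reference, a common way to spread a [0,1) distance over four 8-bit channels looks roughly like the host-side sketch below. It is not necessarily the exact scheme from the demo mentioned above, and in a real shader the same math runs per fragment in single precision, which is part of why a small bias can still be needed:

    #include <cmath>
    #include <cstdio>

    static float frac(float x) { return x - std::floor(x); }

    // Pack a distance d in [0,1) into four channels, each holding a finer 8-bit slice.
    static void packDistance(float d, float out[4])
    {
        float e0 = frac(d);
        float e1 = frac(d * 255.0f);
        float e2 = frac(d * 65025.0f);     // 255^2
        float e3 = frac(d * 16581375.0f);  // 255^3
        out[0] = e0 - e1 / 255.0f;         // subtract what the next finer channel stores
        out[1] = e1 - e2 / 255.0f;
        out[2] = e2 - e3 / 255.0f;
        out[3] = e3;
    }

    // Reconstruct the distance from the four channels.
    static float unpackDistance(const float in[4])
    {
        return in[0] + in[1] / 255.0f + in[2] / 65025.0f + in[3] / 16581375.0f;
    }

    int main()
    {
        float rgba[4];
        packDistance(0.4321f, rgba);
        for (int i = 0; i < 4; ++i)                                  // simulate the 8-bit quantisation
            rgba[i] = std::floor(rgba[i] * 255.0f + 0.5f) / 255.0f;  // of an ARGB8 render target
        std::printf("reconstructed: %.7f\n", unpackDistance(rgba));
        return 0;
    }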

Instead of manual matrix modification, could glScale be of help? I’ve used it myself to help port stuff from D3D to OpenGL with satisfactory results.
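In fixed-function terms that could look roughly like this; the gluPerspective parameters and the aspect variable are placeholders:

    // Fold the Y flip into the projection setup for the render-to-texture pass.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glScalef(1.0f, -1.0f, 1.0f);               // flip Y, as suggested above
    gluPerspective(60.0, aspect, 0.1, 100.0);  // placeholder projection parameters
    glFrontFace(GL_CW);                        // account for the mirrored winding
    glMatrixMode(GL_MODELVIEW);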