I was experimenting with different things and testing their behavior in OpenGL and D3D. Normal texture mapping works fine in both without any exceptions. Similarly, I have found no problems with the usage of VBOs and D3D vertex buffers. However, a problem arises in the following scenario.
When I render a particular geometry to a render target in D3D (and likewise to an FBO in OpenGL) and apply it to a geometry, the RT texture maps properly in D3D but comes out mirrored (vertically, i.e. upside down) in OpenGL. Please note that the same texture, when given data from a file, appears properly in both, with the same texture coordinates. Also note that no filtering options other than linear min and mag filters have been specified. I can’t seem to figure out the problem. Because of this I have to invert the projection matrix along y in OpenGL (in the render-target case) so that the rendered image is consistent across both APIs. But inverting the y axis of the projection matrix also inverts my culling winding, i.e. CW becomes CCW and vice versa. In short, it’s a pain and I want to get rid of it.
Also note that texture data, when supplied from a file, is passed in exactly the same format to both APIs (I am using OpenIL for both), so there is no inconsistency from that end either.
I’d be most grateful if someone could explain this behavior or point me to some documentation. I have tried reading the FBO specification but could not find any relevant info, though that might be because the spec is absolutely huge. Thanks in advance.
OpenGL has a bottom-left origin. Texture coordinate (0,0) is the bottom-left corner of a texture. glTexImage2D reads data from bottom left to upper right; that’s, for example, how TGA images are stored.
Sounds like D3D has a top left origin, but I never cared, so somebody else should answer that.
OpenGL uses bottom left for everything (as relic said: window coordinates, screen coordinates, texture coordinates, data loading, etc.)
The reason this does not cause a problem with standard textures (against D3D) is that:
In D3D the texture load calls expect the data to be laid out top->down, and texture coordinates are mapped from the top left.
In OpenGL the texture load calls expect data bottom->up and map from the bottom left.
So depending on your perspective, one API does a double flip when loading and using textures.
So in answer to your original question, the FBO is being generated correctly; you just have the texture coordinates wrong for OpenGL.
So fix them for OpenGL, and in the vertex shader for D3D do a 1-y on the y texture coordinate.
Hmmmm, changing texture coordinates in the vertex shader is too stringent a requirement for any third parties that might get involved down the line. Is there any way to change this behavior in OpenGL? The vertex shader is one option; I think the texture matrix can be made to work as well. Any other solution?
Invert the texture @ load time ?
I don’t think you can use a texture matrix. (doing 1-y in a texture matrix?)
I would just suggest choosing one format for your app. Either:
a) the (0,0) texture coordinate represents the top left of a texture (as in D3D), or
b) the (0,0) texture coordinate represents the bottom left (as in OpenGL).
Then when you load your meshes, flip (1-y) the texture coordinates appropriately for one API. (This will also mean you have to flip the textures on load.)
This will mean that FBOs and render targets should just “work”.
If you have to generate texture coordinates at runtime, this may be a problem if you accept arrays from 3rd-party people (you will have to process and flip them at load time). But this may or may not be necessary depending on your app.
FWIW, I would recommend trying to ‘fix’ this by only flipping textures.
If you transform your UVs at load time with v’ = (1-v), then you’ll have to account for that in every UV transformation you apply at runtime, and that may be cumbersome…