Converting DirectX transformations to OpenGL

I’m currently porting an application from DirectX to OpenGL, and the application of course uses the DirectX coordinate system (i.e. left-handed world, camera looking down +z, projection mapping into the 0…1 z range).

As a lot of code already depends on that, I need a “minimally invasive” solution for the OpenGL backend. So far, I can capture all view/world/projection matrix-setting calls.

I’ve changed the projection matrix to a left-handed one which however uses -1…1 as the depth range, and swapped the z-axis of the view matrix. I also set the transpose flag (I don’t quite understand why I need it, as I already transpose all matrices for DX – my runtime storage is transposed w.r.t. DX), which gives me some transformation, but it seems to be way off. I also mirror the world matrix at the YZ plane to get into a right-handed coordinate system.

My understanding is that the view-matrix and projection-matrix changes should actually not make a difference if I keep using a view which looks down +z and a projection matrix which maps to -1…1 (!), as the result should be that the world is mirrored twice – I would only have to change the depth test. For some reason or another, though, I don’t get this working.

What is the recommended way to translate from DX to OpenGL? I’d really like to do this as late as possible in the pipeline (ideally, when setting the shader parameters), but I can’t quite figure out what the minimum set of required changes is. If there’s some DX to OpenGL guide, I’d be really interested in that.


I’ve only ever gone in the other direction, but I found it very painless and actually quite enjoyable to do. What I would recommend is that you convert your matrices to RH first, before doing absolutely anything else, as D3D has a set of RH matrix transform functions that can be used. This would enable focusing on that part of the job without needing to be distracted by anything else.

You can also get OpenGL to use a LH coord system by following the advice on this page:

OpenGL doesn’t have a mode switch to change from right- to left-handed coordinates. However, you can easily obtain a left-handed coordinate system by multiplying a negative Z scale onto the ModelView matrix. For example:

glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
glScalef (1.0f, 1.0f, -1.0f);
/* multiply view transforms as usual... */
/* multiply model transforms as usual... */

(I haven’t tested this myself, mind.)

The glScalef approach above does work. I ported my code over a few months back.
However, there are some subtle differences involving sign changes and [0,1] <=> [-1,1] that you will run into if you try to build your own projection matrix.
I also got into lots of issues dealing with shadow map projections, but, in the end, they all came down to the differences in the view and projection matrices.
So try the glScalef for starters.
Another very useful trick was using OpenGL functions to set up the matrices and then calling glGetFloatv (GL_*MATRIX, …) to look at the differences between OpenGL’s matrices and mine.
I still do all my calculations, physics, etc. in the left-handed system…

that’s a great idea, and a very good way of determining if you’re doing things right.

…and I universally use RH in D3D! :slight_smile:

Another tip - D3D has separate world and view matrices, whereas OpenGL concatenates them both into a single modelview matrix. If you can modify the original D3D code to use an identity transform in the view matrix and include all of its view transforms in the world matrix, it will make things a hell of a lot easier for you.

Hm, I still don’t get it. Let’s assume for the sake of simplicity, I want to stick as much as possible with the LH system (i.e. keep the world and my view in LH.) Do I need to fiddle around with the projection in that case in some special way?

My understanding is that I should be able to keep world and view in LH, and just set a standard projection matrix, as long as things wind up in front of my camera. I.e. the only thing that I might have to do is add a final z-axis flip just before projection? For some reason or another, this does not work … I kept all matrices as before and just used gluPerspective to get a correct projection, but I wind up seeing nothing.

If you are making opengl calls to form the projection matrix (i.e. gluPerspective), then doing the glScalef on the modelview matrix is all that you need to do.

If you start building your own projection matrices, or do view-space operations such as culling or shadow mapping, then you go to the next level and get into some of the [0,1] <=> [-1,1] and perspective-flip business.

…and I universally use RH in D3D! :slight_smile:

Funny! Must be an aerospacer, perhaps :slight_smile:

I’m using my DX-style view and world matrix (in my case, world is identity, and view is a standard LH look-at). I added a flip of the z-axis to the view matrix, which should be equivalent to the glScalef … still I’m getting some very weird results. My objects end up in front of the camera where I would expect them, but it seems like my projection is skewed somehow. Are there some more hidden caveats when porting from DX to GL? Any idea how I could debug this? There’s no difference between using gluPerspective or my custom perspective projection matrix, so I assume this one is correct – my view matrix should also be right (same as in DX).

One mistake I had in my code is that for DX, I concatenated the matrices as world * view * projection, while I need projection * view * world for GL, so when I do the wvp * point multiplication, the point gets multiplied against the world, then the view, and then the projection …

The view matrix is easy. Your DirectX matrix will work if you negate the third column (same as glScalef). The projection matrix is much more insidious. OpenGL clip space is z = [-1,1] whereas DX is [0,1]. This changes the projection. Also, the OpenGL projection flips the z coord (view space is right-handed, but clip space is left-handed).
Definitely try using gluPerspective and glGetFloatv to see how the matrices differ. I’ve been through the derivation of the OpenGL projection matrix, but would need a pencil and paper to do it again. Can put up some code maybe when I get home…

Got it working. The only changes I had to do from DX to GL:

- Change the order of matrix multiplications (obviously), i.e. instead of world * view * projection it’s projection * view * world.
- Change the output range of the projection matrix. This can actually be folded into the computation, so ProjectionMatrixLH (…, 0, 1) gives you the DX matrix which maps things at the near plane to 0 and far to 1, and ProjectionMatrixLH (…, -1, 1) maps near to -1 and far to 1.

That’s all. Ah yes, and glProgramUniformMatrix4fv expects column-major matrices; you need to set the transpose flag to true if your matrix is row-major (in my case, I use row-major on the host, so I have to transpose for GL and DX …)

Ah yes, and glProgramUniformMatrix4fv expects column-major matrices, you need to set the transpose to true if your matrix is row-major

First, there’s no such function, so it can’t expect anything :wink: Second, the function you’re probably talking about, glUniformMatrix4fv has a parameter to tell it if the matrix you’re providing is in row-major or column-major order.

There is this function: glProgramUniformMatrix4fv.

It’s part of DSA, and has been folded into core OpenGL (as of 4.1).