# Manual OpenGL projection

Hi. As a part of my project, I have to do the following:

I want to find the window coordinates of a 3D point using the currently selected OpenGL projection/modelview matrices.
I thought about doing the transformation mathematically, but that seems fairly complicated and I have no idea how it can be done. I also thought about rendering a single-color pixel to an offscreen buffer and then checking its final position, but that would be quite inefficient.

I was wondering if there is a standard way that OpenGL provides for doing that.
My language is C++ with Qt though native OpenGL code is just fine.

Check this out. It’s not that hard.

I read the topic but it’s too complicated and not hardware accelerated. Is there any native OpenGL way to do this?

It’s not complicated and it’s not a performance issue. At most, you need to calculate the projection matrix once per frame - that’s nothing compared to what happens afterwards in the frame. The same goes for the view matrix. If you don’t get the math, read it again and again until you do. This stuff is essential. Dismissing it as “too complicated” just makes you seem lazy, that’s all. And do you really think calculating a few matrices and transforming a single point is even worth worrying about, performance-wise?
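To put the cost in perspective, building the projection matrix really is just a handful of multiplications. Here is a minimal sketch of the standard perspective projection - the same math gluPerspective performs - assuming OpenGL’s column-major layout (`perspective` is just an illustrative helper name, not an OpenGL call):

```cpp
#include <cmath>

// Build a perspective projection matrix (column-major, OpenGL layout).
// Same math as gluPerspective: fovy in degrees, aspect = width / height.
void perspective(float fovyDeg, float aspect, float zNear, float zFar,
                 float out[16])
{
    // f = cot(fovy / 2); convert degrees to radians first.
    const float f = 1.0f / std::tan(fovyDeg * 3.14159265358979f / 360.0f);

    for (int i = 0; i < 16; ++i) out[i] = 0.0f;

    out[0]  = f / aspect;                        // x scale
    out[5]  = f;                                 // y scale
    out[10] = (zFar + zNear) / (zNear - zFar);   // depth remap
    out[11] = -1.0f;                             // makes clip w = -z_eye
    out[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}
```

That’s roughly a dozen arithmetic operations per frame - effectively free.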

Legacy matrix operations aren’t hardware-accelerated either. What is done in hardware, of course, are matrix operations performed in a shader - such as multiplying the model, view and projection matrices together and transforming vertices by the result.

You can use glGetFloatv to retrieve the current modelview and projection matrices and apply them yourself. You’ll still need to do the matrix multiplication, but at least it saves you from having to construct the matrices by hand, and it ensures that what you’re using is the same as what OpenGL is actually rendering with.
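As a sketch, this is the same transform gluProject performs, written out by hand. The matrices would come from `glGetFloatv(GL_MODELVIEW_MATRIX, model)`, `glGetFloatv(GL_PROJECTION_MATRIX, proj)` and `glGetIntegerv(GL_VIEWPORT, viewport)`; `projectPoint` itself is just an illustrative helper name:

```cpp
// Transform an object-space point to window coordinates.
// model/proj are column-major 4x4 matrices (OpenGL convention).
// Returns false if the point's clip-space w is zero (degenerate).
bool projectPoint(const float model[16], const float proj[16],
                  const int viewport[4],
                  float x, float y, float z,
                  float& winX, float& winY, float& winZ)
{
    const float obj[4] = { x, y, z, 1.0f };
    float eye[4], clip[4];

    // eye = modelview * obj  (column-major: column c starts at index 4*c)
    for (int i = 0; i < 4; ++i)
        eye[i] = model[i]     * obj[0] + model[4 + i] * obj[1]
               + model[8 + i] * obj[2] + model[12 + i] * obj[3];

    // clip = projection * eye
    for (int i = 0; i < 4; ++i)
        clip[i] = proj[i]     * eye[0] + proj[4 + i] * eye[1]
                + proj[8 + i] * eye[2] + proj[12 + i] * eye[3];

    if (clip[3] == 0.0f) return false;

    // Perspective divide -> normalized device coordinates in [-1, 1].
    const float ndcX = clip[0] / clip[3];
    const float ndcY = clip[1] / clip[3];
    const float ndcZ = clip[2] / clip[3];

    // Viewport transform -> window coordinates (origin bottom-left).
    winX = viewport[0] + (ndcX * 0.5f + 0.5f) * viewport[2];
    winY = viewport[1] + (ndcY * 0.5f + 0.5f) * viewport[3];
    winZ = ndcZ * 0.5f + 0.5f;   // depth in [0, 1] (default depth range)
    return true;
}
```

Note that window coordinates here have their origin at the bottom-left, as OpenGL defines them; if you need Qt widget coordinates you’d flip y against the viewport height.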

Note the following.

This uses the old matrix stack functionality that is deprecated in modern OpenGL. It therefore depends on your program also using the matrix stack; if you don’t (or if you ever plan to move away from it), you’ll need to find an alternative.

glGet calls are normally considered expensive because they can incur a readback from the hardware. That means the entire pipeline has to stall and drain in order to bring things up to date, and that can completely wipe out performance. In the case of the matrix stack, though, see the next point.

The OpenGL matrix stack is not hardware-accelerated either. OK, maybe on the ancient SGI hardware that OpenGL was originally implemented on it was, but on modern consumer-level hardware - nope, it’s not. Instead the driver does the calculations in software, setting dirty bits if anything needs updating on the hardware, and - when a draw call comes along - re-uploads whatever has a dirty bit set. So, in relation to those glGet calls, you’re not actually doing a readback from hardware; you’re reading a copy the driver keeps in system memory. That means this particular glGet should not require a stall. All of that, however, is implementation-dependent behaviour and should not be relied on.

Here is a simpler version: Vertex Transformation - OpenGL Wiki

Hardware accelerated? Huh?