I am an OpenGL (ES) beginner working on an app which uses a sort of head tracking to display an anamorphic view of my GL world so that it appears to be 3D…

For example, let's say I have a rectangular fish tank in front of the camera so that I can see the top and two sides. However, what I want to display is the rectangular top opening of the tank correctly filling my screen, so that when I look at the screen from the side it appears to have depth…

Understand? I want my screen to be the top opening of the fish tank, but with the contents (fish) displayed as if the camera were in front of the fish tank…

You need head tracked projection on a fixed screen.

You measure where the eye is. You use the known geometry of the screen and the position of the eye to:

1. define your viewing frustum in the projection matrix;

2. position the eye in the modelview matrix.

The view vector remains orthogonal to the projection plane;
you DO NOT rotate the eye.

If you have stereo it still works; the results of 1 and 2 will be slightly different for each eye.

People totally screw this up in all sorts of creative ways, but all correct 3D projection boils down to this, and it matters most when doing stereo. It is almost never set up correctly for non-stereo viewing.

With head tracking you merely update the inputs to 1 and 2 each frame using the tracking information (and the eye-position offset as applied to the tracked frame).

P.S. Anamorphic is possible, but it's merely an aspect-ratio/viewport issue applied to the projection calculation.

In this case your viewport fills the screen exactly and the projection geometry matches the screen exactly, so you're done. If the geometry leads to an anamorphic projection, it will just happen.
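For what it's worth, steps 1 and 2 can be sketched in plain Java. This is a minimal sketch, not code from this thread: the screen dimensions, the eye coordinates and the `compute` helper are illustrative assumptions. It uses similar triangles to scale the screen rectangle (as seen from the eye) back to the near plane, which gives the asymmetric frustum bounds.

```java
// Hedged sketch: compute an asymmetric (off-axis) frustum from a tracked
// eye position, assuming the physical screen is a screenW x screenH
// rectangle centred at the origin of tracking space, with the eye at
// (ex, ey, ez), ez > 0, measured in the same units.
public final class OffAxisFrustum {
    /** Returns {left, right, bottom, top} for glFrustum / Matrix.frustumM. */
    public static double[] compute(double screenW, double screenH,
                                   double ex, double ey, double ez,
                                   double near) {
        // Similar triangles: scale the screen edges (relative to the eye)
        // from the screen plane (distance ez) to the near plane.
        double s = near / ez;
        return new double[] {
            (-screenW / 2 - ex) * s,  // left
            ( screenW / 2 - ex) * s,  // right
            (-screenH / 2 - ey) * s,  // bottom
            ( screenH / 2 - ey) * s   // top
        };
    }

    public static void main(String[] args) {
        // Eye centred in front of the screen -> symmetric frustum.
        double[] f = compute(0.16, 0.10, 0.0, 0.0, 0.30, 0.1);
        System.out.printf("l=%.4f r=%.4f b=%.4f t=%.4f%n",
                          f[0], f[1], f[2], f[3]);
    }
}
```

The four values then go to glFrustum (or Matrix.frustumM on Android), and the modelview is translated by (-ex, -ey, -ez); the view direction itself stays perpendicular to the screen, as said above.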

I think anamorphic is the right word for what the final image looks like, but the correct terminology is “off-axis projection” from what I have read… here are some simple pics to explain further

This is the perspective of the fish tank (no fish) which I want

However, I really want my view/screen to completely (and correctly filled/skewed/rotated) display the top edge of the tank

There's an iPhone app called HoloToy which uses this effect, but from what I can see not exactly enough. I'm programming this for Android mobiles and just can't get the view to stick to the edges of the fish tank…

That is where the whole fake head tracking shows its limitations.

You will have to assume that the viewer is at a certain distance (somewhere around 30 centimetres) and convert polar theta, phi, radius to a relative x, y, z position.
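That conversion is just spherical-to-Cartesian. A minimal sketch, under the assumption that theta is measured from the screen normal (the z axis) and phi around it — the convention and the helper name are mine, not from this thread:

```java
// Hedged sketch of the theta/phi/radius -> x,y,z conversion mentioned
// above. Assumes theta is the angle away from the screen normal (z axis),
// phi the angle around that axis; both in radians, radius in metres.
public final class PolarToEye {
    public static double[] toCartesian(double theta, double phi, double radius) {
        double x = radius * Math.sin(theta) * Math.cos(phi);
        double y = radius * Math.sin(theta) * Math.sin(phi);
        double z = radius * Math.cos(theta); // distance along the screen normal
        return new double[] { x, y, z };
    }
}
```

With theta = phi = 0 this puts the assumed viewer straight in front of the screen at the chosen distance.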


@ZbuffeR - I'm not really tracking the head; I said it's a sort of head tracking. It's a mobile device with a gyroscope giving me pitch, roll and yaw. At the moment I'm rotating the camera point with the pitch/roll/yaw, then translating the 3D point to 2D and measuring the XY difference:

// gyro angles (degrees) -> radians
final double rX = Math.toRadians(roll);
final double rY = Math.toRadians(pitch);
final double rZ = Math.toRadians(yaw);
// point to rotate, 10 units in front of the camera
final double x = 0, y = 0, z = 10;
// clip planes, declared here so the snippet compiles on its own
final double nearZ = 1, farZ = 100;
double dx = 0, dy = 0, dz = 0;
// --> rotate around y-axis
final double tempX = (x * Math.cos(rY)) - (z * Math.sin(rY));
final double tempZ = (x * Math.sin(rY)) + (z * Math.cos(rY));
// --> rotate around x-axis
dz = (tempZ * Math.cos(rX)) - (y * Math.sin(rX));
final double tempY = (tempZ * Math.sin(rX)) + (y * Math.cos(rX));
// --> rotate around z-axis
dx = (tempX * Math.cos(rZ)) + (tempY * Math.sin(rZ));
dy = (tempY * Math.cos(rZ)) - (tempX * Math.sin(rZ));
// weak-perspective projection of the rotated point back to 2D
final double focalLength = (farZ - nearZ);
final double scale = focalLength / (focalLength + dz);
final float[] xy = new float[2];
xy[0] = (float) (dx * scale);
xy[1] = (float) (dy * scale);

It's not working exactly as I planned. I think I'm measuring the focalLength wrong, but it's not just that; any tips or sources for better information would be appreciated…

The relationship of the viewer to the screen is fully 3D when the head is tracked.

These are then used as modelview eye offsets in eye-space in the virtual world when flying around.
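A minimal sketch of deriving such an eye offset from the gyro angles, instead of rotating a point and projecting it to 2D. The small-angle mapping, the 0.30 m viewing distance and the helper name are my own assumptions, not tested code from this thread:

```java
// Hedged sketch: treat the device tilt as moving an assumed eye position
// (viewDist metres out along the screen normal), then feed the result
// straight into an off-axis frustum and modelview translation.
public final class FakeEye {
    /** Returns {ex, ey, ez} in the same units as viewDist. */
    public static double[] eyeFromTilt(double pitchDeg, double rollDeg,
                                       double viewDist) {
        double p = Math.toRadians(pitchDeg);
        double r = Math.toRadians(rollDeg);
        double ex = viewDist * Math.sin(r);               // tilt right -> eye right
        double ey = viewDist * Math.sin(p);               // tilt up -> eye up
        double ez = viewDist * Math.cos(r) * Math.cos(p); // stays positive for small tilts
        return new double[] { ex, ey, ez };
    }
}
```

With zero tilt this degenerates to an eye sitting viewDist straight in front of the screen, i.e. an ordinary symmetric frustum.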

That page also shows the “toed-in” stereo, which is bullshit. Only an asymmetric frustum is correct for the vast majority of setups, which thankfully it also shows. There's not nearly enough emphasis on the fact that one is complete bollocks and the other is correct.
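A hedged sketch of what the asymmetric-frustum version means in practice: each eye is shifted half the interocular distance sideways and gets its own sheared frustum; neither eye is rotated. The names and the numeric values below are illustrative assumptions.

```java
// Hedged sketch of asymmetric-frustum stereo: only the horizontal
// frustum bounds change between the eyes, the view direction does not.
public final class StereoFrustums {
    /** Returns {left, right} for an eye at (eyeX, eyeZ) in front of a
     *  screenW-wide screen centred on x = 0. */
    public static double[] frustumLR(double screenW, double eyeX,
                                     double eyeZ, double near) {
        double s = near / eyeZ;  // similar triangles, screen plane -> near plane
        return new double[] { (-screenW / 2 - eyeX) * s,
                              ( screenW / 2 - eyeX) * s };
    }

    public static void main(String[] args) {
        double ipd = 0.064, dist = 0.6, near = 0.1, w = 0.5;
        double[] leftEye  = frustumLR(w, -ipd / 2, dist, near);
        double[] rightEye = frustumLR(w,  ipd / 2, dist, near);
        // The two frustums are sheared in opposite directions, but both
        // eyes keep the same unrotated view direction ("toed-in" would
        // rotate them instead, which is the wrong version).
        System.out.printf("L: %.4f..%.4f  R: %.4f..%.4f%n",
                          leftEye[0], leftEye[1], rightEye[0], rightEye[1]);
    }
}
```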

Hi, I'm trying to follow your most interesting discussion.
It seems to me, however, that there's an error in the sample code by GaryD.
The frustum should be given left, right, bottom, top, whereas here top and bottom are inverted:
:
frustum[2] = top + fsY;
frustum[3] = -top + fsY;
:
Might save someone sleepless nights to correct this
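For reference, a corrected version under the assumption that the array really is consumed as {left, right, bottom, top}; fsY stands for whatever vertical shift the original code computes, and the wrapping helper is illustrative only:

```java
// Hedged sketch of the corrected ordering: bottom must come out below
// top once the vertical shift fsY is applied.
public final class FrustumFix {
    /** Returns {left, right, bottom, top} with vertical shift fsY. */
    public static double[] verticalShift(double left, double right,
                                         double top, double fsY) {
        return new double[] {
            left,
            right,
            -top + fsY,  // bottom (was frustum[2] = top + fsY)
             top + fsY   // top    (was frustum[3] = -top + fsY)
        };
    }
}
```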

Did you manage to correct this? I have started to play a bit with your code (it seems we're trying to do similar things), but I get weird “phenomena”.

Basically the problem is indeed tilting the projection plane, while the eye stays fixed. What I don’t quite understand is whether the Camera upVector needs to be rotated, too, and whether the camera direction should be tilted as well (I have found contradictory opinions on this latter issue, tried both myself, but certainly went wrong somewhere as neither worked as expected).