# Roll, pitch, yaw values for outside-world-scene (OWS) depiction

I am getting roll, pitch, and yaw as body-axis Euler angles from a simulator engine for a flying aircraft. I need to use these values to depict that rotation by applying it to the outside-world scene as seen by the pilot. What I can't figure out is whether I should use these angles directly in the `gluLookAt()` function, or apply some transformations first?

It depends. Just think it through.

Is this “flying aircraft” your eyepoint? So you want to render the world from its perspective?

`gluLookAt()` is typically used to compute the VIEWING transform for your eyepoint. This transforms objects from WORLD-SPACE to EYE-SPACE (sometimes called VIEW-SPACE) for rendering. To call `gluLookAt()`, you need to provide:

1. The location of the eyepoint (in WORLD-SPACE),
2. The location of a point in front of the eyepoint (in WORLD-SPACE), and
3. A vector pointing generally “up” for the eyepoint (in WORLD-SPACE)

So as you can see, WORLD-SPACE positions and vectors are provided as input.
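To make those three inputs concrete: `gluLookAt()` wants a *point* in front of the eye, not a direction, so if you have the eye position and a unit forward direction (both in WORLD-SPACE), the “center” point is simply eye + forward. A minimal sketch (the `Vec3` type and names are illustrative, not from any particular library):

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* gluLookAt() takes a point in front of the eye, not a direction.
 * Given the eye position and a unit forward direction (both in
 * WORLD-SPACE), the "center" point is just eye + forward. */
Vec3 look_center(Vec3 eye, Vec3 forward)
{
    Vec3 c = { eye.x + forward.x, eye.y + forward.y, eye.z + forward.z };
    return c;
}

/* Then, in your display function (illustrative):
 *   Vec3 c = look_center(eye, fwd);
 *   gluLookAt(eye.x, eye.y, eye.z,  c.x, c.y, c.z,  up.x, up.y, up.z);
 */
```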

Roll, pitch, and yaw, on the other hand, are defined in some ground-based tangent frame, for instance East-North-Up (ENU). That’s what your yaw, pitch, and roll are relative to. Often, your WORLD-SPACE is not this ground-based tangent frame but some geocentric frame, such as Earth-centered, Earth-fixed (ECEF).

And of course roll, pitch, and yaw don’t tell you where the thing is (position) – just how it’s oriented in space (rotation).

So if you want to render the world from the perspective of this “flying aircraft”, you need to know:

• where the aircraft is (in WORLD-SPACE), ← not given above
• how it’s oriented (in WORLD-SPACE), ← roll, pitch, yaw transformed from ground-based space to WORLD-SPACE.

With that, you can compute the inputs for `gluLookAt()` to compute your VIEWING transform from the perspective of the “flying aircraft”.

Another way to think about the VIEWING transform…

The VIEWING transform is just the inverse MODELING transform for the eyepoint. So if you compute the MODELING transform for your “flying aircraft”, and invert it, you’ll have the VIEWING transform. No need to even call `gluLookAt()` or figure out how to compute its inputs.

Why does this work?

• MODELING transform = OBJECT-SPACE -to- WORLD-SPACE transform
• VIEWING transform = WORLD-SPACE -to- EYE-SPACE transform

If we’re talking about the eyepoint being the “object”, then…:

• MODELING transform = EYE-SPACE -to- WORLD-SPACE transform, so
• MODELING⁻¹ transform = WORLD-SPACE -to- EYE-SPACE transform

which is the VIEWING transform, by definition!
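Inverting a rigid MODELING transform is cheap, too – no general 4×4 inverse needed. If M = [R | t] with rotation R and translation t, then M⁻¹ = [Rᵀ | −Rᵀt]. A self-contained sketch (row-major storage here for readability; remember that OpenGL expects column-major when you load the result with `glLoadMatrixd()`):

```c
/* Invert a rigid-body (rotation + translation) 4x4 transform:
 * if M = [R | t], then M^-1 = [R^T | -R^T t].
 * Matrices are stored row-major here; transpose (or pass to
 * glLoadTransposeMatrixd) before handing to OpenGL. */
void invert_rigid(const double M[4][4], double V[4][4])
{
    int i, j;
    for (i = 0; i < 3; ++i)
        for (j = 0; j < 3; ++j)
            V[i][j] = M[j][i];                     /* upper-left 3x3: R^T */
    for (i = 0; i < 3; ++i)                        /* translation: -R^T t */
        V[i][3] = -(V[i][0]*M[0][3] + V[i][1]*M[1][3] + V[i][2]*M[2][3]);
    V[3][0] = V[3][1] = V[3][2] = 0.0;
    V[3][3] = 1.0;
}
```

So: build the aircraft’s MODELING transform from its position and orientation, run it through something like the above, and the result is your VIEWING transform directly.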