Object Looks at Cursor

So I want to rotate a 2D object so that it is always “looking” at the cursor.

I am using glad, GLFW, GLM, and SDL2 (for mouse capture).

I have been trying this for quite a while now and I am definitely stuck.

After some time I got the impression that the best idea would be to define an "Obj.LookatVector", which is pretty much just a vector from the center of the object to the cursor.
And whenever the cursor is moved, I'd calculate the angle between the first and the second look-at vector.

In other words, I'd imagine a line drawn from the center of my object to the cursor, and the object simply follows that line.

My current code snippet looks like this:

    centerX = PlayerPos.x + PlayerSize.x/2;
    centerY = PlayerPos.y + PlayerSize.y/2;
    xoffset = xmouse - centerX;
    yoffset = ymouse - centerY;
    MousePointer = glm::vec2(xoffset, yoffset);

    angletemp = glm::atan(MousePointer.y/ MousePointer.x) - glm::atan(ObjLookAt.y/ObjLookAt.x);

    if(angletemp > 3.14159)
        angletemp -= 2*3.14159;
    if(angletemp < -3.14159)
        angletemp += 2*3.14159;

    angletemp = glm::degrees(angletemp);
    angle += angletemp;

    Triangle->update(PlayerPos.x, PlayerPos.y, angle);
    ObjLookAt = MousePointer;

MousePointer is the vector from the center to the cursor.
ObjLookAt is initialised as (centerX, centerY-10) simply to simulate a “lookat” vector at the start.
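(One subtlety in the snippet above: single-argument atan(y/x) cannot distinguish a direction from its opposite, which is a classic source of "funny" rotations. The two-argument arctangent avoids that. A minimal sketch with an illustrative helper, using std::atan2; GLM's two-argument glm::atan overload is equivalent:)

```cpp
#include <cmath>

// Sketch: absolute angle of the center->cursor vector, in radians.
// std::atan2 handles all four quadrants, unlike atan(y/x), and returns
// a value in (-pi, pi], so no manual wrapping is needed.
float aimAngle(float centerX, float centerY, float xmouse, float ymouse)
{
    return std::atan2(ymouse - centerY, xmouse - centerX);
}
```

Because this gives an absolute angle each frame, there would be no need to accumulate deltas between successive look-at vectors at all.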

This is my matrix code; I pass the angle to rotate by into the rotation matrix in degrees.

    model = glm::translate(model, glm::vec3(position, 0.0f));                           

    model = glm::translate(model, glm::vec3(0.5f * size.x, 0.5f * size.y, 0.0f));       
    model = glm::rotate(model, rotate, glm::vec3(0.0f, 0.0f, 1.0f));                   
    model = glm::translate(model, glm::vec3(-0.5f * size.x, -0.5f * size.y, 0.0f));     

    model = glm::scale(model, glm::vec3(size, 1.0f));                                  

So what I basically do is scale my object to its size, translate it to its center, rotate it around the z axis (as it's 2D), bring it back to its origin, and draw it at its position.

However, I cannot calculate the right angle.
My object rotates in very funny ways with whatever method I have tried so far (and I tried a lot), but not in the way I want.

I know this is probably an easy thing, but as far as I have seen on the web, most questions revolve around moving objects in 3D or pushing them around by clicking on them, etc.

So far I could not find any solution that tackles my problem (or I am simply not able to mentally adapt one to it)…

Any suggestions/help?

You can do this more simply without trig functions or angles.

Back away and think about the problem this way for a second. You have a 2D object. In its own OBJECT-SPACE, it’s located at the origin and oriented along the coordinate axes. In this coordinate frame, it’s already oriented to face in one of the coordinate directions (let’s say the -Z axis, with +X right, and +Y up).

So what are you trying to do? You want to generate a MODELING transform to place this 2D object in WORLD-SPACE. MODELING transforms take you from OBJECT-SPACE -to- WORLD-SPACE. Right?

So what’s that MODELING transform need to do? This is composed of a rotation and a translation. The rotation orients the object from OBJECT-SPACE -to- WORLD-SPACE. And the translate just shifts the object into the right location.

First the rotation. The easy way to build a rotation is to generate an orthonormal frame of reference (i.e. 3 mutually-perpendicular unit vectors): 1) right, 2) up, and 3) back for that rotation. How do we get these?

a) You said you want to orient the object to face a point (call this the “lookat” point) from some other point in space (call this the “object location” point), right? Subtract those two points and normalize the result, and that’s your “forward” vector for your object. Its negation is the “back” vector for your object.

b) Now you’ve got some “generally-up” direction in space you’d like your object’s up vector to align with, right? Cross the “generally-up” vector with the “back” vector, and normalize the result. This is your “right” vector for your object.

c) Finally, cross your “back” vector with your “right” vector and now you’ve got your real “up” vector for your object.

Take these 3 vectors (right, up, back), slap them into the upper-left 3x3 of a 4x4 matrix, and that’s the rotation component of your MODELING transform.

So what’s the translation component of your MODELING transform? That’s easy: you’ve already set that. It’s your “object location” point.

You’re done!

If you want some code to look at, take a look at this: gluLookAt code (GL wiki). Don’t pay too close attention to where it’s stuffing the values into the matrix though, as this is actually generating a VIEWING transform (aka “Inverse” MODELING transform for the camera), so the way it’s populating the matrix with (right,up,back) is transposed from what you’ll be doing.

Thanks a lot! This way it sounds much more intuitive now… I’m feeling a little bit stupid, especially because I already did the same thing with a camera, but it didn’t come to mind to adapt that method to an object. But I guess that is how you learn stuff.


No, don’t feel that way. It’s something I had pointed out to me by a book I read and probably a teacher or two.

Just going forward, keep in mind that you can nearly always build whatever coordinate transforms you need with just intuitive vector algebra (cross and dot products, etc.). That is, without messing with any nasty Euler angles or trig functions, which is good because those can lead to ugly range and singularity problems, precision issues, gimbal lock, and higher computational cost due to the trig functions.

Ok… I tried it today and I guess I have one more question.

I calculated the “Object Coord. System” (Back, Right, Up) as you said and plugged it into a matrix as:
Right.x, Right.y, Right.z, 0
Back.x, Back.y, Back.z, 0
Up.x, Up.y, Up.z, 1

Which initiates as an Identity Matrix in the first Render with
Right.x = 1
Back.y = 1
Up.z = 1

So my object is simply rotated “towards” the top of my screen (the physical one, meaning it is “looking” towards my actual ceiling).

I then wrote a “Rotation” function that recalculates the coordinate system relative to the mouse cursor, so that the vector “Front” always points from the center of the object towards the mouse.

TUp is a general up vector (0.0f, 0.0f, -1.0f); as I am in 2D with a bird's-eye perspective, there is essentially no object “up” (right?).

    Front = -Back;
    centerX = PlayerPos.x + PlayerSize.x/2;
    centerY = PlayerPos.y + PlayerSize.y/2;
    xPos = xmouse-centerX;
    yPos = ymouse-centerY;

    Front.x = xPos ;
    Front.y = yPos ;
    Front = glm::normalize(Front);
    Back = -Front;

    Back = glm::normalize(Back);

    Right = glm::cross(Back, TUp);
    Right = glm::normalize(Right);

    Up = glm::cross(Back, Right);
    Up = glm::normalize(Up);

After that I put the vectors into a matrix I called Rotation for the sake of simplicity, ordered as written above, and plug it into my computation.
However, I am a little baffled about where to put it.

    glm::mat4 model;
    glm::mat4 Rotation;

    Rotation[0] = glm::vec4(Right, 0.0f);
    Rotation[1] = glm::vec4(Back, 0.0f);
    Rotation[2] = glm::vec4(Up, 0.0f);
    Rotation[3] = glm::vec4(0.0f,0.0f,0.0f,1.0f);

    model = glm::translate(model, glm::vec3(position, 0.0f)); 

    model = glm::scale(model, glm::vec3(size, 1.0f));

    this->shader.SetMatrix4("model", model);

Intuitively I’d just set model = Rotation, as the matrix calculated above would be THE object matrix all other computations are based on? Or am I wrong here?

In any case the resulting behaviour is just crazy, which leads me to the conclusion that I didn’t get it at all…

As far as I understand it, OpenGL takes the vertex position, applies the model transform, converts it to my viewpoint, and then plugs it into the projection?
So the first thing done is to create the model, do all the stuff with the model that needs to be done, set it into the world, do all the stuff that needs to be done in the world, and then serve me my wonderful creation in the right perspective.
Any errors here?

My vertex shader calculates the position as follows:

    Position = Projection * View * model * vec4(Pos.xy, 0.0f, 1.0f);

Projection is glm::ortho and View is simply a little camera position that is centred above my object (so that it follows it through the dark space of the beginner OpenGL window background).

So besides the fact that I am clueless where to put my matrix in the matrix calculations in the C++ rendering code, I am curious whether I need to change the vertex data?
I guess the answer is no, as all translation/transformation of the object should happen within the shader / through matrices, but as everything behaves strangely, some doubts came to my mind.

So the question:
Do I alter the object coordinate system and do my calculations on it, or
do I simply multiply it into my model identity matrix, and if so, before or after the scaling?

Do I alter the vertex data at all (meaning the hard-coded vertex array) or does it all happen through the matrices?

Thanks in advance!

[QUOTE=Labidusa;1288467]I calculated the “Object Coord. System” (Back, Right, Up) as you said and plugged it into a matrix as:
Right.x, Right.y, Right.z, 0
Back.x, Back.y, Back.z, 0
Up.x, Up.y, Up.z, 1[/quote]

Right = X
Up = Y
Back = Z

Start by swapping Back and Up above and see where that leaves you.

Well I switched Up and Back so the matrix is now:
(X) Right.x;Right.y;Right.z;0
(Y) Up.x;Up.y;Up.z.0
(Z) Back.x;Back.y;Back.z;0

however, it still behaves weirdly as soon as the cursor comes in.
Especially, my Right vector now gets a z-value, being the cross product of Back (z) and Up (y)?
Actually, now my object simply disappears.

Am I doing it the right way?
I set initial vectors:
Right (1;0;0)
Then I get the mouse position, subtract the object’s center from it, update my target vector (“Front”) with the resulting X & Y coordinates, and negate it to get the Back vector.
Then I do the calcs as you have written above.
But setting Back as the Z vector makes the whole calculation kind of strange, doesn’t it?
Right x Back would not lead to an Up vector that describes the Y axis on the screen, as it should in the matrix now?

Or in other words:
My screen has right as X and then Y (as I am in flat 2D). So intuitively, movement and target are on the Y axis rather than the up axis, aren’t they?

I feel like I am overlooking something damn obvious here, but I cannot find it…

Just to be clear about this:
I want an object, e.g. a triangle, to always point with its “top” towards my cursor when I move it around the screen.

I think there’s a disconnect here somewhere. Draw a diagram with orthonormal right, up, and back vectors as a coordinate frame. Using the right-hand rule, cross( right, up ) == back. cross( up, back ) == right. cross( back, right ) == up. Got it?

Above it sounds like you’re saying cross( back, up ) == right (??). It doesn’t. cross( up, back ) == right. cross( back, up ) == -right == left.

Setting Back as the Z vector makes the whole calculation kind of strange, doesn’t it?
Right x Back would not lead to an Up vector that describes the Y axis on the screen, as it should in the matrix now?

I’m sort of confused by this.

Let’s try this again, with a bit more preciseness and an example:

------------           -----------
     +Y                     up
     |                      |          
     |                      |          
     |                      |          
     |______ +X             |______ right
    /                      /           
   /                      /            
  /                      /             
+Z                     back

  Maps +X -> right
       +Y -> up
       +Z -> back


- +X, +Y, +Z are OBJECT-SPACE axis vectors.
- (right, up, back) are the WORLD-SPACE vectors to which they map. Call them (+U, +V, +W) if that works better for you.

Now let’s do an example:

right = (0,0,1)
up = (0,1,0)
back = (-1,0,0)

or graphically:

------------           -----------
     +Y                     +Y
     |                      |          
     |                      |          
     |                      |          
     |______ +X             |______ +Z
    /                      /           
   /                      /            
  /                      /             
+Z                     -X

Now, build your matrix:

    [  0 0 1 ]
M = [  0 1 0 ]
    [ -1 0 0 ]

If you write your math like this: v1 * M = v2, then you can see that

v1 = +X = (1,0,0) --> v2 = (0,0,1)
v1 = +Y = (0,1,0) --> v2 = (0,1,0)
v1 = +Z = (0,0,1) --> v2 = (-1,0,0)

which is exactly the mapping we want.

Now if you write your math like this: M * v1 = v2, then the M you want is transposed from the one shown above.

So if you’re wondering whether you’ve got the right matrix, just pipe the +X, +Y, and +Z OBJECT-SPACE vectors through the transform and make sure they end up correct in WORLD-SPACE.

First off: thank you very, very much for the kind help and the time you are putting into this!

Okay, so I did this in Excel as it seems there is a problem with my math(understanding)…

I set my cursor position to x: 430, y: 300; those are the coordinates I’d receive from SDL2.
My object is by default rendered in the middle of the screen; its top left is at x: 400, y: 300.
Its size is 12x12 pixels, so the center coordinates are 406/306.

This gives me the following:

                        x    y   z
    Cursor             430  300  0
    Center             406  306  0
    Object Front (-Z)   24   -6  0
    Object Back (+Z)   -24    6  0

Then I normalize the resulting object Back vector (Z).
Generally-Up (0,1,0) x Object-Back (Z) -> Object-Right (X), and normalize it.
Object-Back x Object-Right -> Object-Up (Y), and normalize it.

Which leaves me with the following object vectors, referring to +X, +Y, +Z if I did not get that wrong:

    Name                 x      y     z
    Object Right (X)     0      0     1
    Object Up (Y)      ~0.24  ~0.97   0
    Object Back (Z)   ~-0.97  ~0.24   0

Now I create a matrix as you said,
to “twist” my object vectors into world-space vectors by v1*M = v2.

Plugging the world coordinates into a matrix leaves me with the following model matrix:

    Model         x       y      z     w
    WorldRight    1       0      0     0
    WorldUp       0     ~0.97  ~0.24   0
    WorldBack     0    ~-0.24  ~0.97   0
                  0       0      0     1

I have a scale matrix S to scale the object to 12x12 px and a translation matrix T to render it initially in the middle of the screen (400; 300).

I create an identity matrix I. For simplicity I call the model matrix above “model” and the matrix I create “TEMP”.
I write it in the order I multiply it, knowing (assuming?) that in code it has to be the reverse direction (last multiplied, first “coded”).

Multiplied from right to left:
Scaling: TEMP = S * I;
Rotating: TEMP = model * TEMP;
Transform: TEMP = T * TEMP;
(In the shader it would go like this: Position = Projection * View * TEMP * vec4(vertex.xy, 0.0f, 1.0f) )

The resulting TEMP matrix draws my triangle with z coordinates into the world screen. As I am in a bird's-eye perspective in 2D, that results in a rather strange picture:

    Manipulated vertices     x      y       z     w
    Top Left                400    300      0     1
    Bottom Left             400  ~311.64  -2.91   1
    Bottom Right            412  ~311.64 ~-2.91   1
    Top Right               412    300      0     1

So even though it “rotates” in 3D, from a 2D orthogonal bird's-eye perspective it just looks awkward.
What I want to achieve, for example: I move my cursor exactly 90° from the original view direction (towards x: 400 | y: 0) to x: 430 | y: 306.

Right now that leads the matrix to just draw my triangle as it was in the beginning, with the top left at 400|300.
However, that should now be the bottom left, while the top left should be at 412|300; everything should be shifted one position clockwise:

    Default         x    y
    Top Left       400  300
    Bottom Left    400  312
    Bottom Right   412  312
    Top Right      412  300

With the cursor at 430|306 == 90° from the center:

    90° turn        x    y
    Top Left       412  300
    Bottom Left    400  300
    Bottom Right   400  312
    Top Right      412  312

Does that work with the local coordinate system, but without calculating “fixed” angles?