This should be simple to answer. Suppose I have a point drawn in 3D space at 0,0,0.
Now suppose I want to rotate the point by calling OpenGL's built-in rotation routines, which work in homogeneous coordinates. The only parameters I pass in are a flag for the axis and an angle of rotation. After the rotation the point has moved from 0,0,0 to a new location. So here is my dumb question.
If I need to perform collision detection and check how close the point is to the camera, then it seems I need to track the point's location. So how do I get the new point location after a rotation?
PS: My exact issue is that I have a cube that is made of 4 faces, and I want to highlight the face closest to the camera.
As far as I know, OpenGL only performs the rotation on the matrix stack, not on your data itself.
You may want to perform the rotation yourself: all you need is some trig. Then you could use the Pythagorean theorem (the distance formula) to find which face is closest.
There's probably another way; just wait it out, hopefully someone else will reply.
I'm confused about what you're trying to do. You have a point at 0,0,0 and you want to rotate it. How so? Rotating a point around 0,0,0 will always leave it at 0,0,0, since OpenGL rotates around the origin, so it is not moved to a new location.
Collision detection usually refers to two bodies in a system that are colliding. I believe what you want is mouse picking, where you want to find what's under your mouse, correct?
Mouse selection is a whole can of worms by itself, and there are a bunch of ways to handle it: color selection, OpenGL picking, or keeping an acceleration structure, building a ray from your camera, and intersecting it against the cube.
The Pythagorean theorem? Really? Could you explain that?
I think the question is “How to retrieve the position of a transformed point to perform collision detection.”
The answer is “You can’t!”
Well… you can, but it's not what you want.
And let me guess, the camera is inside the cube.
Basically, READING data back from the GPU is a slow operation (I have to repeat this every three posts), so once you send the data it's better to find another way to do your math.
You have a cube, and you know its vertices (because you know the dimensions and you are the one who sent the cube vertices to the GPU). You know the position of the cube in space (because you are the one who sent the position data). You know the camera position.
Just compute the transformation matrix of the cube from the angles and the translation.
Now you have two ways to go. Transform the cube's vertex positions into their final positions (world space) and then check the distance to the camera. Or transform the camera position into object space and compare the transformed camera position with the cube's vertices in local space.
Job done, have an ice cream.
If you have to do collision detection with a room, the work is even easier, because the room (usually) doesn't move, so you only have to compute the final camera position.
*This approach works with ONE CUBE. If you have a lot of complex shapes, you'd better organize them into a hierarchical structure and first check an approximation of the shape (a sphere or box), then the faces. For collision detection against a complex room you probably have to organize the individual faces into the hierarchy.
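The "check an approximation first" idea is usually a bounding-sphere broad phase: before testing any faces, reject pairs whose spheres can't possibly touch. A minimal sketch, with made-up names:

```c
#include <math.h>

/* Illustrative bounding sphere: center plus radius. */
typedef struct { float x, y, z, radius; } sphere;

/* Broad-phase test: two objects can only collide if their bounding
   spheres overlap. Compares squared distances to avoid the sqrt. */
static int spheres_overlap(sphere a, sphere b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
```

Only when `spheres_overlap` returns true do you pay for the expensive per-face tests.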
Yes, that's spot on. I should have said rotate a point [x,y,z] rather than [0,0,0]. Very good, I will have a look at what you have described. I have to say it puzzled me for a moment to think that my points had been lost with no way to get them back, but I knew there had to be a way, since just about any 3D game does it.