“World” coordinates usually means after transformation by the model matrix, before transformation by the view matrix, projection matrix, projective division, and viewport/depth transformations.
What I am trying to achieve is to use a coordinate system where 1 represents 1 pixel on the screen, so I can move my objects around with a simpler coordinate system than NDC (I'm not sure what 1 pixel is equal to in NDC). Then, when I need to pass a position to glm's translate function, I first convert it to NDC.
Is there some math operation I can apply to a vec3 holding a position in window coordinates (where 1 represents 1 pixel) to convert it to NDC, or do I have to use all the info you provided to do it manually? I was told by ChatGPT that if I multiply a vec3 containing window coordinates by the view and projection matrices I would get a vec3 in NDC, but I haven't tried it yet. I intend to use glm's perspective projection. Thanks a lot for your help.
Use glm::ortho(0.0f, float(width), 0.0f, float(height)) as the entire model-view-projection matrix, where width and height are the viewport width and height in pixels. (Note that glm::ortho's arguments should all be the same floating-point type.)
[quote=“L4BR4T, post:3, topic:111604”]
I was told by chatgpt that if i multiply a vec3 containing window coordinates by the view and projection matrix i would get a vec3 in NDC but i haven’t tried it yet.[/quote]
No, that converts world coordinates to NDC.
If you want to operate in pixels, you don’t want to be using a perspective projection. A perspective projection has a scale factor which varies with depth.
So if I'm using a perspective projection and I have a cube that I want to move on the screen, do I have to move it using NDC? I've noticed game engines let you move along every axis (including the z axis) by however many units you want, so there must be a way to move using a coordinate system other than NDC.
The reason I think using NDC for movement is not good is that I don't know how to move by the same number of units on all axes: the window is a rectangle, not a square, and the z axis is another concern. There must be a way to achieve something similar to what game engines use that isn't NDC.
If you’re using a perspective projection, you can’t move the cube by “1 pixel” because any motion in 3D space will cause the points nearest to the viewer to move by more than the points farther from the viewer (parallax).
The projection matrix is normally constructed to compensate for this. E.g. glm::perspective has an aspect parameter. Note that if the pixels aren’t square (e.g. if you choose a 16:9 video mode on a monitor which is physically 16:10 or vice versa), you have to decide whether to specify the aspect ratio in pixels or physical units. One will result in squares which are the same number of pixels high as wide, the other will result in squares which are physically square.
Hi L4BR4T,
It sounds like you are learning the complexities of real-time 3D graphics.
Let's see if we can help, but first we need more info from you:
(1) is your actual goal to move a 3D object around freely in the world space - a typical 3D virtual environment - like in a 3Dgame engine/editor?
(2) how do you want to move it - drag it with a mouse, enter a position in a dialog box or press some keys to move a single unit in some/any defined direction in the world?
(3) are you using a model-view-projection matrix already to draw the object on the screen?
(4) do you want to just get it done, or learn/understand how to do it? These days there are many functions (black boxes) available to do things for you, some in GLM; there are different levels at which to think about the problem.
(5) why is moving in pixels important to you?
Calculating object translations can be done in any coordinate system in theory; in practice there are a few common approaches. In general, the simplest and most efficient convention is to translate an object in world space by a 3D vector expressed in world space (use case permitting).
Thank you all for taking the time to reply to my post.
I will try to explain it better if I can:
If I have a 32 by 32 pixel texture that I need to apply to a square, it is easier to work with a unit where 1 represents 1 pixel on the screen, because otherwise the texture would not display correctly (it will be either smaller or larger depending on how a unit is defined). I want to be able to move a square holding a 32x32 texture with glm's translate, but I don't want to use it until I know how to move an object in world coordinates, because that is easier to work with than NDC. If I know how to correctly convert from world coordinates (e.g. 100 pixels right, 100 pixels up, and -100 pixels away from the viewer) to NDC, then I can move on to learning about other parts of OpenGL. I would like to learn how to calculate the conversion to NDC so I can achieve something similar to what game engines do. If anybody has encountered this problem before and knows how to solve it, I could use your help.
What I would also like to know is: what is the size from z=-1 to z=1 in screen or world coordinates? I know that x = -1 (left) to 1 (right) corresponds to -(windowWidth/2) to (windowWidth/2), and y = -1 (bottom) to 1 (top) corresponds to -(windowHeight/2) to (windowHeight/2). The reason I think it works like this is because of how I set glViewport(0, 0, windowWidth, windowHeight).
You appear to be conflating two separate problems. Let's break it down:
(1) Your statement suggests working with a unit where 1 represents 1 pixel on the screen to ensure the texture displays correctly. This is not necessary, because texture mapping and unit sizes are independent: the size of a unit in world coordinates does not need to correspond to screen pixels, and correct UV mapping ensures the texture fits regardless of unit size.
(2) 3D world coordinates are measured in a chosen unit system (e.g. meters, centimeters) and are not tied to screen pixels. Screen coordinates (pixels) come into play only after all transformations, when rendering the final image.
(3) Moving objects in world coordinates using glm::translate is standard practice and independent of how textures are mapped.
(4) Proper UV mapping ensures the 32x32 pixel texture fits the square correctly. UV coordinates range from 0 to 1 and map the texture onto the object.
(5) The size in world coordinates from z = -1 to z = 1 depends on the near and far planes defined in the projection matrix. For instance, NDC z = -1 (near plane) and z = 1 (far plane) may correspond to 0.1 and 1000 in eye-space depth, if those were the near/far values used when creating the projection matrix. Note that for a perspective projection this mapping is nonlinear: most of the NDC z range is used up close to the near plane.
So give it a bit more thought. I suggest you break your question down and post it as separate questions, e.g. (1) how to do UV mapping of the square, (2) how to translate the square in world space, or (3) maybe you want to calculate how many world-space units one pixel represents at any given time?
Another option would be to rephrase the question. For example, it is still not clear what the "something similar" effect is when you state "something similar to what game engines do". E.g. are you texturing a quad with a tree image that you can place somewhere in a 3D landscape? We do not know; it might help to tell us. Answering some of the other questions asked earlier wouldn't hurt either.
I just meant that in game engines you can move an object using values like x=100 to move 100 units to the right, y=100 to move up by 100 units, or z=-100 to move 100 units away from the viewer. These values are not supplied in NDC, because moving 100.0f units in normalised device coordinates would move much further than it would in another coordinate system. I wanted to figure out how to achieve something similar.