Hello, reader.
I have a little problem with how things are displayed when I use a first person camera to walk around a world. I use Lightwave 6.0 to build the world. I am trying to use real dimensions for the objects in the world, but when I render it, it looks different: the camera does not seem to project objects at their normal dimensions. For example, it looks like the camera can’t walk through a door with a width of 1.5m.
Is there a way to make the camera see like human eyes do? Has anyone had the same problem and solved it? Are there any special specifications for the camera’s angle of view? Please, if anyone knows a solution to my problem, I would appreciate it.
And forgive my terrible English.
First of all, what is the point in making the camera look like eyes if you can’t even see it? Second, gluLookAt(…) takes 9 parameters: the first three are the x-y-z of the camera position, the second three are the x-y-z of the point that the camera is looking at, and the last three are always 0,1,0.
I do not think that he wants to make the cam look like an eye; he wants the FOV to be like that of a human eye. You need to look at how you set up the projection matrix. If you did that manually, then the FOV can look odd.
Hope that helped.
Originally posted by JLawson: First of all, what is the point in making the camera look like eyes if you can’t even see it? Second, gluLookAt(…) takes 9 parameters: the first three are the x-y-z of the camera position, the second three are the x-y-z of the point that the camera is looking at, and the last three are always 0,1,0.
The last three are not always 0,1,0. They form the up vector, which states how the camera is tilted sideways.
For example, you can have the camera situated at world position 1,0,0, looking at world position 0,0,0, and tilted sideways.
I am sorry for not answering your replies, but I had an exam these days and didn’t have the time.
I think Validus has got the point.
The reshape func he suggests is the one I use. I think the problem is in gluPerspective, at 45 degrees (the first parameter). I don’t think gluLookAt is something that will solve the problem.
I wanted to ask if there is a difference between gluPerspective() and glFrustum(). I haven’t found how to use glFrustum anywhere (neither in online tuts nor in books). If anyone knows how to use this func and whether it has something to do with my problem, please inform me.
The whole problem is that things with real dimensions don’t look normal in OpenGL. Further to my first message, I have to say that I use 1.8 units (1 OGL unit = 1 m) for the camera’s height, a value that seems realistic for a person’s height.
Also, I found that the textures’ dimensions create an illusion of how far from the ground the camera is.
Thanks for your replies, and if there is someone who hasn’t understood the problem and wants to, please feel free to e-mail me and I will send a demo of my world so you can check the problem yourself.
Sorry for my terrible English (I have to say this because I don’t speak it very well).
I can’t reply to your problem because I don’t understand what exactly it is.
But I can try to help you understand the glFrustum function (which is actually very simple).
glFrustum takes as parameters the dimensions of the so-called frustum box, i.e. the box, in the camera’s frame of reference, which is the space seen on the screen.
First, let’s forget everything about the z-buffer. The flat 2D window on which objects are projected defines a region of space delimited by the 4 planes that contain the camera and each border of the rectangle (the “screen”). This shape is an infinite pyramid. Every object inside this pyramid will end up on the screen; every point outside would be projected outside, and thus won’t be visible (thus, won’t be projected).
The z-buffer forces us to delimit this pyramid so that it’s no longer infinite. We have a zmin and a zmax plane, two planes cutting the pyramid so that it’s now a kind of box growing with z. This is the frustum box: everything inside will be projected, everything outside won’t.
glFrustum takes the dimensions of this box: the left, right, bottom and top bounds, all measured on the near plane, plus the distances of the two cutting planes (near and far).
OpenGL then computes the projection matrix from these parameters.
Check out the tutorials by Nate Robins. I find them very useful. They should help you understand a lot of key concepts like the difference (similarities) between glFrustum and gluPerspective. Here’s the link: http://www.xmission.com/~nate/tutors.html
Also, a big, big problem is when the viewport does not have the same aspect ratio as your viewing volume. This will really distort your images. Try using a square window and gluPerspective w/ aspect ratio of 1.
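As a rough illustration of that distortion (my own helper, not a GL call): the horizontal stretch you get is just the ratio of the viewport’s aspect to the projection’s aspect.

```c
#include <math.h>
#include <assert.h>

/* A square drawn with a projection of aspect proj_aspect into a
   viewport of vp_w x vp_h pixels appears stretched horizontally
   by this factor (1.0 means no distortion). */
double aspect_distortion(int vp_w, int vp_h, double proj_aspect)
{
    return ((double)vp_w / (double)vp_h) / proj_aspect;
}
```

The usual way to keep this at 1.0 is to pass (double)w/(double)h as gluPerspective’s aspect parameter inside the reshape callback, right after the glViewport call.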
Thanks a lot guys.
I appreciate your replies.
The problem is simple, but maybe I’m not explaining it the right way.
To understand the problem, make a room with dimensions in meters (real dims), place your camera 1.8 - 2.0 m above the ground (real human height) and make the camera’s step 0.8m per key press (a real human step). Also make the textures for the floor and the ceiling look real. For example, if your texture displays four (4) floor tiles, make it cover 0.8m of the floor (0.4m can be a real tile dimension).
You will notice everything looks normal (in a general way). Now place a door to the outside with a height of 2.1 - 2.5m and a width of 0.6 - 0.8m. Render your scene and move close to the door.
Now things don’t look real. It seems like the camera can’t fit through the door’s width. The problem is that you can see MANY things to the left and right. Imagine yourself in front of that door. Can you see all of the things your scene renders from your camera’s point of view? I think the answer is NO.
That’s the problem guys.
My question is: how can I make the camera render things as real human eyes would see them?
[This message has been edited by HellRaiZer (edited 12-07-2001).]