Problem with z-axis objects

Greetings all, still a newbie here and at OpenGL in general.

Anyway, my question is about a series of objects (GL_POINTS in this case) that have different Z-axis depths. I am trying to make a simple starfield that you can fly through and have the stars go past you. I am able to draw everything, and every star has a different random depth, but unfortunately when I fly into the starfield it seems that only the X and Y axes are being manipulated, and not the Z axis.

Here is my code:


void display(void)
{
	glClear(GL_COLOR_BUFFER_BIT);

	glBegin(GL_POINTS);
	for (i = 0; i < numStars; i++)   /* i and numStars declared elsewhere */
		glVertex3f(starX[i]-3, starY[i]-3, -starZ[i]);
	glEnd();

	glutSwapBuffers();
}

void keyboard(unsigned char key, int x, int y)
{
	switch(key)
	{
		/* fly in */
		case 'f':
			glScalef(1.01,1.01,1.01);
			break;
		/* fly out */
		case 'F':
			glScalef(0.7,0.7,0.7);
			break;
		/* rotators */
		case 'w':
			glRotatef(1.0,1.0,0.0,0.0);
			break;
		case 's':
			glRotatef(-1.0,1.0,0.0,0.0);
			break;
		case 'a':
			glRotatef(-1.0,0.0,1.0,0.0);
			break;
		case 'd':
			glRotatef(1.0,0.0,1.0,0.0);
			break;
	}
	display();
}

As I said above, it looks like only the X and Y are changed and not the Z. The starfield is very deep, but it only takes a few seconds to travel through the entire thing (which it shouldn’t). I was under the impression that this would work, but apparently I am missing something here.
On another note, I have glOrtho set to:


glOrtho(-10.0,10.0,-10.0,10.0,-100.0,10000000000.0);

So I know that it should be able to display all of the stars no matter where I am (so long as I did not do something totally wrong in glOrtho and am just not seeing it).

If anyone has any ideas as to what is wrong here, any help would be gladly appreciated.

I don’t think you’ll want to use an orthographic projection for that. Try gluPerspective(…).

In an orthographic projection, the image of your scene looks the same no matter how far you move the viewpoint along the viewing direction (apart from clipping, that is).
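
For illustration, here is a minimal sketch of what a perspective setup could look like in a GLUT reshape callback; the field of view and the near/far values are just placeholders, not taken from your code:

#include <GL/glut.h>

void reshape(int w, int h)
{
	if (h == 0) h = 1;                  /* avoid a zero aspect ratio */
	glViewport(0, 0, w, h);
	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	/* 60 degree vertical field of view; 1.0 and 2000.0 are placeholder
	   near/far distances (with gluPerspective, near must be positive) */
	gluPerspective(60.0, (GLdouble)w / (GLdouble)h, 1.0, 2000.0);
	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();
}

With a perspective projection in place, moving the camera (or the stars) along Z actually changes where the stars land on screen, which is what produces the fly-through effect.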

N.

I don’t think the point size depends on the depth of your points.

Never do that:
glOrtho(-10.0,10.0,-10.0,10.0,-100.0,10000000000.0);

http://www.it.jcu.edu.au/Subjects/cp2060/resources/bluebook/glOrtho.html

Your arguments 5 and 6 are incorrect.

I fail to see what’s incorrect about them.

Parameter 5 is the distance to the near clipping plane and parameter 6 is the distance to the far one. So they must be positive.

Next, the difference between them must be as small as possible, otherwise the zbuffer precision will be very poor.

glOrtho(-10.0,10.0,-10.0,10.0,1.0,1024.0); // This is better.
But it depends on the depth of your fragments.

I don’t think this is a requirement for orthographic projection, only for a perspective one.
In the perspective case you have an optical center for the camera, and the near and far planes are planes perpendicular to the viewing direction, defined by their distance from that optical center. In an orthographic projection, however, the optical center is at infinity, so the near and far plane distances are only world/object-space related and can be negative.
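
As a concrete illustration (an example call, not from the original code):

/* A valid orthographic setup with a negative near value.
   Eye space looks down -Z, so this volume runs from eye-space
   z = +1 (behind the eye plane) to z = -1 (in front of it). */
glOrtho(-10.0, 10.0, -10.0, 10.0, -1.0, 1.0);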

N.

“the difference between them must be as small as possible, otherwise the zbuffer precision will be very poor.”

The far distance doesn’t matter so much; you can actually put it at infinity and still get good Z resolution in the near-to-mid field. It’s the near distance that really matters. Also, none of that matters for Ortho; you can use any positive or negative numbers for the depth range without any risk of dividing by zero.

However, the really large number is likely to suffer from quantization problems in Ortho mode, because the Z buffer is then linear, not inversely proportional as in perspective. Thus with a 24-bit depth buffer and a far value of 10000000000.0, each quantum in the depth buffer is almost 600 units in size. If you’re not enabling the Z buffer, that may not matter, but it’s something you should consider.
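
To see where that figure comes from, here is the arithmetic as a quick sketch (the endpoints are the ones from the glOrtho call quoted above):

#include <stdio.h>

int main(void)
{
	/* Linear depth range from the glOrtho call: [-100, 1e10] */
	double zNear = -100.0;
	double zFar  = 10000000000.0;
	double steps = 16777216.0;  /* 2^24 distinct values in a 24-bit depth buffer */

	/* Size of one depth-buffer quantum in world units */
	printf("quantum = %f units\n", (zFar - zNear) / steps);  /* about 596 */
	return 0;
}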

Also, the code doesn’t clear the depth buffer, and I don’t know whether it enables depth testing. Finally, if you want a “fly through” effect for the star field, you need a perspective projection; an ortho projection will not move the stars on the screen when you just change the Z position of the camera (that’s what ortho means, basically).
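
Putting those two points together, here is a minimal sketch of what the fixed display function might look like. NUM_STARS and cameraZ are placeholders for whatever the real program uses, and depth testing would be enabled once at startup with glEnable(GL_DEPTH_TEST):

#include <GL/glut.h>

#define NUM_STARS 1000                     /* placeholder star count */
extern float starX[], starY[], starZ[];    /* the original star arrays */
float cameraZ = 0.0f;                      /* hypothetical camera position */

void display(void)
{
	int i;

	/* Clear the depth buffer as well as the color buffer */
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glLoadIdentity();
	/* Fly through by translating along Z instead of calling glScalef */
	glTranslatef(0.0f, 0.0f, cameraZ);

	glBegin(GL_POINTS);
	for (i = 0; i < NUM_STARS; i++)
		glVertex3f(starX[i]-3, starY[i]-3, -starZ[i]);
	glEnd();

	glutSwapBuffers();
}

The ‘f’ and ‘F’ keys would then adjust cameraZ (and the rotation keys would update stored angles) instead of multiplying the current matrix directly, since glLoadIdentity() resets the modelview matrix every frame.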

No, they don’t have to be positive, as -NiCo- already explained. It’s even explicitly mentioned in the link you posted: “Both near and far can be either positive or negative.”

“Next, the difference between them must be as small as possible, otherwise the zbuffer precision will be very poor.”

Precision is relative. What if your scene units are nanometers?

Xmas:
Yes, “precision” is relative, but the sentence “Next, the difference between them must be as small as possible, otherwise the zbuffer precision will be very poor.” is true in every case.

Xmas:
You can’t use nanometers (1E-9) with floats, so if you’re using such units, you’ll probably scale all your values.

I just gave an example, because a zfar value of 10000000000.0 shows that he misunderstood the meaning of this argument.

And you have no information about how small “as small as possible” in this case really is.

“You can’t use nanometers (1E-9) with floats, so if you’re using such units, you’ll probably scale all your values.”

Single precision floats can represent a range of about [1.2e-38, 3.4e38], so what is the problem with interpreting them as nanometers? That doesn’t mean you need nanometer precision.

“I just gave an example, because a zfar value of 10000000000.0 shows that he misunderstood the meaning of this argument.”

No, but it shows that you didn’t really grasp the concept of relative precision. We have no idea whether 1e10 is a “large” or a “small” distance because we don’t know the scale of the scene.
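
To make “relative precision” concrete, here is a quick sketch printing the spacing between adjacent single-precision floats at a few magnitudes (nextafterf is standard C99):

#include <stdio.h>
#include <math.h>

int main(void)
{
	float values[] = { 1.0f, 1000.0f, 1e10f };
	int i;

	for (i = 0; i < 3; i++) {
		float x = values[i];
		/* Distance from x to the next representable float above it */
		printf("at %g the float spacing is %g\n",
		       x, nextafterf(x, 2.0f * x) - x);
	}
	return 0;
}

The absolute step grows with the magnitude (about 1024 units at 1e10), but the relative step stays around one part in 2^23 at every scale, which is exactly the point.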

Xmas:
If you’re using nanometers, you’ll probably need nanometer precision. Floats can’t be used in that case, because they would generate too many numerical errors.

“And you have no information about how small “as small as possible” in this case really is.”
=> Of course; that’s why I said “as small as possible”, because it depends on his scene.

“We have no idea whether 1e10 is a “large” or a “small” distance because we don’t know the scale of the scene.”
=> … You’re being tiresome, because it’s easy to find a counterexample. Moreover, in my post I added “But it depends on the depth of your fragments.”

It depends not only on the scale of the scene, but also on the depth variation between polygons.

The conclusion is that (as jwatte mentioned) he should just be aware of that, and keep in mind that he shouldn’t use small units (e.g. meters) with floats to store the distances between stars.

The conclusion is that you were wrong in saying that the near and far distance both need to be positive, and you have no proof at all that the [1, 1024] range you suggested “is better”, as you claim, than the range used by Talyn, for his purposes. There’s nothing more to say.

Yes, I was wrong; at the time, I couldn’t figure out what a negative value for the znear argument could mean. But the reference manual explicitly states that it’s valid with glOrtho only. Still, a negative znear is quite strange, because you’ll be able to see objects that are behind the camera. Am I wrong?

“you have no proof at all that the [1, 1024] range you suggested “is better””
=> Of course I don’t, but if the zfar value (1000…00.0) turns out to be fine, I will only have warned him for nothing. If it isn’t, at least he won’t be surprised when he has problems with the zbuffer.