Placing text in a 3D world

I am working on a turn-based strategy game and I've had this problem since almost the very beginning. I can't find it discussed anywhere, hence this post:

I've already asked a fellow programmer about this; let me quote the message I sent to him.

He said he thinks orthographic projection is the answer here. Unfortunately glOrtho(), which I've been using only to display items that should lie “flat on the screen” (menus, etc.), destroys the coordinate system that was set up by gluPerspective().
For example: I have a star that is placed in the 3D world using gluPerspective(). I want to display the name of that star right below it. I set the GL coordinates of the text with a function that looks like this:
text->setTranslationPosition(starGLcoords.x, starGLcoords.y - 0.02f, starGLcoords.z);
When this is drawn with the same perspective as the star it works OK, apart from the text-scaling problem when the camera zooms out. But it will obviously go wrong when I switch to glOrtho(). So, is there any OpenGL mechanism I can use to position this text correctly, or do I have to calculate the proper coordinates manually?

BTW: the camera is fixed to always look straight ahead, so there's no need to “billboard” the text to make sure it's always facing the camera.

Well, he's right: if you want to get rid of the zooming problem you have to use glOrtho (or call glLoadIdentity in each matrix mode).
But to get the position right you have to compute it yourself using the modelview and projection matrices; it shouldn't be that hard.

Transform the 3D text position to 2D using your current modelview and projection matrices, then, as zoverlord says, use glOrtho to set an orthographic projection and draw the text over the 3D rendering as if it were merely 2D.

I have a method that looks like this:


Coord Video::translateGLToWindowCoordinates(GLdouble x, GLdouble y, GLdouble z) const
{
	GLdouble modelview[16];
	GLdouble projection[16];
	GLint viewport[4];

	glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
	glGetDoublev(GL_PROJECTION_MATRIX, projection);
	glGetIntegerv(GL_VIEWPORT, viewport);

	Coord outputCoord = { 0.0, 0.0, 0.0 };

	gluProject(x, y, z, modelview, projection, viewport, &outputCoord.x, &outputCoord.y, &outputCoord.z);
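	// gluProject returns window coordinates with the origin at the bottom-left;
	// the next line flips Y so the result uses a top-left origin instead.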
	outputCoord.y = viewport[3] - outputCoord.y;

	return outputCoord;
}

I suppose this is what you are talking about.

Also, while we're at it: what is the proper way to place items inside the 3D world? I mean:
Right now my game uses two sets of coordinates for each object: a “logical” set, which is used inside the game logic (to calculate distances, the turns needed to complete a move between stars, etc.), and graphical coordinates whose sole purpose is to perform the OpenGL translations. I place the items by specifying their logical coordinates, and the graphical coords are calculated by dividing the logical positions by 500.0…
This gives fairly good distances between the objects and is not affected by resolution changes.
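For illustration, that conversion is roughly the following (a minimal sketch only; Coord is the same GLdouble x/y/z struct used above, and the function name is made up):

Coord logicalToGraphical(const Coord& logical)
{
	// Purely illustrative: scale the game-logic coordinates down to GL world units.
	const GLdouble scale = 1.0 / 500.0;

	Coord graphical = { logical.x * scale, logical.y * scale, logical.z * scale };
	return graphical;
}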

A long time ago I was using the reverse version of the method presented above, but I stopped when I noticed that item placement (visibly) depends on aspect ratio and resolution.

Veni vidi vici!
For future generations:

I've modified the method posted above; to be exact, a single line was removed:


Coord Video::translateGLToWindowCoordinates(GLdouble x, GLdouble y, GLdouble z)
{
	GLdouble modelview[16];
	GLdouble projection[16];
	GLint viewport[4];

	glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
	glGetDoublev(GL_PROJECTION_MATRIX, projection);
	glGetIntegerv(GL_VIEWPORT, viewport);

	Coord outputCoord = { 0.0, 0.0, 0.0 };

	gluProject(x, y, z, modelview, projection, viewport, &outputCoord.x, &outputCoord.y, &outputCoord.z);
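	// Note: no Y flip here - the 2D overlay projection (gluOrtho2D matched to the viewport)
	// uses the same bottom-left origin as gluProject's output.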

	return outputCoord;
}

Each time the 3D scene is drawn I transform the coordinates of each screen object. It has to be done during the drawing phase because the properly set up perspective projection is required. So, for each stored object this method is called:


void Blender::transformCoordinates(ScreenObject* object)
{
	object->transPos = Video::translateGLToWindowCoordinates(
		object->screenCoordinates.x, object->screenCoordinates.y, object->screenCoordinates.z);
}

Legend:
object->screenCoordinates is a set of GLdoubles (X, Y, Z) that stores the 3D position of the text.
object->transPos is of the same type as screenCoordinates; these are the coordinates that are passed directly to glTranslatef().

It works like a charm, provided that you set up gluOrtho2D() with parameters equal to the viewport size (read: the screen resolution).
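For completeness, a minimal sketch of what that overlay pass can look like (drawTextAt2D() is a placeholder for whatever actually renders the text, and viewportWidth/viewportHeight are assumed to hold the current resolution):

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, viewportWidth, 0.0, viewportHeight);	// match the viewport, as described above

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

// transPos was filled in by transformCoordinates() during the 3D pass
glTranslatef((GLfloat)object->transPos.x, (GLfloat)object->transPos.y, 0.0f);
drawTextAt2D(object);	// placeholder text-rendering call

glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);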

Obviously it can be optimized by calculating the coordinates only when the camera or an object changes position - the gluPerspective() projection still has to be in effect at that point - but it works quite OK in its current form.

I hope I’m making sense, heh :slight_smile:

My other question from the post above remains, though.

If you're using any type of perspective projection then you only need one coordinate system, the so-called world space.
It can be at any scale (as sizes and positions are really only relative to each other, not to the screen); personally I prefer SI units, i.e. 1.0f = 1 meter, but if we are talking about stars then maybe light-years or AU are better.
The only thing you have to keep in mind is that everything related to scale and position within and around the world space needs to be set with this scale in mind, such as z-near and z-far, or else things will look bad.
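For example, if 1.0 were chosen to mean 1 AU, the projection setup might look something like this (a sketch only; the values are assumptions that need tuning to the actual scene):

// Illustrative values: with 1.0 = 1 AU, keep znear/zfar at the same scale.
const GLdouble fovY   = 45.0;
const GLdouble aspect = (GLdouble)viewportWidth / (GLdouble)viewportHeight;
const GLdouble zNear  = 0.1;	// 0.1 AU; don't make this tiny, it wastes depth precision
const GLdouble zFar   = 200.0;	// far enough to cover the whole star map

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovY, aspect, zNear, zFar);
glMatrixMode(GL_MODELVIEW);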

I think I know what you're getting at. However, I'm still wondering how other games do it.

Let's take Doom for example, or rather Doom ports like GZDoom or Skulltag, which both use the same OpenGL renderer, by the way. When you open the map editor it is clearly stated that all coordinates (even the angles for things) are integers. Moreover, one integer unit is one pixel of the texture.

For example, if you have a texture that's 128x128 and you make a wall that's 128 units long and high, the texture will fit the wall perfectly. I'm not talking about how the texture is applied, because in this case it's obvious how to set up the texture coords properly in OpenGL, but I wonder how they translated the 128 integer units to OpenGL coordinates AND (important stuff here :stuck_out_tongue: ) kept the perspective exactly the same as the one created by the software renderer, which both ports still support.

The integer units are exactly like floating point but without the .0 at the end; it's pretty easy to convert, and in this case you just set the zfar/znear to match.
The Doom ports use that coordinate system because the old Doom did. I think Carmack once said that the coordinate system is in inches, but that's just an afterthought, as the original rendering method is based on ray testing and clever texture scaling, so everything works better if the ray-test results are kept as integers.
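In other words, the map's integer units can be fed straight into OpenGL as world units. A minimal sketch of the idea (the coordinates and axis layout are made up for illustration):

// A wall from (x1, y1) to (x2, y2) on the map, 128 units high, with the
// integer map units used directly as GL world units and a 128x128 texture
// stretched exactly once across it (1 texel per unit).
glBegin(GL_QUADS);
	glTexCoord2f(0.0f, 0.0f); glVertex3f((GLfloat)x1, 0.0f,   (GLfloat)y1);
	glTexCoord2f(1.0f, 0.0f); glVertex3f((GLfloat)x2, 0.0f,   (GLfloat)y2);
	glTexCoord2f(1.0f, 1.0f); glVertex3f((GLfloat)x2, 128.0f, (GLfloat)y2);
	glTexCoord2f(0.0f, 1.0f); glVertex3f((GLfloat)x1, 128.0f, (GLfloat)y1);
glEnd();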

But all that is plainly just the developer's choice.