Box texture mapping

Hi!
Anyone know a “box” texture mapping representation?
If I have a model, how can I implement this method?
For example:

  1. I make six images of the model (front, back, left, right, top, bottom). Must these images be made in orthographic mode?
  2. With these 6 images, how do I implement the box texture mapping?
Thanks!

You could simply compute the texture coordinates from the vertex positions and normals and blend the different texture layers into each other depending on the normal. The blending, for example, is described in NVIDIA's Cascades demo secrets. The texture mapping itself is simple math. I'm doing it with a vertex shader; if you want, I can post it.

One moment :wink:
First, I must make six images (renderings) of my model (front, back, top, bottom, left, right). Those images must be taken in orthographic mode, correct?
Thanks!

Well, if you simply want to map your object from 6 sides with some textures, and you're not an artist, there is no need to take any images. It is enough to compute 3 different sets of texture coordinates (one for each axis) and pass them down to the shader or the fixed-function pipeline.

I’m confused.
I'll try to explain what I must do.
I must implement a “box” texturing scheme.
For example, in my application I have a model, and I make six renderings of this model (front, back, top, bottom, left, right) with different colors or with different lighting…
Here is the paper's description:
We have implemented two schemes for representing the surface texture data. The first is a “box” scheme that represents the surface textures using 6 orthographic (non-perspective) “canonical” views of the model (top view, front view etc).

Then I would texture my model with these images; the paper says:
For each triangle we texture map using the “best” view for that triangle. This is the canonical view with the highest texture resolution out of all views in which the triangle is completely visible (no occlusion). The benefits of this approach are that it is very simple and the 2D texture coordinates can be calculated implicitly from the 3D vertex data.

I hope I've been clearer!
Thanks!

That's exactly what I'm talking about. They compute the 2D texture coordinates from the 3D vertex data. Depending on the normal, they choose which of the 6 sides to map from. That doesn't mean you have to render anything; it's just explained that way so that you can understand it better.

If the normal is facing mostly in the +X direction (the X component of the normal is the largest), they map it along the YZ plane, for example

vec2 texcoords = gl_Vertex.yz;

If it faces mostly in the +Y direction, you would use

vec2 texcoords = gl_Vertex.xz;

and so on. You don't need to render anything; you can compute everything with math.
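
Putting those cases together, a minimal vertex shader sketch of the selection could look roughly like this (only the axis choice; scaling the coordinates into [0,1] is a separate step and depends on your model's extents):

varying vec2 texcoords;

void main()
{
    vec3 n = abs(gl_Normal);
    if (n.x >= n.y && n.x >= n.z)      // normal mostly along +/-X
        texcoords = gl_Vertex.yz;      // map along the YZ plane
    else if (n.y >= n.z)               // normal mostly along +/-Y
        texcoords = gl_Vertex.xz;      // map along the XZ plane
    else                               // normal mostly along +/-Z
        texcoords = gl_Vertex.xy;      // map along the XY plane
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}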

But I don’t have any textures!
The paper assumes that we have a 3D model and a set of images of this 3D model. These images are used for texturing.
But now I'm simulating that! I have a 3D model, for example the classic Utah Teapot, and I must take some virtual “photos” of this model with, for example, different colors or lighting.

Considering the amount of info you gave here, I believe Ingrater was doing a fine job of helping you. So maybe you want to be a bit more specific and provide more details about what you're trying to do.

Correct me if I’m wrong, but from what I read you have an untextured teapot, you want to take six snapshots of this model, store those in textures, and after that you want to texture the model with these textures again!?

Exactly! Six snapshots with differences (color, light, …).
Then, I want to texture the model with these textures.
It seems senseless, I know!
With this, I obtain a textured model with some problems, for example seams…
The goal is to process the textures to avoid those problems.
Sorry, I hope this is clearer :o!

The paper in question is this:
http://www.bmva.ac.uk/bmvc/2002/papers/49/full_49.pdf
And what I described is in the “texture representation” section (1.3).
Thanks!

Well, that doesn’t look too complicated but you need to be familiar with cube maps, shaders and shadow mapping.

First of all, you need to create the images and depth maps for the six views and load them into cube maps.

To simplify things, I’m assuming your model is located in a cube within the ranges [-1,1]x[-1,1]x[-1,1] and the normals have unit length.

In the vertex shader, copy the vertex positions and normals to a varying.


varying vec4 myTexCoord;
varying vec3 myNormal;

void main()
{
    myTexCoord  = gl_Vertex;
    myNormal    = gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
}

Next, in the fragment shader


varying vec4 myTexCoord;
varying vec3 myNormal;
uniform samplerCube colorcube;

void main()
{
    vec4 result;
    result = myNormal.x*myNormal.x*textureCube(colorcube,vec3(sign(myNormal.x),myTexCoord.y,myTexCoord.z));
    result+= myNormal.y*myNormal.y*textureCube(colorcube,vec3(myTexCoord.x,sign(myNormal.y),myTexCoord.z));
    result+= myNormal.z*myNormal.z*textureCube(colorcube,vec3(myTexCoord.x,myTexCoord.y,sign(myNormal.z)));
    gl_FragColor = result;
}

Note that I ignored the shadow mapping; that's left as an exercise :wink: The weights are distributed so that the sum of the three weights equals 1 (for a unit-length normal, nx*nx + ny*ny + nz*nz = 1). If a fragment is occluded in one of the three accessed cube sides, you probably want to distribute the weights over the remaining non-occluded sides.

For the images, must I use an orthographic projection?

Yes, for example I can try the first experiment with a simple cube model.

Why shadow mapping?
I'm learning shaders, but I don't understand the code very well.
What must I do with my six images?
Thank you very much!!

yes

Because, for example, when you are processing a fragment whose normal faces in the positive X direction, it is possible that there is another part of the object between the 3D position of that fragment and the cube side you are performing the lookup in. In this case the fragment is occluded, and the texture value from the positive X side should not be applied.
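
A rough sketch of that test, assuming you also rendered the depth of each canonical view into a second cube map (here called depthcube, a hypothetical name), with the depth remapped to [0,1] the same way as the positions:

uniform samplerCube depthcube;

// Visibility test for the +X side: compare this fragment's distance along
// the +X view axis with the closest depth stored for that view.
float visibleFromPosX(vec4 pos)
{
    float storedDepth = textureCube(depthcube, vec3(1.0, pos.y, pos.z)).r;
    float fragDepth   = 0.5 - 0.5*pos.x;  // +X camera looks down -X: [-1,1] -> [0,1]
    return (fragDepth <= storedDepth + 0.005) ? 1.0 : 0.0;  // small bias avoids self-shadowing
}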

load them into a cube map

Thanks!
In another discussion, I was advised to use this method:
For each triangle of my model, determine which of the six box planes the triangle is facing. Then project all of the triangle's points onto that box plane. Where the points land on the plane gives me the UV coordinates for that triangle.
Is this method correct?
I've tried to implement it, but I have problems with the texture generation.

Yes, it's basically the same. The problem with that approach is that if the triangle is not visible in the box plane it is facing, it will still get textured from that plane (which it shouldn't be). So it's best to use a weighted sum of the textures on the 3 box planes it is facing, distributed according to their visibility of the triangle/fragment.
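
In the fragment shader from before, that redistribution could look something like this (vis is a hypothetical vec3 holding 1.0 or 0.0 per axis, the results of the three visibility tests):

vec3 w = myNormal*myNormal;          // base weights, sum to 1 for a unit normal
w *= vis;                            // zero out the occluded sides
w /= max(w.x + w.y + w.z, 0.0001);   // renormalize over the visible sides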

I've tried a simple example with the method described in my last post.
The model is a cube of unit length, positioned at the center of the scene (0,0,0).
I'm using an orthographic projection with these parameters (the window dimension is 512x512):


glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

I obtain this screen:

Then I projected each vertex and used the projected vertex as its texture coordinate. For example:


float *v;   // projectVertices() is assumed to return a pointer to two floats (u, v)

for(int i = 0; i < numberOfTriangles; i++)
{
	glBegin(GL_TRIANGLES);
	for(int j = 0; j < 3; j++)
	{
		// project the vertex onto the box plane and use the result as UV
		v = projectVertices(mesh.getVertex(i, j));
		glTexCoord2f(v[0], v[1]);
		glVertex3fv(mesh.getVertex(i, j));
	}
	glEnd();
}

I obtain this:

The square color is wrong (not red). I have added eight green points to the first image: four at the square corners and four at the image corners.
The mapping is wrong…
Where is the problem? In the texture generation or in the texture mapping?
This simple example is only for one face of the cube!
Thanks!

I've solved the previous problem in this manner: I project each vertex, and then if a projected coordinate is less than zero I clamp it to zero, and if it is greater than one I clamp it to one.
This is probably wrong!
I've tried with a new model, a cylinder of length, width and depth two.
My window is 512x512.
This is the correct rendering of the model, no texture, only lighting:

This is the texture created using the orthographic projection set to (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0).

Is it correct?
And this is the model textured using the preceding image.

It's wrong… :frowning:
Where is the problem? Or… where are the problems?
Thanks!

Okay, here’s an easy one:

Set your projection to

glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

Then draw a cube of unitary length, positioned at the center of the scene (0,0,0).

You’ve already done that to generate the orthographic texture.

Now… your view frustum ranges from -1 to 1 in all three directions because of your ortho call, but texture coordinates range from 0 to 1. So the texture coordinates for a vertex (Vx, Vy, Vz) become (0.5*Vx+0.5, 0.5*Vy+0.5) if you are projecting parallel to the Z-axis.
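
In shader terms that's just the following (the same arithmetic applies on the CPU side before calling glTexCoord2f):

vec2 texcoords = 0.5*gl_Vertex.xy + 0.5;   // remap [-1,1] to [0,1], projecting along Z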

OK, now I have a cube of unit length, positioned at the centre of the scene.
The projection is set to glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0).
This is the texture:

The textured cube is the same as in the preceding image.
The correct rendering, without texture, is this:

So the textured cube includes the black border of the texture.
Is this a texture generation problem or a texture mapping problem?
Thanks!

What do you mean by “the correct rendering”? If I were to draw a cube with unit side length in an orthographic view frustum with a side length of 2, I would expect it to cover only 1/4th of the screen…