You could simply compute the texture coordinates from the vertex positions and normals and blend the different texture layers into each other depending on the normal. The blending, for example, is described in NVIDIA's Cascades demo secrets. The texture mapping itself is simple math. I'm doing it with a vertex shader; if you want, I can post it.
Well, if you simply want to map your object from six sides with some textures, and you're not an artist, there is no need to take images. It is enough to compute three different sets of texture coordinates (one for each axis) and pass them down to the shader or the fixed-function pipeline.
Let me try to explain what I need to do.
I have to implement a "box" texturing scheme.
For example, in my application I have a model, and I make six renderings of this model (front, back, top, bottom, left, right) with different colors or different lighting…
I quote the paper's description: We have implemented two schemes for representing the surface texture data. The first is a "box" scheme that represents the surface textures using 6 orthographic (non-perspective) "canonical" views of the model (top view, front view, etc.).
Then I would texture my model with these images; the paper says: For each triangle we texture map using the "best" view for that triangle. This is the canonical view with the highest texture resolution out of all views in which the triangle is completely visible (no occlusion). The benefits of this approach are that it is very simple and the 2D texture coordinates can be calculated implicitly from the 3D vertex data.
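That "best view" selection can be sketched in a few lines. This is only a minimal illustration, assuming we already know each view's projected area for the triangle and whether the triangle is fully visible in it; `best_view` is a hypothetical helper name, not something from the paper:

```python
def best_view(projected_areas, fully_visible):
    """Pick the canonical view with the largest projected area
    (i.e. highest texture resolution for this triangle) among the
    views in which the triangle is completely visible."""
    candidates = [i for i, vis in enumerate(fully_visible) if vis]
    if not candidates:
        return None  # triangle occluded in every canonical view
    return max(candidates, key=lambda i: projected_areas[i])
```

For the 6-view box scheme, the two lists would each have six entries, one per canonical view.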
I hope that is clearer!
That's exactly what I'm talking about. They compute the 2D texture coordinates from the 3D vertex data. Depending on the normal, they choose which of the 6 sides to map from. That doesn't mean you have to render anything; it's just explained that way so you can understand it better.
If the normal is facing mostly in the x+ direction (the x component of the normal is the largest), they map it along the yz-plane, for example:
vec2 texcoords = Vertex.yz
If it points mostly in the y+ direction, you would use
vec2 texcoords = Vertex.xz
and so on. You don't need to render anything; you can compute everything with math.
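The same dominant-axis selection, written as a CPU-side sketch (Python rather than GLSL; `box_texcoords` is just an illustrative name):

```python
def box_texcoords(position, normal):
    """Pick the dominant axis of the normal and project the vertex
    onto the perpendicular plane to get 2D texture coordinates."""
    x, y, z = position
    nx, ny, nz = (abs(c) for c in normal)
    if nx >= ny and nx >= nz:   # normal points mostly along +/-x
        return (y, z)           # map along the yz-plane
    elif ny >= nz:              # mostly along +/-y
        return (x, z)           # map along the xz-plane
    else:                       # mostly along +/-z
        return (x, y)           # map along the xy-plane
```

In a shader this is the same comparison on `abs(normal)` followed by a swizzle like `Vertex.yz`.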
But I don’t have any textures!
The paper assumes that we have a 3D model and a set of images of this 3D model. These images are used for texturing.
But now I'm simulating that! I have a 3D model, for example the classic Utah Teapot, and I must take some virtual "photos" of this model with, for example, different colors or lighting.
Considering the amount of info you gave here, I believe Ingrater was doing a fine job of helping you. So maybe you want to be a bit more specific and provide more details about what you're trying to do.
Correct me if I’m wrong, but from what I read you have an untextured teapot, you want to take six snapshots of this model, store those in textures, and after that you want to texture the model with these textures again!?
Exactly! Six snapshots with differences (color, light, …).
Then, I want to texture the model with these textures.
With this, I obtain a textured model with some problems, for example seams…
The goal is to process the textures to avoid those problems.
Sorry, I hope this is clearer :o!
Note that I ignored the shadow mapping; that's left as an exercise. The weights are distributed so that the sum of the three weights equals 1. If a fragment is occluded in one of the three accessed cube sides, you probably want to distribute the weights over the remaining non-occluded sides.
Because, for example, when you are processing a fragment whose normal is facing in the positive x-direction, it is possible that there is another part of the object between the 3D position of that fragment and the cube side you are performing the lookup in. In this case, the fragment is occluded and the texture value from the positive x-side should not be applied.
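One way to implement that weight redistribution, as a hedged sketch in Python (`blend_weights` and the per-axis `occluded` flags are my assumptions, not an API from the thread):

```python
def blend_weights(normal, occluded=(False, False, False)):
    """Tri-planar blend weights from the normal's absolute components,
    normalized so they sum to 1.  Occluded axes get weight 0 and the
    remaining weight is redistributed over the visible axes."""
    w = [abs(c) for c in normal]
    w = [0.0 if occ else wi for wi, occ in zip(w, occluded)]
    total = sum(w)
    if total == 0.0:
        return (0.0, 0.0, 0.0)  # fragment occluded on every side
    return tuple(wi / total for wi in w)
```

Dividing by the sum is what keeps the three weights adding up to 1 even after some sides are zeroed out.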
In another discussion, I was advised this method:
For each triangle of my model, determine which of the six box planes the triangle is facing. Then project all of its vertices onto that box plane. Where the points land on the plane gives me the UV coordinates for that triangle.
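A rough sketch of that per-triangle procedure (hypothetical helper names; assumes the face normal's dominant axis picks the plane, and that dropping that axis gives the UVs):

```python
def facing_plane(v0, v1, v2):
    """Return the index (0=x, 1=y, 2=z) of the axis the triangle
    faces, i.e. the dominant axis of its face normal."""
    # face normal = cross product of two edge vectors
    e1 = [b - a for a, b in zip(v0, v1)]
    e2 = [b - a for a, b in zip(v0, v2)]
    n = [e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    return max(range(3), key=lambda i: abs(n[i]))

def project_uv(vertex, axis):
    """Drop the dominant axis to get the 2D coordinates on the plane."""
    return tuple(c for i, c in enumerate(vertex) if i != axis)
```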
Is this method correct?
I've tried to implement it, but I'm having problems with the texture generation.
Yes, it's basically the same. The problem with that approach is that if the triangle is not visible in the box plane it's facing, it will still get textured (when it shouldn't be). So it's best to use a weighted sum of the textures from the 3 box planes it's facing, distributed according to their visibility of the triangle/fragment.
I've tried a simple example with the method described in my last post.
The model is a cube of unit length, positioned at the center of the scene (0,0,0).
I use an orthographic projection with these parameters (the window size is 512x512):
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
I obtain this screen:
Then I projected each vertex and used the projected vertex as the texture coordinate. For example:
I obtain this:
The square's color is wrong (not red). I have added eight green points to the first image: four at the square's corners and four at the image's corners.
The mapping is wrong…
Where is the problem? In the texture generation or in the texture mapping?
This simple example is only for one face of the cube!
I've solved the previous problem in this manner: I project each vertex, and if a coordinate of the projected vertex is less than zero I set it to zero, and if it is greater than one I set it to one.
This is probably wrong!
I've tried with a new model, a cylinder of length, width, and depth two.
My window is 512x512.
This is the correct rendering of the model, no texture, only light.
This is the texture created using the orthographic projection set to (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0).
And this is the model textured using the previous image.
It’s wrong… :(
Where is the problem? Or… where are the problems?
Then draw a cube of unit length, positioned at the center of the scene (0,0,0).
You’ve already done that to generate the orthographic texture.
Now… your view frustum ranges from -1 to 1 in all three directions because of your ortho call, but texture coordinates range from 0 to 1. So the texture coordinates for a vertex (Vx, Vy, Vz) become (0.5*Vx + 0.5, 0.5*Vy + 0.5) if you are projecting parallel to the Z-axis.
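That remap, written out as a tiny sketch (`ortho_texcoord` is just an illustrative name):

```python
def ortho_texcoord(vx, vy):
    """Remap a vertex from ortho clip space [-1, 1] to
    texture space [0, 1] when projecting along the Z-axis."""
    return (0.5 * vx + 0.5, 0.5 * vy + 0.5)
```

The corners of the [-1, 1] range land exactly on the [0, 1] corners of the texture, so no clamping should be needed.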
OK, now I have a cube of unit length, positioned at the center of the scene.
The projection is set to glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0)
This is the texture:
The textured cube looks the same as the previous image.
The correct rendering, without the texture, is this:
So the textured cube includes the black border of the texture.
Is it a texture generation problem or a texture mapping problem?