here’s the thing: I got my DrawCube function to work, but with four vertices per face I have to stretch one texture across each wall, which is uncool. I have a 128 * 128 brick sample texture that I want repeated over the whole wall, so I made a function that cuts each face into hundreds of faces and binds the corresponding part of the texture. It looks good, but it adds some thousands of vertices to my scene, so it doesn’t look very efficient…
Also, if I want to throw some lighting into my scene and put a projector light on part of a wall: if the wall is only one face, the whole face will be lit up, right? So again, is partitioning every face the only option?
thx for your answers
For your texture problem: no, you don’t have to tessellate your face to repeat the texture. Set the texture wrapping mode to GL_REPEAT, and use texture coordinates larger than 1.0 at one end of the face (e.g. lower left corner at (0.0f, 0.0f), upper right corner at (10.0f, 10.0f)); the texture will then repeat ten times across the face.
For your lighting problem: yes, tessellation is the only way to get decent results with standard OpenGL lighting.
sounds better with the real name: tessellation (i’m kind of a newbie, sorry) thx =)
ok, i’m going to try this for the texture problem.
for the lighting one: how come there is no other way? does that mean that in recent games (like Splinter Cell), the real-time shadows and lighting (not the little tricks used here and there, but the player’s shadow for example) are mapped onto tessellated faces?
sorry for the late answer, i was gone for a few days =)
Standard OpenGL lighting is done per-vertex. Each light’s contribution is calculated at each vertex (for which you supply a normal), and the resulting color is then interpolated across the surface. If a lighting feature (e.g. a specular highlight) falls inside the surface, it will be missed, because there is no vertex at that location to catch it. There is a good explanation of it here: http://www.opengl.org/developers/code/features/KilgardTechniques/oglpitfall/oglpitfall.html (section 2)
Newer games use sophisticated custom lighting models that essentially compute lighting at the fragment level.
Also note that shadows are not linked to what OpenGL calls lighting. OpenGL (like other polygon-based rendering libraries such as D3D) has a local lighting model: lighting is computed at each vertex without taking the rest of the world into account. There is no test to see whether an occluder sits between the light and the vertex at which the lighting computation is performed. So generating shadows is essentially a matter of tricks that have little to do with lighting; it’s more like computing image-space masks to darken some areas of the final image.
Raytracers, on the other hand, have a global lighting model, which makes it possible to generate “true” shadows. But for the moment, raytracing is generally too slow to be used in realtime graphics.
thanks for the explanation, it was quite clear and i got the point
let’s hope raytracing is going to make some progress