Speed of the rendering

I have never been fortunate enough to use an OpenGL-accelerated card, but based on what I’ve read you have to be careful that you are not doing something that drops you into software mode. 4000 tris should not render that slowly. So it might be good to look at your pixel format and whatever is enabled to see if it makes a difference. Someone with a GeForce 2 or Radeon could probably run a generic test with 2000 textured tris and see what their frame rate is like.
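For example, on a Windows/wgl setup you can check whether the pixel format you actually got is a generic (software) one; hDC here is assumed to be the window’s device context:

PIXELFORMATDESCRIPTOR pfd;
int iFormat = GetPixelFormat(hDC);
DescribePixelFormat(hDC, iFormat, sizeof(pfd), &pfd);

/* A generic format without ICD acceleration means the Microsoft
   software renderer; glGetString(GL_RENDERER) returning "GDI Generic"
   indicates the same thing. */
if((pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
  printf("Warning: pixel format is not hardware accelerated!\n");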

I’ve been thinking: would it be possible to do a simple LOD scheme that reduces the polygon count of brushes as they get further away from the camera?
The engine would somehow remove polygons from the brush directly and weld the resulting unconnected verts, or something like that.
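I don’t know of a ready-made way to pull polygons out of arbitrary brushes at runtime, but the common static approach is to precompute a few simplified versions of each brush and pick one by camera distance. A rough sketch; BrushLOD, Mesh and all the field names here are made up:

typedef struct
{
  int   NumLevels;     /* precomputed meshes, Levels[0] = full detail */
  Mesh *Levels;        /* hypothetical mesh type                      */
  float SwitchDist;    /* distance per detail step                    */
} BrushLOD;

Mesh *PickLOD(BrushLOD *b, float distToCamera)
{
  int level = (int)(distToCamera / b->SwitchDist);
  if(level >= b->NumLevels)
    level = b->NumLevels - 1;
  return &b->Levels[level];  /* render this instead of the full brush */
}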

BTW, do mipmaps act as a kind of LOD? I mean, the further away a textured polygon is, the smaller the texture level used, which would help fill rate.
If mipmapping doesn’t do that, could it be implemented easily somehow?
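As far as I know, that is pretty much what mipmapping does on the texture side: the hardware picks a smaller mip level as the polygon shrinks on screen (it goes by the texel-to-pixel ratio rather than raw distance), which saves fill rate and texture cache. A minimal setup, assuming GLU is available and texId/width/height/pixels are your own data:

/* Build the whole mip chain and let the hardware blend between levels. */
glBindTexture(GL_TEXTURE_2D, texId);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);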


Furrage, the slowness might be because of multipass lightmapping WITHOUT vertex arrays.

I’m going to make my engine use EXT_compiled_vertex_array, and sort the brushes by surface type (solid, blended, etc.).
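A sketch of the sorting half, using qsort from <stdlib.h>; the Id field matches the pFace->Id used in the code further down, but the FACE type and the Blended flag are assumptions (any surface-type field will do):

/* Batch state changes: all solid faces first, then blended ones, and
   within each group keep faces with the same texture id together so
   glBindTexture is called as rarely as possible. */
int CompareFaces(const void *a, const void *b)
{
  const FACE *fa = (const FACE *)a;
  const FACE *fb = (const FACE *)b;

  if(fa->Blended != fb->Blended)
    return fa->Blended - fb->Blended;  /* solid (0) before blended (1) */
  return fa->Id - fb->Id;              /* then group by texture        */
}

/* Once, after loading the level: */
qsort(Face, NumFaces, sizeof(FACE), CompareFaces);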

Originally posted by zeckensack:

Deissum: I’m nitpicking here, but aren’t normals transformed by the inverse modelview? That would mean larger scaling factors make the normals smaller. This fits with my experience, i.e. scaling >1 produces darker lighting results.

Yup. You’re right. Normals are multiplied by the inverse matrix, so in my example the magnitude of the normals would be halved.

Originally posted by Michael Steinberg:
Deissum, yes, in the case where OpenGL transforms the normals as well. As I understood the mathematics, rotations don’t rescale the normals, and uniform scales don’t need to. So, in the case where we have a modelview matrix with translation, rotation and uniform scaling, we wouldn’t even have to work on the normals. Hmmmm. But from what you said I think OpenGL does it anyway.

Normals are always multiplied by the transpose of the inverse of the modelview matrix (its upper 3x3, to be precise). In the case of rotations and translations, the magnitude isn’t affected.

If you take rotations as an example, you can see why the normals have to be transformed by the matrix at all. Say you have a simple quad facing the screen, normals pointing towards you. Now you rotate the quad 180 degrees: if the normals weren’t affected by the matrix in this case, they’d still be pointing at you and the lighting wouldn’t have changed.
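(A quick way to see both points at once: GL transforms a normal n by the transpose of the inverse of the modelview’s upper 3x3, n' = (M^-1)^T * n. For a pure rotation R, (R^-1)^T = R, so the normal rotates with the geometry and keeps its length; for a uniform scale M = s*I, (M^-1)^T = (1/s)*I, so scaling the geometry by 2 halves the normals, exactly as described above.)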

Does anyone know how to adjust mipmapping?
I mean, how can I set the distance at which the texture switches to a smaller level?
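As far as I know there is no direct distance setting, because the level is picked per pixel from the texel-to-pixel ratio, but you can shift or clamp the selection. A sketch, assuming the driver exposes EXT_texture_lod_bias (the first call) and OpenGL 1.2 (the clamps):

/* Positive bias switches to smaller mips sooner (blurrier),
   negative bias later (sharper but more aliasing). */
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, 1.0f);

/* Clamp which mip levels may be used at all (per bound texture). */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 0.0f);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 4.0f);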

A recommendation I have found useful is the use of average normals… Good stuff

And when using LOD, I recommend techniques such as ROAM or quadtrees (the latter used by Tribes).

Both of these improve performance dramatically…

ROAM or quadtree implementations may become a real pain, but hey, once they are done you live la vida loca. There are also LGPL implementations of both of these techniques…

For averaged normals? Simply take all the polygons that share a vertex, average their normals, and use that single normal in glNormal().
It carries the same overhead as computing normals the way you already do, plus an extra step of determining which polygons share a common vertex.
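A minimal sketch of that averaging step, assuming an indexed mesh with precomputed per-face normals (the VERTEX/FACE fields here are placeholders, not the exact structures from the code below):

/* Accumulate each face's normal into every vertex it touches, then
   normalise - the result is what goes into glNormal3fv(). */
void AverageNormals(VERTEX *verts, int numVerts, FACE *faces, int numFaces)
{
  int f, i, v;

  for(v = 0; v < numVerts; v++)
    verts[v].Normal.x = verts[v].Normal.y = verts[v].Normal.z = 0.0f;

  for(f = 0; f < numFaces; f++)
    for(i = 0; i < faces[f].NumIndices; i++)
    {
      v = faces[f].Index[i];              /* shared vertex index */
      verts[v].Normal.x += faces[f].Normal.x;
      verts[v].Normal.y += faces[f].Normal.y;
      verts[v].Normal.z += faces[f].Normal.z;
    }

  for(v = 0; v < numVerts; v++)
    Normalize(&verts[v].Normal);          /* hypothetical helper */
}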

Deissum, yes, everything I meant applied to length. Translation and rotation do not affect length. Uniform scaling need not affect length. However, OpenGL applies the general case.
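If the modelview does contain scaling and you’d rather not correct the normals yourself, you can also have GL fix them up after transform, at some per-vertex cost:

/* Use one or the other, depending on the transform: */
glEnable(GL_NORMALIZE);       /* handles any transform; costs a normalisation per vertex */
/* glEnable(GL_RESCALE_NORMAL);  OpenGL 1.2: cheaper, but for uniform scaling only */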

Here is some of my old code:

for(n=0; n<NumFaces; n++)
{
  pFace  = &Face[n];
  pTextr = &Texture[pFace->Id];
  pLmap  = &Lightmap[pFace->Lid];

  /* Pass 1: base texture, no blending. */
  glDisable(GL_ALPHA_TEST);
  glDisable(GL_BLEND);
  glBindTexture(GL_TEXTURE_2D, *pTextr->Texture.iData);

  glBegin(GL_TRIANGLE_FAN);
  for(m=pFace->Start; m<pFace->Start+pFace->Num; m++)
  {
    glTexCoord2f(Verts[m].Tv.x, Verts[m].Tv.y);
    glVertex3f(Verts[m].Pos.x, Verts[m].Pos.y, Verts[m].Pos.z);
  }
  glEnd();

  /* Pass 2: lightmap, blended over the base texture. */
  glEnable(GL_ALPHA_TEST);
  glEnable(GL_BLEND);
  glBindTexture(GL_TEXTURE_2D, *pLmap->Texture.iData);

  glBegin(GL_TRIANGLE_FAN);
  for(m=pFace->Start; m<pFace->Start+pFace->Num; m++)
  {
    glTexCoord2f(Verts[m].Lv.x, Verts[m].Lv.y);
    glVertex3f(Verts[m].Pos.x, Verts[m].Pos.y, Verts[m].Pos.z);
  }
  glEnd();
}

Then the new version using vertex arrays:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

/* The pointers never change, so set and lock them once, outside the loop. */
glVertexPointer(3, GL_FLOAT, 0, Brushes.Vert_Array);
glTexCoordPointer(2, GL_FLOAT, 0, Brushes.TCoord_Array);
glLockArraysEXT(0, NumVerts);

for(n=0; n<NumFaces; n++)
{
  pFace  = &Face[n];
  pTextr = &Texture[pFace->Id];
  pLmap  = &Lightmap[pFace->Lid];   /* set but unused: no lightmap pass yet */

  glBindTexture(GL_TEXTURE_2D, *pTextr->Texture.iData);

  /* glDrawArrays takes (mode, first, count), not (mode, first, end);
     passing pFace->Start + pFace->Num as the count reads past the end
     of the locked arrays - a likely cause of the crash described below. */
  glDrawArrays(GL_TRIANGLE_FAN, pFace->Start, pFace->Num);
}

glUnlockArraysEXT();

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);

It takes a while, and then it crashes.
At start-up I have put all the vertex and texture coordinate information into the arrays, but I haven’t yet sorted them so the texture coordinates match up with the vertices.
Still, why does it crash? I haven’t had much time to get into using vertex arrays, but still…

Well…

I won’t go into OpenGL optimisation because so many people here know this stuff better than I do. However, when optimising you should really, really get your hands on a profiler. A profiler will tell you where most of the time is spent executing your code. It’s really useful, because quite often most of the time is spent not on the graphics card but in your own code. In any case, look for the biggest time spender in the profiler and optimise it as much as you can, often until it is no longer the biggest source of slowdown. Then move on to whatever is now the biggest slowdown, and so on. I can’t stress this enough: don’t blindly modify your software, optimising anything and everything.

You can find a free profiler on AMD’s website. It profiles programs compiled with VC++ 6 or .NET, and it runs only on AMD processors. Free profilers for GNU/Linux that work on gcc-compiled programs probably exist too; gprof, which ships with gcc and uses the -pg compile flag, is one.

I’m not that much of an OpenGL expert but here is something that might be wrong:

I’ve noticed that by using display lists you lose a LOT of speed. I don’t know why, as display lists should in theory be faster. I have a GeForce2 MX 200 64MB, and without display lists I can render about 300,000 filtered, textured, non-backface-culled, non-triangle-stripped polys and get about 5 FPS. I haven’t optimised anything and I still have GL_NORMALIZE enabled, so this is totally unoptimised. Running the same thing from a display list I get 0.5 FPS (with GL_COMPILE, not GL_COMPILE_AND_EXECUTE).
Why, I can’t say.
If someone can…

Are you re-building the display list every frame?
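Rebuilding it every frame (glNewList/glEndList per frame) would easily explain a slowdown like that. For reference, the pattern that should be fast, with DrawLevelGeometry() standing in for the immediate-mode drawing code:

/* At load time: record the commands once. */
GLuint list = glGenLists(1);
glNewList(list, GL_COMPILE);
DrawLevelGeometry();              /* the existing glBegin/glEnd drawing */
glEndList();

/* Per frame: just replay them. */
glCallList(list);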

Can I use ARB_multitexture with vertex arrays? What I mean is, can I give two texture coordinate array pointers?
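For what it’s worth, this should work: with ARB_multitexture each texture unit has its own client-side texcoord array, selected with glClientActiveTextureARB. A sketch assuming two units, with BaseCoords and LightmapCoords as placeholder arrays:

/* Unit 0: base texture coordinates. */
glClientActiveTextureARB(GL_TEXTURE0_ARB);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, BaseCoords);

/* Unit 1: lightmap coordinates. */
glClientActiveTextureARB(GL_TEXTURE1_ARB);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, LightmapCoords);

/* One glDrawArrays call now feeds both units. */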