I’m using base primitives, mostly triangle fans and strips; no quadrics or anything fancy. In fact, I’m not even scaling, so I’ve left GL_NORMALIZE disabled.
The other thing is that I’m getting a strange effect with the light. On a cylindrical model, if I rotate the part about the y-axis (just turning the faces), the lit spot moves until it is out of sight. It looks to me like the light is moving, but it shouldn’t be. Ideas?
(It’s not really as important as fast lighting, though.)
Apparently, the light position is being specified in the object space of the cylinder, so it’s being transformed by the modelview matrix along with the geometry.
Also, remember to update the spot direction: it is transformed by the upper 3x3 of the modelview matrix current at the time of calling glLightfv.
[This message has been edited by paolom (edited 05-25-2000).]
Sure. My bug was using GL_EMISSION for my lights but not using glPushAttrib, hence everything was set to GL_EMISSION. Killed my framerate. Also, using any glu functions, like quadrics, just killed performance. Creating my objects with GL_TRIANGLES has always turned out to be way better performance-wise than using any glu functions, especially when you re-use vertices. Personally, I think glu and glut functions are great when performance is not an issue, but as soon as it is, forget them. Create arrays to define your polygons, and reuse vertices as much as possible. I say this, but of course I’m dying on framerate myself: my object is a ferris wheel, some 2500 triangles plus normals, down to 30 fps, and I haven’t even started texture mapping yet. Good luck.
Dunno if my last msg ‘took’ but playing around with lighting, material properties, etc. has extreme effects on framerate. Experiment! That is what makes it so cool and fun! Emphasis on fun!
Lemme know what you discover.
The performance of lighting calculations decreases with light “complexity”: fastest is ambient, then directional, then point, then spot.
Using multiple lights normally scales the cost accordingly on a card without a geometry processor.
Avoid spotlights if you have no geometry accelerator.
15 fps with 500,000 polygons a frame is 7.5 MVertices/s (all polygons in strips!). No consumer hardware delivers that with lighting enabled. Maybe a GeForce2, but that depends on the lighting conditions and triangle sizes.
Definitely there is no consumer PCI card that would deliver that.
At 70,000 vertices (not talking independent polygons here) and 15 fps you need a throughput of over 1 MVertices/s.
That sounds possible with some fast processor (>= Pentium III 500 MHz) and a board with a good OpenGL driver (means optimized for lighting calculations in your case) and decent fillrate.
Originally posted by Relic:
[b]Definitely there is no consumer PCI card that would deliver that.
At 70,000 vertices (not talking independent polygons here) and 15 fps you need a throughput of over 1 MVertices/s.
That sounds possible with some fast processor (>= Pentium III 500 MHz) and a board with a good OpenGL driver (means optimized for lighting calculations in your case) and decent fillrate.
[/b]
Hmm, thanks for the lowdown. I’m pushing 15 fps on 70k polys, though, and that’s on a PCI card (granted, it’s a 3Dlabs Oxygen VX1). It actually runs better on the PCI card than on my AGP Voodoo3.
I haven’t done any real framerate calculations, but you’re right. Lighting makes a huge performance hit, a good 75% at least.
The question now is: how do they handle that in commercial 3D engines? I know poly counts are much lower, but the state of the art is beginning to push 10k to 20k polys with multiple light sources at very high frame rates (30+ fps, easily). How are they doing that?
PS - I know one trick that game vendors are using… high detail levels use minimal lighting, and well lit levels use low detail (poly count).
Most of the lighting in games is done by lightmapping, where you multipass/multitexture your surface to modulate the color of it… It’s cheap and fast.
What a lot of games do is do their own vertex lighting on the CPU for objects, and lightmap the level.