How to get rid of garbage?

Usually, especially with 200+ vertex objects, my program slows down a lot. I have a GF2 GTS and a PII overclocked from 350 to 434 MHz. But that's not the point. I have culling enabled, but there are still a lot of triangles I don't see anyway. OK, I know I could use BSPs, but I'd like to know whether there's a reliable way to check this stuff in vertex programs, and whether it's worth it. Why do games like Hitman 2 fly on my PC, while my own "simple" program, with display lists and culling, is slow as a snail (36-4 fps, depending on the objects I'm trying to import)?

BTW, maybe someone knows how to enable wireframe mode in games like Half-Life etc.?
And how many triangles should a "normal" importable object have?
Just curious

If you have a good debugger, launch the game under it (some games don't like being attached to debuggers, though). When you think/know the game has a valid context and window, break in with the debugger and "make" a call to glPolygonMode in the GL DLL. This would probably be your last resort, though, since it can crash the game pretty quickly if you mess up.

Are you sure that you’re getting hardware mode?

Check the GL_RENDERER and GL_VENDOR strings. If it’s the Microsoft generic implementation, you’re running in software.

Just search Google for Half-Life console commands.
Then launch Half-Life with this added to the shortcut:

Then type in the wireframe command (I don't know what it is offhand).

Yep, you must be doing something wrong.
On my GF2 MX + PII 300 I can display 3 Quake 3 levels (averaging 10,000+ verts each) at about 30 fps without any culling, just by putting them all into display lists…
It's probably what jwatte said, but I thought you were someone who was already past this kind of problem.

What? Are you saying you're only getting 30 fps with 30,000 polys on a GF2, mickeymouse? That's only 900,000 tris per second…

[This message has been edited by knackered (edited 12-21-2002).]

The MX is considerably slower than the GTS, especially in fillrate. I can get significantly more than that when I move my viewing direction so that it doesn't suffer from fillrate issues.
Also, it can't be compared to the highest NVIDIA benchmark results, since I'm doing quite a few texture changes in between (as few as possible, however).

So you have a lot of overdraw? It may be worthwhile trying to reduce your overdraw. The fillrate of the GF2 MX isn't that low…

I checked the vendor and other strings. It shows NVIDIA Corp., GeForce2 GTS/AGP, just what I thought. It would be funny if it showed something else. But the problem still persists. Maybe multitexturing or Cg is the problem, although I doubt it. I could send my code if someone has time to look at it. It's not long, but it uses Cg beta 2; I haven't jumped to the final release yet. BTW, when I turn wireframe on, I get an additional FPS drop.

Moved to the Cg final release; nothing changed.

Cg is entirely emulated in software if you're using a GeForce 2.

I.e., if you're using a vertex program in Cg, then you'll get software vertex processing.

[This message has been edited by jwatte (edited 12-27-2002).]

>>The fillrate of the GF2 MX isn't that low<<

It's about 700 million texels a second,
which IIRC isn't much better than a TNT2!
In verts per second, though, there's no comparison between the two.

I doubt it's the vertex processing, because all those demos wouldn't run without NVemu otherwise. And on a TNT2 they won't run at all, because it lacks NV_vertex_program (even with the emulator). BTW, the GeForce2 GTS HAS a T&L engine; prove me wrong, but that's all the vertex processing hardware there is.
The program was slow as a snail before Cg, too.