Need immediate help

I am working on a game engine project, but the major problem I am facing is that OpenGL is too slow. Does anyone know how to improve the speed of my program?
Waiting anxiously.

Too slow where? Did you profile your program?

When I run my program with more than 10 textures, its speed starts dying, but without textures it is okay.
I want to know: how can I improve my program?
And one more thing: do you know how to create a window with a resolution of 320x240 using GLUT?
Is it possible or not?

I bet you’re already sorting your polys by texture. Also, if you use a really large texture for a small poly, the texture memory accesses won’t be cached, I think (well, that’s my incredible knowledge of hardware… )

Maybe you’re running out of texture memory??
All consumer-level cards will then drop to <10 fps. Try making the textures smaller. If the problem does not occur then you need more texture memory (or a professional card).


I should reread my posting before clicking “submit”.

What I was trying to say is that maybe you’re running out of video memory. Then some textures will be placed in system memory, which causes a GREAT slowdown on consumer cards (most of them transfer ALL textures to the video memory each frame in such a situation).


Originally posted by asim:
When I run my program with more than 10 textures, its speed starts dying, but without textures it is okay.
I want to know: how can I improve my program?
And one more thing: do you know how to create a window with a resolution of 320x240 using GLUT?
Is it possible or not?

Well, since you asked about GLUT… I personally never use it, so I wouldn’t know for sure, but doesn’t GLUT only do software rendering (not hardware)? If so, then things like bilinear/trilinear filtering will slow it down. Not sure if mipmapping is a win or a loss with software rendering.

If it is running in hardware, then there are some things you can do to increase performance. Make textures smaller and/or use mipmapping. Use bilinear rather than trilinear filtering (or nearest would be even faster, but that starts to look rather ugly). Sort your polygons by texture. In other words: draw everything that uses texture 1 first, then everything that uses texture 2, etc. If you aren’t sorting by texture, and you have more textures than can fit in video memory, you may be thrashing all of the textures several times per frame. Even if your textures all fit in memory, sorting by texture can still help.

Yes, you can use GLUT to create a 320x240 window. Heck, you can make a window of any size. Look at the docs for GLUT. GLUT does use hardware acceleration (if available), since GLUT itself is basically just a windowing wrapper (well, a bit more).
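A minimal GLUT program for that window size might look like the following (a sketch using the standard GLUT calls; it needs a display and the GLUT library to actually run):

```c
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(320, 240);      /* the requested resolution */
    glutCreateWindow("320x240 window");
    glutDisplayFunc(display);
    glutMainLoop();                    /* never returns */
    return 0;
}
```

Note that `glutInitWindowSize` is only a hint to the window system; the window you get should still be 320x240 on any normal desktop.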

Saying “too slow” doesn’t tell us much on its own; we need much more info, and you should use a profiler to see WHERE it is slow. If it is texture memory, get a 64MB card.

Where can one find such a profiler? And how does one implement it? Thanks!

It’s always important to ask the trivial:
Are you using texture objects (glGenTextures/glBindTexture)?
If you’re loading 10 textures each frame with glTexImage2D, you will kill performance (unless the textures are 32x32 or less).
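To illustrate the texture-object point, here is a sketch of the create-once / bind-per-frame pattern (a fragment only, not a full program; it assumes an existing GL context and RGBA pixel data):

```c
#include <GL/glut.h>

static GLuint tex;  /* texture object name, created once at init */

/* At init: create the object and upload the pixels ONE time. */
static void load_texture(const unsigned char *pixels, int w, int h)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

/* Every frame: just bind -- no glTexImage2D, no re-upload. */
static void draw(void)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* ... draw the polygons that use this texture ... */
}
```

The point is that `glTexImage2D` moves all the pixel data, while `glBindTexture` just selects an object the driver already has, so the upload cost is paid once instead of every frame.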

Yep, I heard somewhere that Visual C++ 6 comes with such a profiler. But I don’t know which program it is or how I would have to use it.

Yes, VC++ 6.0 has a profiler. You can enable it from the project settings, and then select profile from the compile menu. It will run your program, and when you exit, it will tell you which functions used up the most CPU time.


I think only the Pro (and higher) versions of VC++ have the profiler. To check: if you can enable optimizations, then you have the Pro/dev version, and you can turn on profiling in the link settings.

There are also some third-party ones: I think CodeWarrior has some, and Metrowerks (I think that is the name), and a few free/shareware ones.

To tell you the truth, I’ve had quite a few problems with the VC++ profiler flaking out on me. Sometimes it works perfectly, but other times I get absolutely no timing info after I exit the app (and yes, I do have profiling enabled).

Personally, I prefer Intel VTune. It’s really nice in that it times everything and gives you really nice graphs. You can see how much time was spent in your exe and how much in each DLL that was loaded. Then you can look inside any of these and see where the time was spent internally. For your app, or any DLLs that you have debug info for, you can see where the time was spent function by function, class by class, or even instruction by instruction.

VTune also gives you really nice call graphs, so you can see which functions called which functions, and how many times each was called.

It also has a slew of options that you can adjust. You can time by clock ticks, or by cache reads/writes/misses, and a whole bunch of stuff I don’t even understand. I think the program has more options than most people could ever hope to understand, but if you can figure it all out, you can really optimize the hell out of your code.

By now it probably sounds like I work for Intel or something.

Where can one find this VTune? And how easy is it to set up? Thanks for the info!

VTune stuff at Intel’s site here:

I was just playing with the evaluation version. Very nice. Now to convince my lab to buy a copy…

I prefer TrueTime by NuMega (the same guys who wrote BoundsChecker and SoftICE). In my opinion it’s better than VTune: the UI is more intuitive and the program has more useful features. I don’t know how much VTune costs, but I believe TrueTime is more expensive (but worth it).


Do you know of any profilers available for Linux?