Major slowdowns with fairly simple models

I have been trying to set up a system to import .3ds files and draw them in OpenGL. At the moment it can successfully import the vertices and faces and draw them, but when I load the application with models in it the FPS drops to about 3. When I load it without models it’s around 500 (based on the number of passes through the draw function per second, not the actual FPS on the monitor).

I have enabled depth testing and backface culling, but these only brought the fps up from 3 to 5. In wireframe mode it gets to about 8. When I rotate the view so I am not looking at the model, it goes right back up to 120ish, so the problem would appear to lie with OpenGL’s rendering.

Any suggestions on fixing this would be greatly appreciated. I have been searching on Google for 2 days now, but there does not seem to be anything relating to this problem anywhere :S Perhaps I am just searching for the wrong things.

This is the first time I have tried programming in OpenGL, so even if you think it’s obvious, do mention it as I might not know about it :)

I am using MS Visual Studio 2003, and coding in VC++. I am using Windows XP, and my GFX card is an ATI Radeon 9800.

I know OpenGL is more than capable of doing what I want, as there are games such as Spring which load a LOT of high(ish) poly models… typical that nothing ever works right when you try it yourself >.<

Even if the model is simple and not textured, you can still get only a few FPS with software rendering. Try calling glGetString(GL_RENDERER) or glGetString(GL_VENDOR). If I’m right, you will see that you are using Microsoft’s software renderer.
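Something like this (just a sketch, assuming you have a current OpenGL context and somewhere to see console output) will show who is actually doing the rendering:

// Sketch: call once after wglMakeCurrent has succeeded. If the strings
// mention "Microsoft" or "GDI Generic", you are on the software renderer.
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

void PrintRendererInfo()
{
	printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
	printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
	printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
}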
To fix this you must do two things:

  1. Have the proper driver installed - I assume you’ve done that.
  2. Select a pixel format that doesn’t have PFD_GENERIC_FORMAT in its flags.

You can do #2 by using DescribePixelFormat to find a pixel format yourself instead of using ChoosePixelFormat. Or perhaps you’re using GLUT, SDL or another library? In that case see what that library’s documentation says about initializing the OpenGL context - perhaps you need to specify some other set of flags to get a hardware-accelerated pixel format.
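Before rewriting anything, a quick check like this (just a sketch - hDC is assumed to be whatever device context you created the window with, and you would call it after SetPixelFormat) tells you whether the format your window actually got is the generic one:

// Sketch: ask Windows which pixel format the window ended up with and
// whether it is the generic (software) implementation.
#include <windows.h>
#include <stdio.h>

void CheckPixelFormat(HDC hDC)
{
	PIXELFORMATDESCRIPTOR pfd;
	int format = GetPixelFormat(hDC);                    // index the window is currently using
	DescribePixelFormat(hDC, format, sizeof(pfd), &pfd); // fill pfd with that format's properties
	if ((pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
		printf("Pixel format %d is the generic (software) implementation\n", format);
	else
		printf("Pixel format %d should be hardware accelerated\n", format);
}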

I currently have this pixel format descriptor in my code (taken from a tutorial on NeHe.gamedev.net). What should I change in it to increase the FPS?

I will check which renderer it shows when I get home; I am at university at the moment and don’t have the code with me.

static	PIXELFORMATDESCRIPTOR pfd=			// tells Windows how we want things to be
	{
		sizeof(PIXELFORMATDESCRIPTOR),		// size of this pixel format descriptor
		1,									// version number
		PFD_DRAW_TO_WINDOW |				// format must support a window
		PFD_SUPPORT_OPENGL |				// format must support OpenGL
		PFD_DOUBLEBUFFER,					// must support double buffering
		PFD_TYPE_RGBA,						// request an RGBA format
		bits,								// colour depth ("bits" is set elsewhere in the NeHe code)
		0, 0, 0, 0, 0, 0,					// colour bits ignored
		0,									// no alpha buffer
		0,									// shift bit ignored
		0,									// no accumulation buffer
		0, 0, 0, 0,							// accumulation bits ignored
		16,									// 16-bit z-buffer (depth buffer)
		0,									// no stencil buffer
		0,									// no auxiliary buffer
		PFD_MAIN_PLANE,						// main drawing layer
		0,									// reserved
		0, 0, 0								// layer masks ignored
	};

Well, you could try adding PFD_GENERIC_ACCELERATED to the flags, but I’m not sure ChoosePixelFormat really cares about that flag.

The only sure way is what I mentioned - instead of using ChoosePixelFormat, use DescribePixelFormat to list all the formats.
Make a loop that prints (to the console, to a file, or somewhere else) all the pixel format information you can get from DescribePixelFormat.
Then add code that ignores formats that do not meet your minimum requirements (there is a sketch of such a loop after this list):
-must have these flags set:
PFD_DRAW_TO_WINDOW
PFD_SUPPORT_OPENGL
PFD_DOUBLEBUFFER
-must not have the PFD_GENERIC_FORMAT flag set
-the pixel type must be PFD_TYPE_RGBA, not PFD_TYPE_COLORINDEX (these are iPixelType values, not flags)
-must have at least 16 depth bits
-must have at least 16 bpp
-must have at least an 8-bit stencil buffer (if you want stencil)

If you call DescribePixelFormat in a loop, you will probably find many formats that meet these requirements - you may simply choose the first in the list, or go one step further and prefer formats with more bpp, depth or stencil bits, or other better properties.
You may, for example, find two otherwise identical formats, one with the PFD_SWAP_COPY flag set and the other with PFD_SWAP_EXCHANGE - the second one would be better.
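The loop could look roughly like this - just a sketch, untested against your setup, and hDC is assumed to be the device context of your window:

// Sketch of the DescribePixelFormat loop described above.
#include <windows.h>

int FindAcceleratedPixelFormat(HDC hDC)
{
	PIXELFORMATDESCRIPTOR pfd;
	// with a NULL descriptor pointer DescribePixelFormat just returns the number of formats
	int count = DescribePixelFormat(hDC, 1, sizeof(pfd), NULL);
	for (int i = 1; i <= count; i++)
	{
		if (!DescribePixelFormat(hDC, i, sizeof(pfd), &pfd))
			continue;
		DWORD required = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
		if ((pfd.dwFlags & required) != required)
			continue;                       // missing a required flag
		if (pfd.dwFlags & PFD_GENERIC_FORMAT)
			continue;                       // generic = Microsoft's software renderer
		if (pfd.iPixelType != PFD_TYPE_RGBA)
			continue;                       // reject colour-index formats
		if (pfd.cColorBits < 16 || pfd.cDepthBits < 16)
			continue;                       // too little colour/depth precision
		return i;                           // first match is fine to start with;
		                                    // you could keep scoring formats here instead
	}
	return 0;                               // nothing suitable found
}

Whatever index this returns is what you would then pass to SetPixelFormat in place of the one ChoosePixelFormat picked.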

You can also look at this topic for more hints:
http://www.opengl.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic&f=2&t=020825#000002