Too slow

Sorry for posting this as a new topic even though it’s not new, but I need some attention!
The rest of the description of my problem is in the title: “SwapBuffers()…”.

After finding that my code did not perform as advertised for my graphics card, I tried an experiment: I didn’t draw anything at all, just rotated my scene and issued SwapBuffers(). It turned out to take almost the same time as when there were several million polygons.

So, what a waste of effort implementing display lists and vertex arrays with strips.

Something’s wrong.

-KB

What does glGetString() say for GL_VENDOR, GL_VERSION, GL_EXTENSIONS?
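A quick way to check is to dump the driver strings right after making the context current. GL_RENDERER is worth printing too: if it says “GDI Generic”, you’re on Microsoft’s software renderer, which would explain the slowness. A minimal sketch (assumes a current OpenGL context):

```c
/* Sketch: print the driver identification strings.
 * Must be called with an OpenGL context current, or glGetString returns NULL. */
#include <stdio.h>
#include <GL/gl.h>

void print_gl_info(void)
{
    printf("GL_VENDOR:     %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER:   %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:    %s\n", (const char *)glGetString(GL_VERSION));
    printf("GL_EXTENSIONS: %s\n", (const char *)glGetString(GL_EXTENSIONS));
}
```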

What’s the PIXELFORMAT you selected?
Use DescribePixelFormat() to fill a pfd structure and look at all fields in the debugger,
especially the dwFlags. Is this the pixelformat you wanted?
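Something along these lines will show what you actually got; the PFD_GENERIC_FORMAT flag in particular means you fell back to the unaccelerated generic implementation. A sketch, assuming `hdc` is your window’s device context:

```c
/* Sketch: inspect the pixel format actually set on the window's DC. */
#include <windows.h>
#include <stdio.h>

void dump_pixel_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int fmt = GetPixelFormat(hdc);               /* the format currently set */
    DescribePixelFormat(hdc, fmt, sizeof(pfd), &pfd);

    printf("dwFlags    = 0x%08lx\n", (unsigned long)pfd.dwFlags);
    printf("color bits = %d\n", pfd.cColorBits);
    printf("depth bits = %d\n", pfd.cDepthBits);

    /* Generic format without acceleration => software rendering path. */
    if (pfd.dwFlags & PFD_GENERIC_FORMAT)
        printf("WARNING: generic (software) pixel format selected!\n");
}
```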

I got the pixel format that I wanted.
PFD_DRAW_TO_WINDOW, PFD_SUPPORT_OPENGL, PFD_DOUBLEBUFFER, PFD_TYPE_RGBA, 32-bit color, 16-bit depth buffer.

Also got “NVIDIA Corporation”, “1.5.3”, “…GL_ARB…” from glGetString().

What else to check?

So far, no matter what’s in the picture, each frame seems to take a fixed amount of time, which is not acceptable.

-KB

Hey, not only do you use two different accounts, you also post a new topic related to another one without even providing the link.

my code did not perform as advertised in graphics card
ah…

it seems to take certain amount of time which is not acceptable
This is so vague and imprecise that I wonder what your actual question is.

Did you check vsync? Hint: wglSwapIntervalEXT(0), but you will have to query for this extension.
http://www.opengl.org/resources/faq/technical/extensions.htm

Or you may disable vsync directly from display control panel.
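For reference, a minimal sketch of disabling vsync via WGL_EXT_swap_control (assumes a current OpenGL context; in real code you should also confirm the extension appears in the WGL extensions string before using it):

```c
/* Sketch: disable vsync through the WGL_EXT_swap_control extension.
 * wglGetProcAddress only works with a context current. */
#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void disable_vsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);   /* 0 = no vsync, 1 = sync to monitor refresh */
}
```

With vsync on, SwapBuffers() blocks until the next display refresh, so an empty scene and a million-polygon scene can both appear to take one refresh interval per frame, which matches the symptom described above.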

Disabling vsync worked.
The running time has been cut in half.
Thanks.

Unrelated, but you shouldn’t use a 16-bit depth buffer if you’re interested in depth precision. Request 24 bits.