Problem while using NVIDIA card

My program loads a small model (.ms3d) in a GL window. I am using VC++ 6 to write my code. With the on-board graphics card I get a good frame rate, but after installing the Quadro FX 1300 card on my machine, along with the proper driver, I get a lower frame rate. I am not able to find the problem. Is there anything I have to add to my code to utilize the graphics card, or is it something else? Please help.

Perhaps there’s something unsupported about the framebuffer format, or other features you’re using are forcing a fallback to software rendering.

After you make the context current, what does the vendor string you query say?

Have you tried enumerating pixel formats?

Dear dorbie,
Thanks for your reply. I am sending the pixel-format structure I used in my program. Please go through it.

32, //Color Depth
0, 0, 0, 0, 0, 0, // Color Bits Ignored
0, // No Alpha Buffer
0, // Shift Bit Ignored
0, // No Accumulation Buffer
0, 0, 0, 0, // Accumulation Bits Ignored
16, // 16Bit Z-Buffer (Depth Buffer)
0, // No Stencil Buffer
0, // No Auxiliary Buffer
PFD_MAIN_PLANE, // Main Drawing Layer
0, // Reserved
0, 0, 0 // Layer Masks Ignored
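For reference, the lines you posted are only the tail of the structure; a complete PIXELFORMATDESCRIPTOR initializer typically starts with the size, version, flags, and pixel-type fields as well. A sketch (the flags shown are the usual choices for a double-buffered GL window, not necessarily what your code uses):

```c
#include <windows.h>

static PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),  // Size of this structure
    1,                              // Version number
    PFD_DRAW_TO_WINDOW |            // Format must support a window
    PFD_SUPPORT_OPENGL |            // Format must support OpenGL
    PFD_DOUBLEBUFFER,               // Double buffering
    PFD_TYPE_RGBA,                  // RGBA pixel type
    32,                             // Color depth
    0, 0, 0, 0, 0, 0,               // Color bits ignored
    0,                              // No alpha buffer
    0,                              // Shift bit ignored
    0,                              // No accumulation buffer
    0, 0, 0, 0,                     // Accumulation bits ignored
    16,                             // 16-bit z-buffer (depth buffer)
    0,                              // No stencil buffer
    0,                              // No auxiliary buffer
    PFD_MAIN_PLANE,                 // Main drawing layer
    0,                              // Reserved
    0, 0, 0                         // Layer masks ignored
};
```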

Unfortunately I didn’t understand the point you were trying to make with the line “After you make the context current, what does the vendor string you query say?”, so please explain it a little more.

I am sending you the code showing how I attach the device context to the rendering context.

(declared global)

In the window creating section
hDC=GetDC(hWnd); //hWnd - handle to the newly
//created window
GLuint PixelFormat=ChoosePixelFormat(hDC,&pfd);
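Your snippet stops at ChoosePixelFormat. The usual remaining steps (a sketch, assuming hWnd, hDC, and a pfd like the one you posted; presumably your code does something similar) are:

```c
// Sketch of the remaining context-creation steps on Windows.
int pixelFormat = ChoosePixelFormat(hDC, &pfd);
if (pixelFormat == 0 || !SetPixelFormat(hDC, pixelFormat, &pfd))
    return FALSE;                   // no usable pixel format

HGLRC hRC = wglCreateContext(hDC);  // create the rendering context
wglMakeCurrent(hDC, hRC);           // bind it; GL calls are valid from here on
```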

Waiting for your valuable comments.
With warm regards

What’s your onboard graphics card? Which one is more powerful?

Also, try to answer what Dorbie asked you; otherwise we’ll only be able to make assumptions.

What does glGetString(GL_VENDOR) return? (That is what he asked.)
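In other words, once wglMakeCurrent has succeeded, you can print the strings that identify which implementation is actually rendering:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Call only after wglMakeCurrent() has succeeded. */
printf("Vendor:   %s\n", glGetString(GL_VENDOR));
printf("Renderer: %s\n", glGetString(GL_RENDERER));
printf("Version:  %s\n", glGetString(GL_VERSION));
```

If the vendor/renderer say “Microsoft” / “GDI Generic”, you are on the software renderer rather than the NVIDIA driver.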

And if I’m not wrong, 16-bit z-buffers are out of date.

It might be the 16-bit z-buffer combined with 32-bit RGBA color that is forcing you into a software fallback.

Ask for a 1-bit z-buffer; it’s the secret sauce for requesting the largest supported z-buffer.

The better way is to enumerate all pixel formats, query their parameters, and pick the one you like from the list.
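A minimal sketch of that enumeration, using DescribePixelFormat (hDC is assumed to be your window’s device context):

```c
#include <stdio.h>
#include <windows.h>

/* Enumerate every pixel format the driver exposes for this DC. */
PIXELFORMATDESCRIPTOR pfd;
int i;
int count = DescribePixelFormat(hDC, 1, sizeof(pfd), NULL); /* total format count */
for (i = 1; i <= count; ++i) {
    DescribePixelFormat(hDC, i, sizeof(pfd), &pfd);
    /* Formats without PFD_GENERIC_FORMAT come from the hardware ICD. */
    printf("format %2d: color %2d, depth %2d, stencil %d, %s\n",
           i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits,
           (pfd.dwFlags & PFD_GENERIC_FORMAT) ? "generic/software" : "hardware");
}
```

Picking a format this way lets you avoid the generic (software) formats entirely instead of trusting ChoosePixelFormat’s matching heuristics.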

Get the vendor string and see what it says for the visual you are getting.

I wrote a very simple program for Windows to query OpenGL info.
It writes useful info and the details about all supported modes into a file, glinfo.txt.

Please refer to “setPixelFormat()” and “findPixelFormat()” in my source code.

findPixelFormat() looks through all supported modes and accumulates a score for each against the expected pixel format, in order to find the best mode for your application.
(You may easily extend these two functions, for instance by adding stencil-buffer bits and depth-buffer bits.)

BTW, any GL call, including the glGet*() functions, will fail if the OpenGL rendering context has NOT been created yet.

On Windows, the context is initialized by wglCreateContext() and wglMakeCurrent(), so OpenGL functions must be called after this.

Dear friends,
I have solved the problem from the NVIDIA settings: I switched off the vertical-sync property in the advanced settings and now get an FPS almost 6 times that of my onboard graphics. Is there anything I can do from my code to solve this problem?

with regards

Yes, there is an extension (WGL_EXT_swap_control) that is used to switch SwapBuffers’ vsync behaviour on Windows.
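A minimal sketch of using it (the function pointer must be fetched at runtime, after wglMakeCurrent; you should also check that "WGL_EXT_swap_control" appears in the extension string first):

```c
#include <windows.h>
#include <GL/gl.h>

/* WGL_EXT_swap_control: control vsync on SwapBuffers. */
typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int interval);

PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(0);  /* 0 = swap immediately, 1 = wait for vertical retrace */
```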

He he, little did I realize that the frame rate went from refresh speed to ludicrous speed.

vsync is desirable IMHO, and you should leave users the option of running your software locked to the refresh rate through their desktop settings.