How to ensure S/W or H/W (MCD/ICD) acceleration?

Hi to all,

This may be a basic doubt; I don’t know anything about drivers. Which acceleration is the best? I read that it is H/W ICD. If so, how can I set it? I ask because the rendering speed of my application is very slow. I have to render around 15 lakh (1.5 million) triangles. How can I increase the speed?

The PIXELFORMATDESCRIPTOR structure is given below:
sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd
1, // version number
PFD_DRAW_TO_WINDOW | // support window
PFD_SUPPORT_OPENGL | // support OpenGL
PFD_DOUBLEBUFFER | // double buffered
24, // 24-bit color depth
0, 0, 0, 0, 0, 0, // color bits ignored
0, // no alpha buffer
0, // shift bit ignored
0, // no accumulation buffer
0, 0, 0, 0, // accum bits ignored
24, // 24-bit z-buffer
0, // no stencil buffer
0, // no auxiliary buffer
PFD_MAIN_PLANE, // main layer
0, // reserved
0, 0, 0 // layer masks ignored

When I check the acceleration it shows ‘Software’. The code is:

int generic_format = pfd.dwFlags & PFD_GENERIC_FORMAT;
int generic_accelerated = pfd.dwFlags & PFD_GENERIC_ACCELERATED;

 if (generic_format && ! generic_accelerated)
   // software
 else if (generic_format && generic_accelerated)
   // hardware - MCD
 else if (! generic_format && ! generic_accelerated)
   // hardware - ICD

The display card is an NVIDIA RIVA TNT2 Model 64/Model 64 Pro. When I check the renderer and vendor using glGetString(), it returns the card name correctly, not ‘Microsoft’.

How can I switch to H/W acceleration, or is there any other way to increase the rendering speed?

Sorry for the bad English, and thanks for any input.


You may need to ask for a 32-bit pixel color format to get a 24-bit z-buffer in an accelerated pixel format.
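For the descriptor above, that would mean changing the color-depth field from 24 to 32. A sketch of the adjusted initializer (same layout as the earlier listing; note the PFD_TYPE_RGBA pixel-type field, which the listing above appears to omit):

```c
PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),  // size of this pfd
    1,                              // version number
    PFD_DRAW_TO_WINDOW |            // support window
    PFD_SUPPORT_OPENGL |            // support OpenGL
    PFD_DOUBLEBUFFER,               // double buffered
    PFD_TYPE_RGBA,                  // RGBA pixel type
    32,                             // 32-bit color depth
    0, 0, 0, 0, 0, 0,               // color bits ignored
    0,                              // no alpha buffer
    0,                              // shift bit ignored
    0,                              // no accumulation buffer
    0, 0, 0, 0,                     // accum bits ignored
    24,                             // 24-bit z-buffer
    0,                              // no stencil buffer
    0,                              // no auxiliary buffer
    PFD_MAIN_PLANE,                 // main layer
    0,                              // reserved
    0, 0, 0                         // layer masks ignored
};
```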

Thanks for your reply.
It shows Hardware ICD now. Is that the best acceleration? But I am not getting much improvement in rendering speed.

Any other techniques for improving the performance?
The OpenGL initialization code is:

GLfloat lit0_ambient[] = {m_fAmbientLightValue, m_fAmbientLightValue, 0.0f, 1.0f};
GLfloat lit0_diffuse[] = {1.0f, 1.0f, 1.0f, 1.0f};
GLfloat lit0_specular[] ={1.0f, 1.0f, 1.0f, 1.0f};
GLfloat lit0_position[] ={1.0f, 1.0f, 1.0f, 0.0f};

// Light0
glLightfv(GL_LIGHT0, GL_AMBIENT,  lit0_ambient);
//glLightfv(GL_LIGHT0, GL_DIFFUSE,  lit0_diffuse);
//glLightfv(GL_LIGHT0, GL_SPECULAR, lit0_specular);  
//glLightfv(GL_LIGHT0, GL_POSITION, lit0_position);
glEnable( GL_LIGHT0);

GLfloat global_ambient[] = {0.2f, 0.2f, 0.2f, 1.0f};

glLightModelfv( GL_LIGHT_MODEL_AMBIENT, global_ambient);


// Cull the back faces of polygons
//glEnable( GL_CULL_FACE );
//glCullFace( GL_BACK );

// specify gray as the clear color
glClearColor( 0.5f, 0.5f, 0.5f, 1.0f );

// specify the back of the buffer as clear depth
glClearDepth( 1.0f );

 // enable depth testing
glEnable( GL_DEPTH_TEST );

I am using an orthographic projection, both for the projection itself and for zooming.



If you disable two-sided lighting you’ll probably see a large increase in performance. However, it’s hard to tell you how to improve performance without knowing what the bottleneck is. Google for the OpenGL performance FAQ (I think it’s on this site somewhere) and you’ll find info on how to determine whether your app is fill-, CPU-, or geometry-bound.
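A sketch of the relevant state changes, assuming a current GL context (two-sided lighting is off by default, so this mainly guards against something elsewhere in the code turning it on):

```c
// Ensure two-sided lighting is disabled (it is the GL default).
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE);

// Re-enable back-face culling, which the init code above has commented out.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
```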

Will disabling FRONT_AND_BACK lighting really make a difference with culling enabled? Wouldn’t the back faces be culled before the lighting calculations are performed?

No, back-face culling usually happens after lighting. Besides, enabling two-sided lighting is very likely to cause a software fallback for geometry processing (on most consumer hardware, at least), which will lead to lower triangle throughput.