hello everyone!
I’ve got a problem with render-to-vertex-array functionality, and I hope someone can shed some light on it. :rolleyes:
I’m trying to use render-to-vertex-array in a platform-independent way (for Windows and Linux). The idea is to pass the geometry to the graphics card (an NVIDIA GeForce 6600GT) as a texture, render it onto a quad with a GLSL fragment program, read it back from the framebuffer into graphics memory, and then either alter it with a vertex program or render it directly as a vertex array through the standard OpenGL fixed-function pipeline.
I’ve searched quite extensively and finally found that the EXT_pixel_buffer_object extension should let me do exactly what I want… but (there’s always a but) I can’t get anything except a black screen. With an OpenGL debugger I can only confirm that my geometry texture (GL_RGB) is properly loaded on the card, so the problem must lie either in the read-back from the framebuffer into the buffer object (glReadPixels) or in the actual rendering of the vertex array.
Here is a fragment of my code illustrating the problem:
// FIRST PASS: Render texture to a quad
//---------------------------------------
glUseProgramObjectARB( myGLSLProgram );
GLuint myBufferIDs[1];
glGenBuffersARB( 1, myBufferIDs );
glBindBufferARB( GL_PIXEL_PACK_BUFFER_EXT, myBufferIDs[0] );
// Size is in bytes: 3 floats (x, y, z) per vertex.
glBufferDataARB( GL_PIXEL_PACK_BUFFER_EXT, 3*numOfVertices*sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW );
glBindBufferARB( GL_PIXEL_PACK_BUFFER_EXT, 0 );
glDrawBuffer( GL_FRONT );
renderTextureToQuad();
// SECOND PASS: Read back from framebuffer and render vertex array
//----------------------------------------------------------------
glUseProgramObjectARB( 0 );
glBindBufferARB( GL_PIXEL_PACK_BUFFER_EXT, myBufferIDs[0] );
glReadBuffer( GL_FRONT );
glReadPixels( 0, 0, textureSize, textureSize, GL_RGB, GL_FLOAT, (char *)NULL );
glBindBufferARB( GL_PIXEL_PACK_BUFFER_EXT, 0 );
glBindBufferARB( GL_ARRAY_BUFFER_ARB, myBufferIDs[0] );
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 3, GL_FLOAT, 0, (char *)NULL );
// firstVertIndx and thisStripNumOfVerts are updated per strip (bookkeeping omitted here).
for( int stripNr = 0; stripNr < totalNumOfStrips; stripNr++ ) {
    glDrawArrays( GL_TRIANGLE_STRIP, firstVertIndx, thisStripNumOfVerts );
}
So, what I’m doing is pretty much a reproduction of the example from the extension specification, and I think it should work, but it doesn’t. I’ve tried it both on Windows and on Linux with no success.
Does anyone have any experience with this extension and render-to-vertex-array? Or any other suggestion on how to do such a thing without PBOs? Supposing that I’m doing something wrong and the extension can actually be used for this purpose, are there any restrictions on the use of vertex arrays (indexed vertices, interleaved attributes…)?
Thanks in advance for any reply!