I’m programming an application which loads a VRML file containing a scene, so I have a list of vertices and a list of faces. Each vertex can be used by many faces, and VRML can provide a different normal and texture coordinate every time it’s used.
The problem is that I would like to use glDrawElements(), which is faster than thousands of calls to glVertex, glNormal, and glTexCoord. But with that function I must specify arrays that can provide only one normal and one texture coordinate per vertex.
One solution could be repeating vertices, but if each one is used by 3-4 faces on average, I would use 3-4 times as much memory… That doesn’t seem like an elegant way of coding… Could you help me? Thanks
That is what you will have to do. Indexed arrays use a single index per vertex, so you will have to repeat the position if two unique vertices share a location.
I just found the answer recently when I read the GL extensions of NV. There is one extension you would like: EXT_vertex_array. You will be satisfied with it.
We are talking about “EXT_vertex_array”. While it was an extension to OpenGL 1.0, it became part of the core in GL 1.1. I don’t really know why nVidia decided to keep the old extension in their implementation, since it should map to the same functions as the OpenGL 1.1 entry points.
Perhaps because old games coded against 1.0 that use the extension can still run, even on a GF3?
Are there any such games? I’m pretty sure that Windows has always shipped with OpenGL 1.1, and there aren’t that many OpenGL games anyway. I’m sure that id were using the most up-to-date spec.
But maybe some other apps, ported from other platforms - CAD, animation, that sort of thing.
EXT_vertex_array is actually slightly different than the OpenGL 1.1 vertex array functions.
I believe we didn’t even have to add it – it’s in the SI, I think.
It would aid people who ported old OpenGL 1.0 apps, I suppose.
Kids, stop fighting and try to help some.
There is no way in OpenGL to use vertex arrays while avoiding the wasted memory. (Yet! Vertex programs/shaders could help a bit.)
But remember, glDrawElements is a quite complex and expensive call. (I think it even includes some of the glBegin() & glEnd() cost. Some implementations (NV?) also buffer the data in driver memory, because the data in RAM could change.)
You could try to put the VRML vertex data into a small buffer to avoid a lot of glVertex calls, then render this buffer with OpenGL. This is probably very slow, since the big speedup from glDrawElements comes from the reuse of repeated vertices.
Another way is to set the vertex, normal, and texcoord pointers, and then render. It could be fast, because everything gets rendered in one call, but I guess not.
Using 3 glVertex*v(), 1 glNormal*v(), and 3 glTexCoord*v() calls per triangle should be the best way. Remember to use the vector calls, to make better use of the CPU registers.
How stupid I am! I knew I was wrong at once.
Well… no way to avoid wasting the memory…