(ATI) glDrawRangeElements with VBOs: Memory leak?

I have found that with my application, using VBOs and glDrawRangeElements to render objects causes free (CPU) memory to constantly decrease (the more objects, the faster).

If I comment out the call to glDrawRangeElements but leave all the other code in place, no memory gets consumed.

Any idea what may be causing this and how I can get rid of it?

Hardware: Radeon 4870 1GB, WinXP SP3, 4 GB RAM, Catalyst 9.4

Edit:

Status unchanged with Cat. 9.6

I cannot detect anything unusual in my code.

My code looks roughly like this:


// Vertex buffer: create, bind, allocate driver-side, then map it for writing
glGenBuffersARB (1, &m_vboDataHandle);
glBindBufferARB (GL_ARRAY_BUFFER_ARB, m_vboDataHandle);
glBufferDataARB (GL_ARRAY_BUFFER_ARB, m_nFaceVerts * sizeof (CRenderVertex), NULL, GL_STATIC_DRAW_ARB);
m_vertBuf.SetBuffer (reinterpret_cast<CRenderVertex*> (glMapBufferARB (GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB)), 1, m_nFaceVerts);

// Index buffer: same pattern with the element array target
glGenBuffersARB (1, &m_vboIndexHandle);
glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, m_vboIndexHandle);
glBufferDataARB (GL_ELEMENT_ARRAY_BUFFER_ARB, m_nFaceVerts * sizeof (short), NULL, GL_STATIC_DRAW_ARB);
m_index.SetBuffer (reinterpret_cast<short*> (glMapBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB)), 1, m_nFaceVerts);

After that, the buffers are initialized and once that’s done, they are unmapped.
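
The fill step itself is nothing special; stripped of my wrapper classes (assuming they just expose the mapped pointer via operator[]), it is essentially the following, where srcVerts and srcIndex stand in for wherever the model data actually lives:

// Hypothetical fill loop through the mapped pointers obtained above;
// srcVerts and srcIndex are placeholders for the application-side model data.
for (int i = 0; i < m_nFaceVerts; i++) {
    m_vertBuf [i] = srcVerts [i];   // write vertex through the mapped pointer
    m_index [i] = srcIndex [i];     // write index through the mapped pointer
}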


// Unmap each buffer while it is still bound, then unbind it
glUnmapBufferARB (GL_ARRAY_BUFFER_ARB);
glBindBufferARB (GL_ARRAY_BUFFER_ARB, 0);
glUnmapBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB);
glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, 0);

Before rendering, the VBO buffers are of course bound again. Here’s the call to glDrawRangeElements:


glDrawRangeElements (GL_TRIANGLES, 0, pm->m_nFaceVerts - 1, nVerts, GL_UNSIGNED_SHORT, 0);

pm->m_nFaceVerts - 1 is the maximum vertex index value of the entire model.
nVerts is the number of vertices (indices) to be rendered.
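
For completeness, the surrounding per-object setup is the usual fixed-function VBO pattern; roughly like this (the glVertexPointer stride and offset are assumptions here, since they depend on the actual CRenderVertex layout):

// Per-object draw sketch; the glVertexPointer stride/offset depend on the
// actual CRenderVertex layout and are assumptions in this snippet.
glBindBufferARB (GL_ARRAY_BUFFER_ARB, pm->m_vboDataHandle);
glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, pm->m_vboIndexHandle);
glEnableClientState (GL_VERTEX_ARRAY);
glVertexPointer (3, GL_FLOAT, sizeof (CRenderVertex), (GLvoid*) 0);
glDrawRangeElements (GL_TRIANGLES, 0, pm->m_nFaceVerts - 1, nVerts, GL_UNSIGNED_SHORT, 0);
glDisableClientState (GL_VERTEX_ARRAY);
glBindBufferARB (GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, 0);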

I am getting no errors returned from OpenGL (checking them all over the place). Same problem when using glDrawElements instead.
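
For reference, those checks are just the usual glGetError drain after each call; the helper below is merely illustrative:

#include <cstdio>

// Illustrative error check; it loops because the driver can queue
// several error flags before they are read.
static void CheckGLError (const char* pszWhere)
{
    GLenum nError;
    while ((nError = glGetError ()) != GL_NO_ERROR)
        fprintf (stderr, "OpenGL error 0x%04X at %s\n", nError, pszWhere);
}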

Well, maybe nobody knows how to help me or is interested in this subject, but I found that when I copy a CPU memory buffer directly to GPU memory with glBufferDataARB, instead of allocating the memory driver-side, mapping it, initializing it and unmapping it again, no continuous memory consumption occurs. I find this somewhat weird …
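
Concretely, the leak-free path boils down to this (vertData and indexData stand in for my application-side arrays):

// Workaround sketch: fill plain application memory first, then hand the
// pointers straight to glBufferDataARB instead of mapping driver memory.
// vertData and indexData are placeholders for the application-side arrays.
glBindBufferARB (GL_ARRAY_BUFFER_ARB, m_vboDataHandle);
glBufferDataARB (GL_ARRAY_BUFFER_ARB, m_nFaceVerts * sizeof (CRenderVertex), vertData, GL_STATIC_DRAW_ARB);
glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, m_vboIndexHandle);
glBufferDataARB (GL_ELEMENT_ARRAY_BUFFER_ARB, m_nFaceVerts * sizeof (short), indexData, GL_STATIC_DRAW_ARB);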

Are you saying you use glBufferDataARB to update the buffer each frame and the memory consumption goes up?
That can be considered a bug, since the driver should create a new buffer every time glBufferDataARB is called and delete the old buffer later on.

Purely out of interest, what happens if you call glBufferDataARB with NULL before you send the new data?
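
I.e. the classic “orphaning” pattern; something like this, where nBytes and newData are just placeholders for the real size and source pointer:

// Orphaning sketch: re-specify with NULL so the driver can detach old
// storage that may still be in flight, then upload the replacement data.
glBindBufferARB (GL_ARRAY_BUFFER_ARB, m_vboDataHandle);
glBufferDataARB (GL_ARRAY_BUFFER_ARB, nBytes, NULL, GL_STATIC_DRAW_ARB);   // orphan the old storage
glBufferSubDataARB (GL_ARRAY_BUFFER_ARB, 0, nBytes, newData);              // then send the new data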

From what you said, I understood that you only call glDrawRangeElements at each draw call. But others seem to imply that you are somehow updating data with glBufferData before rendering… I need some clarification. :)

If you use glDrawElements instead of glDrawRangeElements, do you notice any change?

glDrawBufferARB is only called once per data set (object) to send the data to the driver. Subsequently, only glDrawRangeElements is called for each object, with the object’s corresponding VBO and index buffer handles bound.

I do not call glDrawBufferARB before each render call.

Initially I was allocating the buffer driver-side with NULL as the data address argument, mapping it into CPU memory, initializing it, then unmapping it. That caused memory to be consumed during each glDrawRangeElements call.

What I am doing now is allocating a buffer in CPU (application) memory, initializing it, then sending the data to the driver by calling glDrawBufferARB with the buffer pointer as the data address argument. That works; no memory loss.

I already wrote that using glDrawElements instead of glDrawRangeElements didn’t fix the memory leak problem.

By “glDrawBufferARB” you mean “glBufferData”? It does not make any sense otherwise.

So you are saying that you get memory leaks when initializing the buffer content with the mapping method, but not when doing it with glBufferData and a non-NULL pointer. Am I right?

It looks like a driver bug… but I am surprised that it has not been noticed earlier; it is really serious IMO. It would be interesting to see whether this happens on NVIDIA hardware.

My understanding of calling glBufferData with NULL was that it allows a new buffer to be allocated while the previous one, which may or may not still be in use, gets cleaned up by the driver later, so you don’t stall on the CPU side when you send new data.

That’s how I visualized it. From your description, karx11erx, it seems that something is going awry when you do the mapping to memory.

At first glance it seems like a bug of some sort, but like dletozeun I am surprised it has not been spotted before.

If you don’t do the memory mapping stuff what happens?

Yeah, glBufferDataARB, not glDrawBufferARB.

Well, as far as I can tell, my initial implementation was by the book (see the condensed sketch after this list):

  • get a VBO handle
  • bind it
  • allocate a VBO driver-side
  • map it
  • initialize it
  • unmap it
  • use it via its handle
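
Condensed into code for a single buffer (nBytes and pAppData are placeholders for the real size and source data):

GLuint hVBO;
glGenBuffersARB (1, &hVBO);                                               // get a VBO handle
glBindBufferARB (GL_ARRAY_BUFFER_ARB, hVBO);                              // bind it
glBufferDataARB (GL_ARRAY_BUFFER_ARB, nBytes, NULL, GL_STATIC_DRAW_ARB);  // allocate it driver-side
void* pBuf = glMapBufferARB (GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);     // map it
if (pBuf)
    memcpy (pBuf, pAppData, nBytes);                                      // initialize it
glUnmapBufferARB (GL_ARRAY_BUFFER_ARB);                                   // unmap it
glBindBufferARB (GL_ARRAY_BUFFER_ARB, 0);
// later: bind hVBO again and render from it via its handle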

Have you tried anything like gDEBugger before?

It might be worth running your app through that and seeing what is happening with buffers and stuff…
You can basically step through your own code (line by line) and watch the OpenGL state, as well as monitor all the various buffers and so on…

AFAIK gDEBugger has a 30-day free trial.

I found a workaround, so there’s no need for that. I mainly wanted to check whether I had made an obvious mistake when creating, initializing and using VBOs.

scratt, as karx11erx already said (though it was not clear due to the glDrawBuffer/glBufferData confusion :) ), when he doesn’t use the map/unmap path but only glBufferData to allocate and fill the buffer, there are no memory leaks.

I agree with scratt’s suggestion. Also, try to reproduce the bug in the simplest possible GLUT (or equivalent) program to see whether it is caused by a driver bug, and in that case report it.
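
A bare-bones skeleton along these lines might do as a starting point (an untested sketch: it assumes GLEW for the ARB entry points, and the vertex data is just dummy zeros, so nothing visible is drawn):

// Minimal leak repro sketch: create one VBO + index buffer via the
// map/unmap path, then draw every frame and watch the process's memory.
#include <GL/glew.h>
#include <GL/glut.h>
#include <cstring>

const int NUM_VERTS = 3000;   // arbitrary; a multiple of 3 for GL_TRIANGLES
GLuint hData = 0, hIndex = 0;

static void InitBuffers (void)
{
    glGenBuffersARB (1, &hData);
    glBindBufferARB (GL_ARRAY_BUFFER_ARB, hData);
    glBufferDataARB (GL_ARRAY_BUFFER_ARB, NUM_VERTS * 3 * sizeof (float), NULL, GL_STATIC_DRAW_ARB);
    float* pVerts = (float*) glMapBufferARB (GL_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
    if (pVerts)
        memset (pVerts, 0, NUM_VERTS * 3 * sizeof (float));   // dummy positions
    glUnmapBufferARB (GL_ARRAY_BUFFER_ARB);

    glGenBuffersARB (1, &hIndex);
    glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, hIndex);
    glBufferDataARB (GL_ELEMENT_ARRAY_BUFFER_ARB, NUM_VERTS * sizeof (short), NULL, GL_STATIC_DRAW_ARB);
    short* pIndex = (short*) glMapBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
    if (pIndex)
        for (int i = 0; i < NUM_VERTS; i++)
            pIndex [i] = (short) i;
    glUnmapBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB);
}

static void Display (void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBindBufferARB (GL_ARRAY_BUFFER_ARB, hData);
    glBindBufferARB (GL_ELEMENT_ARRAY_BUFFER_ARB, hIndex);
    glEnableClientState (GL_VERTEX_ARRAY);
    glVertexPointer (3, GL_FLOAT, 0, (GLvoid*) 0);
    glDrawRangeElements (GL_TRIANGLES, 0, NUM_VERTS - 1, NUM_VERTS, GL_UNSIGNED_SHORT, 0);
    glDisableClientState (GL_VERTEX_ARRAY);
    glutSwapBuffers ();
    glutPostRedisplay ();   // keep drawing so the leak can accumulate
}

int main (int argc, char** argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow ("VBO leak repro");
    glewInit ();              // load the ARB buffer entry points
    InitBuffers ();
    glutDisplayFunc (Display);
    glutMainLoop ();
    return 0;
}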