glDeleteBuffers called implicitly immediately after buffer creation (Modern OpenGL).

Hello folks! Recently I stumbled upon this very weird issue where I try to create a buffer, but then it's deleted immediately afterwards. I definitely do not call glDeleteBuffers myself. What could be causing this weird behavior? Thanks in advance!

RELEVANT PART OF APITRACE DUMP

4420 glCreateBuffers(n = 1, buffers = &1)
4421 glDeleteBuffers(n = 1, buffers = &1)

The next two code blocks come from two separate places in the code, but they are executed back-to-back with no code in between. It's also worth noting that I double-checked and the pointer is valid throughout.

if(isInstanced){
	instanceVBO.reset(new GLuint);
	instanceCount.reset(new uint32_t);
	glCreateBuffers(1, instanceVBO.get());
	printf("%i\n", glIsBuffer(instanceVBO.get()[0])); // THE BUFFER IS VALID HERE
}
void Sprite::UpdateInstancedArray(std::vector<glm::vec2> instances){
	glBindVertexArray(0);
	std::vector<float> instancesRaw;

	for(glm::vec2 instance : instances){
		instancesRaw.push_back(instance.x);
		instancesRaw.push_back(instance.y);
	}

	glBindBuffer(GL_ARRAY_BUFFER, instanceVBO.get()[0]);

	printf("%i\n", glIsBuffer(instanceVBO.get()[0])); // BUT NOT HERE??
	glBufferData(GL_ARRAY_BUFFER, sizeof(float)*instancesRaw.size(), instancesRaw.data(), GL_DYNAMIC_DRAW);

Also, if I just remove the glCreateBuffers part entirely, OpenGL still tries to delete something…

4420 glDeleteBuffers(n = 1, buffers = NULL) // incomplete

OpenGL didn’t call glDeleteBuffers; you did. Put a breakpoint on all your glDeleteBuffers calls and see what comes up.

I guarantee you that it was something like this.
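Something along these lines (a hypothetical sketch, not your actual code; the class name and the vector usage are made up purely to illustrate the pattern):

#include <vector>
// assumes a GL 4.5 context and function loader (e.g. glad/GLEW) are already set up

struct Buffer {
	GLuint id = 0;
	Buffer()  { glCreateBuffers(1, &id); }
	~Buffer() { glDeleteBuffers(1, &id); }   // also runs for every temporary and copy
	// no user-declared copy/move: the compiler-generated copy shares the same id,
	// so the first copy to be destroyed deletes the buffer the original still refers to
};

void makeSprites(std::vector<Buffer>& buffers){
	buffers.push_back(Buffer{});   // the temporary is copied into the vector, then destroyed
	                               // -> glDeleteBuffers(1, &id) right after glCreateBuffers
}

Any copy of such a wrapper that gets destroyed (a temporary, a by-value parameter, a vector reallocation) produces exactly the glCreateBuffers-then-glDeleteBuffers pattern in your trace, and the surviving object is left holding a dead name.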

Don’t try to manage OpenGL objects with RAII; it won’t work.

For a start, OpenGL object names are references, not objects, and would need to be managed as references, i.e. with copy and move handled correctly; not doing this is almost certainly the cause of your immediate problem.

But beyond that, OpenGL objects aren’t free-standing objects; they are associated with one or more contexts. You can only delete them when the context (or one of the contexts) to which they belong is bound. Trying to shoe-horn them into an RAII framework usually breaks down on application termination, as you end up calling the destructors after the context has already been destroyed. And it will usually break down much more visibly if you have more than one context.

Nonsense. You just have to pay attention to it. It’s a good learning experience for working with C++ in general.

This is only true if you have a bunch of global objects lying around. If your code is well-structured, this isn’t much of a problem.

Some code owns the C++ objects which own OpenGL objects. You just need that code to release ownership of them before destroying the context.
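E.g. something along these lines (a rough sketch using GLFW for illustration; Renderer and runMainLoop are hypothetical stand-ins for whatever owns your GL-wrapping objects):

#include <GLFW/glfw3.h>

int main(){
	glfwInit();
	GLFWwindow* window = glfwCreateWindow(800, 600, "app", nullptr, nullptr);
	glfwMakeContextCurrent(window);
	// ... load GL functions here ...
	{
		Renderer renderer;            // hypothetical owner of every C++ object that owns a GL object
		runMainLoop(window, renderer);
	}                                 // wrappers destroyed here, while the context is still current
	glfwDestroyWindow(window);        // nothing left to glDelete* by the time the context goes away
	glfwTerminate();
}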

Technically correct, but IMHO this is one of those “if you have to ask (or be told), you probably shouldn’t do it” situations.

Aside from the context issue, it's fine if you manage names like smart pointers, i.e. either refcounted like shared_ptr or non-copyable like unique_ptr (or its predecessor auto_ptr). It's the opposite of fine if you add a destructor which calls glDelete* but otherwise treat the name as a POD type, which is probably what the OP is doing. See also the Rule of Three (which became the Rule of Five with the addition of move semantics).
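For the non-copyable case, the minimum is something like this (a sketch only, assuming a single context that stays current for the wrapper's whole lifetime):

// assumes a GL 4.5 context and function loader are already set up
class BufferName {
	GLuint id_ = 0;
public:
	BufferName()  { glCreateBuffers(1, &id_); }
	~BufferName() { if (id_) glDeleteBuffers(1, &id_); }

	BufferName(const BufferName&) = delete;              // names aren't copyable...
	BufferName& operator=(const BufferName&) = delete;

	BufferName(BufferName&& other) noexcept : id_(other.id_) { other.id_ = 0; }
	BufferName& operator=(BufferName&& other) noexcept {  // ...but ownership can move
		if (this != &other) {
			if (id_) glDeleteBuffers(1, &id_);
			id_ = other.id_;
			other.id_ = 0;
		}
		return *this;
	}

	GLuint get() const { return id_; }
};

With that in place, the only glDeleteBuffers calls left are for names whose ownership was never moved away, rather than for every copy that happens to be destroyed.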

This is the more typical way to do it. I.e. any OpenGL objects are owned by higher-level objects (which are typically non-copyable, although a deep-copy also works).

This is fine if your code is responsible for destroying the context, rather than simply dealing with an after-the-fact notification that the context has already been destroyed.

Also: while managing OpenGL objects with RAII is awkward in C++, it’s essentially impossible in languages with delayed finalisation (where destructors get called at the point the runtime eventually gets around to running the garbage collector).