What are the relevant components of an OpenGL context?

This may be a bit of an overlap with another question I asked earlier (and which is still in progress) but perhaps this is a better way to ask it:

When you call glDrawXXX(), several different pieces of data are referenced implicitly, even though they’re not parameters to the draw command. These include:

- The vertex data (xyz/uv/normal) in a VBO, via glBindBuffer
- The index list into the vertex array (56, 58, 59, 56) if one has been specified. These are “passed down” to the context in a VBO by glBindBuffer and (possibly) glVertexAttribPointer. At least, I think so.
- The shader program, if any. This is “passed down”, not in a VBO, but just by number, to the context by glUseProgram()
- Textures (grass1.png, fur11.png). These are passed down in a VBO via glTextureBuffer() and glTexBuffer(), which are different but not clearly differentiated.
- Probably other stuff as well?
Several of the function definitions come close to confirming this.
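To make this concrete, here is roughly the call sequence I have in mind — the names vbo, ibo, prog, and tex are made up for illustration, and none of them appear as parameters to the draw call itself:

```c
glBindBuffer(GL_ARRAY_BUFFER, vbo);            /* vertex data (xyz/uv/normal)  */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);    /* index list                   */
glUseProgram(prog);                            /* shader program, by number    */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);             /* texture                      */

/* the draw call references all of the above implicitly: */
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)0);
```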

Under glUseProgram it says:

“glUseProgram installs the program object specified by program as part of current rendering state”

Under glTexBuffer() it says:

“glTexBuffer attaches the data store of a specified buffer object to a specified texture object”

But neither of these actually defines “install” or “attach”, so it’s still unclear. Likewise “current rendering state” may also mean “OpenGL context”, but that’s not clear either.

I see that several of these calls take a “target” parameter, but the meaning isn’t immediately obvious. glBindTexture takes a “target”, one of which can be GL_TEXTURE_2D, but I know you can have more than one 2D texture used to draw an object, so that has to be a “type” and not a “purpose”.

I expect that things are set up the way they are so the maximum amount of reuse can take place (set up a texture, which gets inhaled into the graphics card, then use it over and over for various glDraw calls) although just as obviously, the integer buffer identifiers, integer texture identifiers, and integer program identifiers could be explicit parameters, and the graphics card could do an internal lookup. I’m thinking there’s something in here I’m just not getting.

I’m just trying to chase down all the “implicit” stuff so I can get my arms around the general concept.

Can someone tell me if I’m pointed in the right direction?

Thanks in advance for any insight.

I’m on mobile so I’ll just very quickly deal with the GL_TEXTURE_2D part.

This is another of those artefacts of older versions of OpenGL that no longer makes the kind of sense it used to.

In this case, the very first version of OpenGL, 1.0, didn’t actually have support for texture objects at all. There was a single texture and that was all you could use. If you wanted more you could work around it by doing evil things with display lists, but otherwise you could only load and use one texture.

However that texture could be configured to operate in a number of different “modes”; specifically 1D or 2D. Other modes came with later versions. So if you load a 1D texture you specify GL_TEXTURE_1D everywhere and your object will be textured by your 1D texture. Likewise with 2D.

OpenGL likes to evolve in a minimally invasive manner that preserves the functionality of older versions, so that a legal GL 1.0 program remains legal under later versions. So when texture objects were added in version 1.1 (they were available as an extension somewhat earlier), the existing API was retained and the concept of “binding” was invented. Now you bind a texture object and all future texture API calls affect that object. Or bind texture object 0 to revert to the OpenGL 1.0 behaviour.
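A sketch of that bind-to-modify pattern (w, h, and pixels are assumed to exist; the point is that after glBindTexture, every GL_TEXTURE_2D call targets the bound object):

```c
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);   /* all GL_TEXTURE_2D calls now affect 'tex' */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindTexture(GL_TEXTURE_2D, 0);     /* back to the GL 1.0-style default texture */
```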

GL_TEXTURE_2D (and friends) are therefore primarily legacy cruft; they exist for GL 1.0 compatibility, but there are also places, even in the modern API, where they do have their use - specifying cubemaps and array textures is one example.


you can use OpenGL for many different things; the most common is to draw things. to draw “things”, you need a “program object” that does all the calculations on the GPU. when done, it will write the results (color values for each pixel on screen) into a “framebuffer object”.

there is a “fixed-function” program already by default; you don’t need to create your own, but it is highly recommended if you want to control how things get drawn

there is a “default framebuffer” already provided with the GL context (window); you don’t need to create your own “FBO”, but there are cases when you want to provide more “layers” to draw (arbitrary) results into and / or you want to modify the format of the resulting data. but everything you “see” on screen is what the default framebuffer contains, so you finally have to copy the results of your FBO into the “default framebuffer” if you want to see them.
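That final copy step can be done with a framebuffer blit, sketched here (fbo, width, and height are assumed to exist, and fbo is assumed to be a complete framebuffer object you rendered into):

```c
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* 0 = the default framebuffer */
glBlitFramebuffer(0, 0, width, height,       /* source rectangle            */
                  0, 0, width, height,       /* destination rectangle       */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```

(Another common approach is to draw a full-screen quad sampling the FBO’s color texture.)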

read this page

in short:

there are 2 major ways to draw things:

  1. direct drawing
    –> YOU tell the program object what part of buffer data to use
  2. indirect drawing
    –> a buffer object contains that info already (the GL_DRAW_INDIRECT_BUFFER)

each of these two ways can pull “geometry data” in 2 different ways:
a) array data drawing
–> GL only uses vertex data to generate primitives (lines, triangles, quads, etc.)
b) indexed data drawing
–> GL uses vertex data AND index data to generate primitives (lines, triangles, quads, etc.)

the latter has the advantage that it will most likely consume less memory, because it can reuse some vertex data
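As a sketch, here is what the two draw calls look like for two triangles sharing an edge (the vertex/index buffers are assumed to be bound already; the index array is shown inline for clarity, although it lives in the bound GL_ELEMENT_ARRAY_BUFFER):

```c
/* a) array drawing: 6 vertices in the buffer, two of them duplicated */
glDrawArrays(GL_TRIANGLES, 0, 6);

/* b) indexed drawing: only 4 unique vertices; the indices reuse two of them */
GLuint indices[] = { 0, 1, 2,   2, 1, 3 };
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (void*)0);
```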

vertex array object VAO:
describes how the index / vertex data is laid out in the buffer objects, besides other things (like how often per-vertex / per-instance data will get pulled from buffers; see “instanced rendering”)
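A sketch of a VAO recording that layout, assuming an interleaved buffer of 5 floats per vertex (3 position + 2 uv) in a hypothetical vbo/ibo pair:

```c
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* attribute 0: position, 3 floats starting at offset 0 */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
/* attribute 1: uv, 2 floats starting after the 3 position floats */
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                      (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);

/* the element buffer binding is also recorded in the VAO */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
```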

texture object:
texture objects are just data containers in a special memory layout / format. for some reason I can’t tell / don’t know, GL has also put “sampler” state (it’s like the VAO regarding vertex data: it tells HOW to read data from the texture, how to interpolate, etc.) into texture objects. so when you set some “texture parameters”, some just modify the sampling state within the texture, but not the data
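A sketch of the two places that sampling state can live (tex is assumed to be an existing texture object); since GL 3.3 there are also standalone sampler objects, which override the texture’s own sampling state while bound to that texture unit:

```c
/* sampling state stored inside the texture object itself: */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* or a separate sampler object (GL 3.3+), bound to a texture unit: */
GLuint smp;
glGenSamplers(1, &smp);
glSamplerParameteri(smp, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindSampler(0, smp);   /* overrides tex's own filtering on unit 0 */
```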

finally, OpenGL runs asynchronously on the GPU, so it is likely that when a gl* function returns, execution hasn’t even begun; the instructions have instead been “accumulated” internally until the graphics card driver decides that “there is enough work to do, so let’s start rendering!”.

data “uploads” / “downloads” (client-server model: you [the programmer] are the “client”, GL is the “server”; it “provides” you with “things” you’ve requested from it) are often expensive, and “downloads” can stall the graphics pipeline, which will result in poor performance.


by the way: there has recently been added a new kind of functions called “direct state access” (DSA)

glTextureBuffer() and glTexBuffer() do the same thing:

glTexBuffer() wants to know through which texture target (must be GL_TEXTURE_BUFFER) you want to attach the buffer’s data; that implies that you first have to bind the texture object to that target!
glTextureBuffer() (= the DSA version) wants to know directly which texture object you want to attach the buffer’s data to
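Side by side, the two variants look like this (tex and buf are assumed to be an existing texture object and buffer object; GL_R32F is just one example internal format):

```c
/* classic, bind-to-edit: the texture must be bound to GL_TEXTURE_BUFFER first */
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, buf);

/* DSA (GL 4.5): name the texture object directly, no binding needed */
glTextureBuffer(tex, GL_R32F, buf);
```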