In the 3.3 core spec, I found this information about glGetError in section 2.5:
The command “enum GetError(void);” is used to obtain error information… To allow for distributed implementations, there may be several flag-code pairs.
What is a distributed implementation with regard to OpenGL? Is it basically an OpenGL implementation that ships with an operating system, or is there more to it? Not sure if there’s any special meaning behind the term. I tried googling “what is a distributed implementation”, but I only got results about distributed systems, which a definition lookup describes as “a number of independent computers linked by a network”. Is a distributed OpenGL implementation just an OpenGL implementation for a distributed system, then? Just curious about the terminology.
OpenGL is specified to use a client/server architecture, with some operations specified as client operations and others as server operations. Most typically the client and server are on the same machine, but they can be on different machines. That’s a distributed implementation.
Thanks for the help. That definition fits nicely with the “distributed system” idea I found, if the client and server can be on different machines.
It may also refer to the case of having multiple processing units. The specification allows each processing unit to maintain its own error status, rather than requiring atomic updates to a centralised state.
That’s interesting. I had to look up “atomic” as a programming term. By multiple processing units, do you mean the client on one processing unit and the server on another, or the server itself being made up of multiple processing units?
I’ve thought of the client-server model in OpenGL as just “one machine/processing unit as the client” and “one machine/processing unit as the server” (I assumed processing unit and machine meant the same thing). It sounds like a cool idea if a server can represent multiple machines/processing units in one OpenGL application. Seems like it could be useful for really graphics-intensive programs.
I’m thinking in terms of multiple GPUs or multiple “modules” within the GPU(s).
Without the allowance for multiple error flags, it could become necessary to synchronise components for no reason other than error reporting. That is, if multiple components generated errors, you’d need to be able to determine which one happened first (in terms of which command was being processed), which is potentially significant effort for not much benefit.
Oh ok, that makes sense. I thought you meant the CPU when you said processing unit. If I consider the client on the CPU and the server as multiple GPUs in one computer, and the program on the client sends GL commands for all the GPUs to run, and they all return errors, it probably wouldn’t be worth the effort to synchronize them just to figure out which error came first. That’s probably why the spec says glGetError returns errors “in unspecified order”: the error itself matters more than exactly when it occurred.
Thanks for the replies. I understand “distributed implementation” better now. One GPU is enough for me, but it’s cool to know it’s possible to render to multiple GPUs with OpenGL. It also makes more sense now to think of the client as being on the CPU and the server on the GPU. I had been picturing the client and server both being on the CPU.