is the D3D way of doing things.
You seem to say that as though the D3D way of doing things is categorically bad.
How would you solve the push/pop issue? If you forget one of them, finding the source is a problem.
Pushing and popping can be dealt with relatively easily in C++. If you encapsulate the push/pop calls in a C++ object (constructor pushes, destructor pops), you’ve got nothing to worry about. Not that I’m suggesting that the internal GL expose such an API, especially since it takes all of 2 minutes to write such a class.
Also, I guess it can happen that the driver won’t report the error???
Setting a texture parameter on the wrong object is not something that the driver can detect as an error. Setting the parameter on any object is legal; it is a semantic error, not an API error.
You need established names. Making object creation automatic is just icing on the cake, removing that capability doesn’t make it much easier. Instead of creating the object you’d throw an error.
No, if glGen* had control over the “names”, then it could generate names as it saw fit. The names in question could be actual pointers, or something that converts into a pointer after one quick memory access, or a relatively short lookup.
Because GL is forced to accept any object name regardless, the implementation must have some way to map any arbitrary object to a pointer to the internal object. Rather than having a simple function or even a cast operation, it becomes a complex search operation.
But you still need to check whether you ‘know’ the object the client app wants to act on.
Kind of. I would actually prefer separate debug and release versions of the implementation. In debug, it can do glGetError-style checks and so forth to make sure that the texture object name really exists. In release, however, it should not even bother; just produce undefined behavior/crashes. Granted, that’s somewhat wishful thinking, but it would provide a non-negligible speed increase in situations where bindable object state is constantly in flux (which is, admittedly, not that frequent).
Think about Win32 programming (if you’ve ever done any). HWNDs are handles to windows. You have to call a function to create a valid one. If you call a Win32 function with an invalid HWND, it will fail (in debug, with an error of some kind). You don’t call a “bindHWnd” function; you just use the one you have. It doesn’t impose much overhead in terms of searching, because the contents of an HWND are controlled by the OS: it can put whatever information into an HWND it takes to keep search/validation times low.
The real question is: if you had to write OpenGL all over again, from scratch (as a C-based API), with no consideration for backwards compatibility, would you keep the current paradigm or switch to the one used by shader objects? I think, if the ARB had it to do over again, they’d go for the shader-object (i.e., object-based) version.