long as the latest GPU capabilities and fastest access methods are available to developers,
See, if all the old cruft stays forever, every new extension has to be cross-checked against every old feature. And even if the application sticks to the ‘fast path’, the driver MUST assume that you COULD make use of old features at ANY time, so this adds extra code (which can contain bugs) that should not be needed in the first place.
and there is clear guidance on what those fast paths are and how to use them
Not true. Where is this clear guidance? The truth is, there are a lot of assumptions and myths on the internet about how to achieve maximum performance with OpenGL today, but if you then try them out, it often doesn’t work that way. For instance, I recently tried to replace all the matrix-stack-related code in my own engine with UBOs, in a “pure” GL3.1 manner. Until recently I could not match glLoadMatrix’s performance, because UBO buffer updates slowed me down to a crawl. None of the “praised” methods for updating a buffer object with new data worked.
I will report on that in a few days. But this example again shows that there is no clearly documented way to achieve maximum performance in this (and many other) scenarios.
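For reference, here is a minimal sketch of the two paths I am comparing; the buffer name, binding point and the `modelViewMatrix` variable are made up for illustration:

```c
/* Deprecated matrix-stack path: */
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix);          /* 16 floats, driver-managed */

/* "Pure" GL3.1 path: the same matrix goes into a uniform buffer. */
GLuint matrixUBO;
glGenBuffers(1, &matrixUBO);
glBindBuffer(GL_UNIFORM_BUFFER, matrixUBO);
glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, matrixUBO);   /* binding point 0 */

/* Per draw call: this is the update that slowed things to a crawl. */
glBindBuffer(GL_UNIFORM_BUFFER, matrixUBO);
glBufferSubData(GL_UNIFORM_BUFFER, 0, 16 * sizeof(GLfloat), modelViewMatrix);
```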
there’s no need to stab those with working codebases in the back
Somehow this false prejudice persists in people’s minds :mad: The deprecation model will not break existing code!!!
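Just to illustrate why, assuming a WGL setup with the ARB_create_context extension already loaded: the removal of deprecated features is strictly opt-in.

```c
/* Deprecated functionality only goes away if you explicitly ask for a
 * forward-compatible context; an unmodified old codebase never does. */
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);
```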
That’s the Microsoft DX mindset. They’re a monopoly. They can do that.
Well, the success of DX clearly shows that this mindset actually works better than sticking to the past forever.
And people often forget that just because DXn comes out, DXn-1 will not just vanish. If you are unwilling to change anything, you can stick to your favourite DX version if you want. But in reality, people are mostly looking forward to the next DX version, eager to try out the new features it brings.
Real commercial apps? Ha! Hardly. They have better things to do with company profit than re-invent the wheel, just because some bigshot supplier company said so.
It’s sometimes astonishing how little the developers of those apps actually know about modern OpenGL, because they just don’t feel the pressure to keep up with the development of technology.
Seeing a video like this:
in the year 2009 makes me just laugh. Wow… they actually managed to move from immediate-mode rendering to VBO-based rendering 6 years after the introduction of VBOs.
Switching from IM to VBOs is certainly not reinventing the wheel… but it does require redesigning the whole renderer. And obviously it has been rewarded with much better performance. And the customers will surely like it.
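For anyone who hasn’t made that switch yet, here is roughly what it boils down to (vertex data and names invented for the example):

```c
/* Old immediate-mode path: one API call per vertex. */
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();

/* VBO path: upload the data once, then draw from GPU memory. */
static const GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glEnableClientState(GL_VERTEX_ARRAY);       /* GL 2.x style vertex array */
glVertexPointer(3, GL_FLOAT, 0, (void*)0);  /* sourced from the bound VBO */
glDrawArrays(GL_TRIANGLES, 0, 3);
```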
So long as the older GL features don’t get in the way of the fastest access methods, who really cares if they’re still there.
I am not a driver programmer at nVidia or ATI, but my gut feeling as a programmer tells me that having to support (i.e. emulate) every old feature bloats the code; and more code means more bugs. Additionally, having to assume that the API user might trigger any of the old obscure features at any time means having to do checks for them all over the code.
nVidia claimed some time ago that they spend around 5% of driver time on GL object-name checks and lookups. That is why Longs Peak would have introduced GL-provided handles for objects. The necessary checks/hashmaps/whatever could simply have gone away with it, leaving faster and more stable driver code.
GL3.0 tries to get close to this by forbidding applications to supply their own names for textures and any other GL objects.
But, surprise, surprise: ARB_compatibility reintroduces name-generation-on-first-bind semantics. The driver has to assume that I create my own names, and therefore all that 5% overhead stays in the driver: a clear example where old features get in the way of faster methods.
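To make the two naming semantics concrete (the value 12345 is of course made up):

```c
/* GL3.0's intent: only driver-generated names are valid. */
GLuint tex;
glGenTextures(1, &tex);               /* the driver hands out the name */
glBindTexture(GL_TEXTURE_2D, tex);

/* ARB_compatibility semantics: binding any unused number silently
 * creates the object on first bind, so the driver must validate and
 * look up EVERY incoming name -- that is the ~5% overhead above. */
glBindTexture(GL_TEXTURE_2D, 12345);  /* "12345" becomes a texture */
```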