Will Vulkan effectively "replace" OpenGL or not?

The purpose of this thread is to get a clear answer to the question in the thread title, as there seem to be a lot of claims, doubts and dubious statements about OpenGL and Vulkan out there at the moment.

From what I have seen in other forums, news reports about Vulkan and internet discussions, there seems to be a general notion that Vulkan will be a replacement for OpenGL. For example, a popular question on Stack Exchange asks about this, and a number of news websites that I will not cite here have made similar remarks, either in their headlines or in their texts.

However, from what I understand from discussions here and the information given in the presentation, Vulkan is supposed to be a low-level API that will exist alongside OpenGL. It was announced that there will still be major updates to OpenGL in the future and that support for OpenGL will continue. Also, another thread in this forum argued that Vulkan might be overwhelming for most computer graphics programming beginners, and that they would be better off learning, e.g., OpenGL 3.2+ core profile or similar. In addition, as far as I understand, it might not even be beneficial in many cases to use Vulkan over OpenGL - for example, when the application/rendering runs in a single thread, or when reaching the best possible performance is not a primary goal. It seems to me that Vulkan is targeted at AAA video game developers, game engines, console game developers and high-performance rendering. That list does not include a lot of industry use-cases in which OpenGL might be the better solution.

On the other hand, someone suggested that driver support for OpenGL by graphics card developers might decline in the future due to its complexity, and compatibility will be reduced, meaning that Vulkan will be the future in the long run. Is this a likely case?

OpenGL will continue to exist, if only as a legacy API that you could implement on top of Vulkan.

Nearly all next-gen rendering APIs (D3D12 and Metal) use a similar command-buffer-based system.

Here are the facts on the ground:

1: The Khronos Group is on record: they plan for OpenGL and Vulkan to co-exist.

2: Vulkan is intended to target hardware capable of OpenGL ES 3.1 or OpenGL 4.1 or better. Hardware incapable of these will likely not be able to implement Vulkan.

3: Vulkan does not (as far as we know) provide access to any hardware functionality that OpenGL 4.5 is not also capable of accessing (plus ARB_bindless_texture).

Fact #1 really means very little. Plans change, and such a statement could simply be rhetoric to dissuade people from panicking, thinking that they must immediately jump to Vulkan or be left behind.

Fact #2 is more important. Even if Khronos immediately reneged on their plan and every IHV ditched OpenGL support on Vulkan-capable hardware, the fact remains that there is a ton of hardware out there that can’t support Vulkan. Valve’s Steam survey gives us a good picture of the desktop realm. GL 4.1 is approximately D3D11, and while that covers a fair portion of existing hardware, it’s not even half of the computers in the survey. The mobile space is in even worse shape, as ES 3.1-class hardware is pretty cutting-edge: only GPUs released in the last year or so.

So for the immediate future (2 years or so), OpenGL’s not going anywhere. There’s just too much need to support older hardware.

Fact #3 is relevant, as this means that Vulkan usage will be primarily about performance, not functionality. Which also means that OpenGL is more-or-less complete, relative to the current landscape of GPUs. ES will be updated as mobile hardware plays catchup with desktop. But desktop GL is mostly done.

And that means that Khronos doesn’t really have to do much to keep OpenGL up-to-date. It will mostly be about IHVs providing implementations. And they have more incentive than Khronos does to keep OpenGL implementations working: they don’t want to break old programs. So even if Khronos’s commitment to OpenGL falters, it will still exist and still have implementations for some time to come.

On the other hand, someone suggested that driver support for OpenGL by graphics card developers might decline in the future due to its complexity, and compatibility will be reduced, meaning that Vulkan will be the future in the long run. Is this a likely case?

Define “in the future”. In two years, I would expect to see OpenGL implementations still around. In 5 years? In 10 years?

10 years ago, we didn’t even have smartphones. So guessing what will happen so far into the future is not viable.

However, consider this.

The core/compatibility distinction in OpenGL launched in GL 3.2. This was back in 2009. As of yet, not one implementation of OpenGL on Windows ships without the compatibility profile. Not Intel. Not AMD. Not NVIDIA. Oh sure, Linux and MacOSX have core-only implementations. But not Windows.

Why? Because too much software relied on it.

At the end of the day, IHVs exist to sell GPUs, and breaking mission-critical software doesn’t sell GPUs. IHVs may stop updating their implementations, putting them in a form of Microsoft’s “extended support”. They may become more and more bug-prone due to less testing. But I doubt they’ll stop in the foreseeable future.

Thanks Alfonse, this answers most of my questions and your points seem well reasoned.

I remember there were rumours back in the day that OpenGL 3 didn’t see the big changes some of us expected because certain interest groups didn’t want big changes to the API - and that this was why there was no move towards a more object-oriented (or at least struct-based) design. If that, along with keeping compatibility, was an argument back then, I do not see why it would not still be an argument today.

However, that opens up another fear I have about OpenGL, namely that after the release of Vulkan, vendors might see it only as a compatibility graphics API, existing solely to support “old implementations” and thus not worth real updating or bug-fixing. This would mean that issues occurring in a few (technically fine, but atypical and therefore perhaps not properly tested) use-cases might never be fixed. Driver performance optimisations for OpenGL might also become rarer than before. I think this is what you meant when you said that future implementations could become more “bug-prone”. That would probably be the case if vendors saw OpenGL in the way described, which is not necessarily so. In my opinion OpenGL deserves full support, since it is a high-level API and as such fundamentally different from the Vulkan approach. I would not mind, however, if an OpenGL 5 core version (I don’t think there has even been talk of one being planned) featured bigger API changes to bring it closer to Vulkan - essentially a very high-level wrapper around it, if that is even possible without large cutbacks.

The question is whether Khronos sees “keeping OpenGL up-to-date” as merely adding new (Vulkan) functionality to OpenGL 4, or as actually releasing new major versions as well. If there are no major changes any more, then learning, teaching, or starting new projects in OpenGL could be seen as riding a dead horse. On the other hand, if Khronos clearly commits to future OpenGL releases, people will see OpenGL for what it will be: a high-level alternative to Vulkan for any application whose rendering is not CPU-bound. In that case I would also expect the driver support to be good.

The biggest problem with OpenGL is that its model is not representative of how modern driver+GPU stacks are set up.

All the asynchronous behaviour and the client-server model that exist now are built on top of an immediate-mode rendering stack where data is passed straight from the app to the GPU. It is also not thread-safe. This has far-reaching consequences:

For example, glBuffer(Sub)Data requires that the driver at least copy the data to another buffer before it can return, because the application is allowed to reuse its buffer immediately; the same goes for glUniform* calls. Similarly, nearly all glGet* calls require a round trip to the driver before they return, unless the values are shadowed in user space. Each draw call uses the current state (blend state, current program, etc.), so changes to that state must be prevented from affecting draw calls already in transit. And when one part of the state changes, the driver can’t know whether other parts will change as well, so it often delays committing the changed state until a draw call is issued (unless it makes some assumptions).

Vulkan, on the other hand, is explicitly reentrant (according to Qt’s definition of reentrancy) throughout. Buffer data can only be filled by mapping the buffer to memory and writing to it, or by copying from another buffer. Non-mappable buffers are possible (buffers that reside solely on the GPU). All required draw state is encompassed in “render pass” objects, which are referenced by the command buffers that use them.

(note there is some speculation here about vulkan’s features)

I remember there were rumours back in the day that OpenGL 3 didn’t see the big changes some of us expected because certain interest groups didn’t want big changes to the API - and that this was why there was no move towards a more object-oriented (or at least struct-based) design. If that, along with keeping compatibility, was an argument back then, I do not see why it would not still be an argument today.

Two reasons.

1: Vulkan is a new API with a new name. Therefore, those “certain interest-groups” can’t argue that breaking compatibility will hurt them, since they have no existing code that’s compatible with Vulkan.

2: Vulkan is being developed not just by a lot of IHVs, but also by a lot of game engine developers. Epic, Valve, Activision/Blizzard, EA, Unity, and others are all quite invested in its success. Vulkan is targeted specifically at their needs, not those of the “certain interest-groups” you magnanimously choose not to name. So they have very little standing to interfere this time.

However, that opens up another fear I have about OpenGL, namely that after the release of Vulkan, vendors might see it only as a compatibility graphics API, existing solely to support “old implementations” and thus not worth real updating or bug-fixing.

For Windows desktop OpenGL, that is entirely possible. Even so, it won’t happen overnight; it will take several years for bitrot to set in.

Also, NVIDIA is heavily invested in keeping OpenGL functional. So they’re highly unlikely to let their implementation lie fallow in the near term.

The question is whether Khronos sees “keeping OpenGL up-to-date” as merely adding new (Vulkan) functionality to OpenGL 4, or as actually releasing new major versions as well.

The only “Vulkan functionality” that would be appropriate for OpenGL would be SPIR-V support (which I could easily see going into OpenGL 4.6). The OpenGL ARB has shown an unwillingness to name a new OpenGL major release, even when it is warranted. For example, there is a class of 4.x hardware that supports significant features not provided by 4.5 core (ARB_bindless_texture and ARB_sparse_texture). They would probably be appropriate for a 5.0 release, but the ARB has not made one. And probably won’t; they just tell people to use the extensions.

So I wouldn’t expect to see a new major version of OpenGL until hardware changes substantially.

The biggest problem with OpenGL is that its model is not representative of how modern driver+GPU stacks are set up.

It should be noted that OpenGL was never representative of how GPUs worked. It was always an abstraction, which could be wrapped around more or less anything. It has only been recently when the problems in the abstraction have become significant enough to warrant fixing.

[QUOTE=ratchet freak;31393]The biggest problem with OpenGL is that its model is not representative of how modern driver+GPU stacks are set up.
[/QUOTE]

And reflecting the underlying hardware internals is not the purpose of an API. A well-designed API should focus on usability and on interfaces that make sense for the applications it’s intended for. Otherwise I would call it a driver wrapper…

[QUOTE=Ident_;31388]The purpose of the thread is to achieve a clear answer to the question in the thread titles as there seems to be a lot of claims, doubts and dubious statements about OpenGL and Vulkan out there at the moment.

From what I have seen in other forums, news reports about Vulkan and internet discussions, there seems to be a general notion that Vulkan will be a replacement for OpenGL. For example, a popular question on Stack Exchange asks about this, and a number of news websites that I will not cite here have made similar remarks, either in their headlines or in their texts.

However, from what I understand from discussions here and the information given in the presentation, Vulkan is supposed to be a low-level API that will exist alongside OpenGL. It was announced that there will still be major updates to OpenGL in the future and that support for OpenGL will continue. Also, another thread in this forum argued that Vulkan might be overwhelming for most computer graphics programming beginners, and that they would be better off learning, e.g., OpenGL 3.2+ core profile or similar. In addition, as far as I understand, it might not even be beneficial in many cases to use Vulkan over OpenGL - for example, when the application/rendering runs in a single thread, or when reaching the best possible performance is not a primary goal. It seems to me that Vulkan is targeted at AAA video game developers, game engines, console game developers and high-performance rendering. That list does not include a lot of industry use-cases in which OpenGL might be the better solution.

On the other hand, someone suggested that driver support for OpenGL by graphics card developers might decline in the future due to its complexity, and compatibility will be reduced, meaning that Vulkan will be the future in the long run. Is this a likely case?[/QUOTE]

I don’t even think “Vulkan” will have any working implementation, but my worry is that OpenGL “development” may stop. CAD application developers should put more pressure on the current ARB, or form a new, serious ARB focused solely on general-purpose OpenGL with an emphasis on CAD applications.

Ignoring the content of your post for a moment, you seem to have contradicted yourself. Twice.

1: If there are never any “working implementation” of Vulkan… why would they stop “development” on OpenGL? You don’t stop making your old product until its replacement is viable. Coca-Cola learned that with New Coke.

2: Furthermore, a “focus on CAD applications”, by definition would not be “general purpose”.

As for the content, Vulkan won’t have any new features compared to OpenGL 4.5 (plus ARB_bindless_texture/sparse_texture/sparse_buffer). So… exactly what “development” of the OpenGL specification will be needed in the immediate future? As long as the drivers still work, it will work. I could understand that you might be concerned about implementations not being updated, but that’s not something the ARB can enforce.

The ARB can’t even make people implement their specs correctly. What makes you think they can make people implement them at all?

If CAD developers want to insulate themselves from the possibility of OpenGL implementations falling by the wayside in 5-10 years, then they should get together and write one on top of Vulkan. That way, they know it’ll always work, and if Vulkan becomes obsolete, they can port it to some other backend.

[QUOTE=Ident_;31392]Thanks Alfonse, this answers most of my questions and your points seem well reasoned.

I remember there were rumours back in the day that OpenGL 3 didn’t see the big changes some of us expected because certain interest groups didn’t want big changes to the API - and that this was why there was no move towards a more object-oriented (or at least struct-based) design. If that, along with keeping compatibility, was an argument back then, I do not see why it would not still be an argument today.

However, that opens up another fear I have about OpenGL, namely that after the release of Vulkan, vendors might see it only as a compatibility graphics API, existing solely to support “old implementations” and thus not worth real updating or bug-fixing. This would mean that issues occurring in a few (technically fine, but atypical and therefore perhaps not properly tested) use-cases might never be fixed. Driver performance optimisations for OpenGL might also become rarer than before. I think this is what you meant when you said that future implementations could become more “bug-prone”. That would probably be the case if vendors saw OpenGL in the way described, which is not necessarily so. In my opinion OpenGL deserves full support, since it is a high-level API and as such fundamentally different from the Vulkan approach. I would not mind, however, if an OpenGL 5 core version (I don’t think there has even been talk of one being planned) featured bigger API changes to bring it closer to Vulkan - essentially a very high-level wrapper around it, if that is even possible without large cutbacks.

The question is whether Khronos sees “keeping OpenGL up-to-date” as merely adding new (Vulkan) functionality to OpenGL 4, or as actually releasing new major versions as well. If there are no major changes any more, then learning, teaching, or starting new projects in OpenGL could be seen as riding a dead horse. On the other hand, if Khronos clearly commits to future OpenGL releases, people will see OpenGL for what it will be: a high-level alternative to Vulkan for any application whose rendering is not CPU-bound. In that case I would also expect the driver support to be good.[/QUOTE]

The only functionality that I can see as viable to “backport” is support for SPIR-V. That would enable offline shader compilation. Otherwise Vulkan is about simplification and the removal of “dead weight” functionality.
I see GL support as a library layer on top of Vulkan. GL driver development is complicated and expensive. A compatibility library is basically “forever support”.

Actually, there is one other thing OpenGL could take from Vulkan: descriptor sets/layouts/etc. It would effectively replace numerous binding calls with a single descriptor set bind. And (besides adding a second way to assign resources to shaders), there’s no real downside to this from an OpenGL perspective.

I bring this up because I noticed that SPIR-V has some language for associating variables with descriptor indices (as well as standard OpenGL indices). It would be really handy if OpenGL and Vulkan could use the same layout technology. Oh sure, you’d still have the OpenGL ‘binding’ and ‘location’ there, if you wanted to use a SPIR-V shader that was compiled for pre-4.6 OpenGL.

But SPIR-V generated for consumption by Vulkan would likely not have OpenGL’s binding indices. So OpenGL should just use Vulkan’s way.

It also would probably help performance. And it would deal with the bindless_texture issue (specifically, that Intel hardware can implement conceptually bindless textures, but not the way ARB_bindless_texture requires it). With unbounded descriptor sets, you have the effects of bindless without that particular implementation. And you don’t have to deal with texture residency; whatever is in the descriptor set that’s currently in use will be made resident by the implementation when you bind the descriptor set.

Let’s think about it this way.

When do people update their hardware or feel they need to?
When the hardware cannot handle their favourite applications, games, CAD, OS, etc.

The hardware must support the applications they use; otherwise they will not buy it. The hardware vendors (who are also the driver implementers) want to sell, and therefore they have to keep supporting the major applications in use. Otherwise their hardware will not find a market outside of test labs. In other words, the hardware (including the driver) is driven by software.

How many major applications use only OpenGL? You count.

Are they going to rewrite their rendering path to use a different API and abandon a working one? Depending on how they abstracted their rendering engine, they may add a new rendering path - it could be as simple as a plugin.

So unless software vendors find something better that can make a significant difference in terms of performance, stability, reliability, development resources, and ease of maintenance, they are not going to switch to it, or even waste time implementing a new graphics path.

Personally I don’t believe OpenGL is replaceable within the next 100 years. It’s like trying to replace C++ by inventing Java, C#, Visual Basic…

:wink:

[QUOTE=gloptus;31463]When do people update their hardware or feel they need to?
When the hardware cannot handle their favourite applications, games, CAD, OS, etc.[/quote]

This question is irrelevant, as Vulkan will not require new hardware. Or at least, it won’t require anything (in the desktop space) that hasn’t been available for 3-4 years.

Not many. At least, not in the desktop space.

Of course, the question is loaded, as what constitutes a “major” application is up for debate. But among the big desktop applications, not many work with OpenGL at all, let alone exclusively. Even among major graphics applications (Adobe’s suite, 3DS Max, etc.), quite a few have non-OpenGL backends.

OpenGL-only applications are generally quite rare.

Nobody is suggesting they will. Indeed, the very first post made it clear that this obviously won’t happen.

The question is not “Will OpenGL die the day Vulkan comes out?” The question is “What is the scope of the ‘danger’ that Vulkan poses to OpenGL’s existence?”

Your statement contradicts existing facts. As has been stated many times, several “software vendors” are not merely planning to use Vulkan; they’re writing and testing Vulkan code. They are most assuredly “implementing a new graphics path”.

So it would seem that what you consider reasons for vendors to switch is not in line with what actual vendors believe. Just because those “software vendors” aren’t your kind of vendors doesn’t mean they don’t exist or are unimportant. And just because they don’t have your priorities doesn’t mean that their priorities don’t exist or are rare.

For example, Valve isn’t particularly concerned about “ease of maintenance”. Unlike CAD developers, they hire a whole team of graphics programmers to keep their graphics engine on the cutting edge, in terms of features and performance. For them, profiling, performance, and functionality are far more vital.

Those are their priorities. And they are no less valid than yours.

You have an over-developed sense of OpenGL’s importance. Sooner or later, OpenGL is going to effectively die. The best-case scenario for you is that, after 7-10 years, IHVs will sponsor an open-source project that implements OpenGL in terms of Vulkan or D3D12 or whatever. And after extensive testing, they replace their OpenGL implementations with that.

And thus, the IHVs will wash their hands of it. Oh, people can still use it if they want. And for old applications that aren’t being supported, or developers of applications who don’t feel like actually developing their applications, they’ll have to use that.

OpenGL will simply be a mid-level rendering system built on an existing API.

Also, your analogy is completely backwards. C++ is useful because it’s the lower-level language; that’s precisely why people use it today. What we’re talking about is more on the level of C vs. Fortran, when C first came out.

You are the old Fortran hand, with your thousands of lines of Fortran code, firmly convinced that Fortran will continue to exist forever. And on that note, you’re right. It’s just that it will only be used by a very tiny minority of users.

C won; Fortran lost. Fortran may not be dead, but it has been effectively marginalized.

I expect Vulkan vs. OpenGL to work the same way. They’ll compete, but as more code gets written, Vulkan and similar APIs will be what new code gets written to. A few old stalwarts will keep going with OpenGL, and the OpenGL specification itself will keep pace with new hardware. But overall, the number of direct users of the API will become decidedly insignificant.

[QUOTE=Alfonse Reinheart;31464]This question is irrelevant, as Vulkan will not require new hardware. Or at least, it won’t require anything (in the desktop space) that hasn’t been available for 3-4 years.
[/QUOTE]

It’s very relevant indeed.

Excellent. You just showed how knowledgeable you are.

No one needs to suggest this if they are going to use a new API.

I cannot see any contradictions. Logic problem?

Say that again??? OpenGL is the industry standard for accelerated real-time graphics.

[QUOTE=Alfonse Reinheart;31464]
Also, your analogy is completely backwards. C++ is useful because it’s the lower-level language; that’s precisely why people use it today. What we’re talking about is more on the level of C vs. Fortran, when C first came out.

You are the old Fortran hand, with your thousands of lines of Fortran code, firmly convinced that Fortran will continue to exist forever. And on that note, you’re right. It’s just that it will only be used by a very tiny minority of users.

C won; Fortran lost. Fortran may not be dead, but it has been effectively marginalized.

I expect Vulkan vs. OpenGL to work the same way. They’ll compete, but as more code gets written, Vulkan and similar APIs will be what new code gets written to. A few old stalwarts will keep going with OpenGL, and the OpenGL specification itself will keep pace with new hardware. But overall, the number of direct users of the API will become decidedly insignificant.[/QUOTE]

Now I agree with you. My analogy is backwards. I should have said: Assembly (Vulkan) to replace C++ (OpenGL) in favour of performance.

For lack of anything better that is cross platform and quote unquote “open”.


I explained how it was irrelevant, but you failed to show relevance. Declaring it to be relevant doesn’t make it true.

You said: “So unless software vendors find something better that can make a significant difference in terms of performance, stability, reliability, development resources, and ease of maintenance, they are not going to switch to it, or even waste time implementing a new graphics path.”

Reality said: “Many high-performance software vendors are rapidly adopting Vulkan and similar APIs, which do not have many of the features you cite.”

That’s a contradiction; your statement and reality cannot both be correct. And since reality is, well, real, it is your statement that must be incorrect.

That’s how logic works. If “software vendors” are supporting Vulkan and other APIs, even though those APIs do not offer the qualities you claim, then “software vendors” clearly do not need those things in order to support an API.

Bwa ha ha ha ha! Man, that’s a good one.

… oh wait, you’re serious. Let me laugh even harder: BWA HA HA HA HA!

Even among 3D modelling applications, OpenGL is not “the industry standard”, as many of them do support D3D as well. So only in exceptionally narrow fields is this statement even remotely true.

You can quote marketing slogans all you want, but reality is always correct. “The industry” does not consist only of CAD developers.

Only with regard to OpenGL ES in the mobile space is it “the industry standard”. And, as Ratchet pointed out, that’s only because it’s the only game in town, not because it’s better than the competition.

Let’s take your analogy at face value. I’m also going to pretend that you said “C” instead of “C++”, as many programmers reject C++ as a viable alternative to assembly (note: I don’t necessarily agree, but it’s a semi-widespread opinion). Whereas C is generally considered the lowest level that most reasonable programmers are willing to go.

So let’s ask a simple question. Why do most programmers, when seeking performance, choose not to reach below C? Why do they avoid directly writing assembly, except for certain very specific cases? There are many reasons, so let’s contrast these reasons against Vulkan v OpenGL.

Compilers are really smart; they are very good at taking C-like code and turning it into something efficient. They can do this for disparate architectures. It’s also generally clear from looking at C code whether or not it will perform reasonably well when compiled.

The equivalent in the OpenGL analogy to a C compiler is the OpenGL driver. So… have you actually used any OpenGL drivers of late? They’re rock stupid. They are terrible at taking rendering commands and turning them into something efficient. Even ignoring the minefield of bugs that most drivers contain, getting optimal performance often requires doing things a certain way. And OpenGL as an API makes it very difficult to know what that way is. Small batches, for example; it’s certainly not apparent why that’s a bad idea, if you just look at the API.

C is well-designed for efficient compilation. It gives the programmer high-ish-level constructs that they can build on, while not over-burdening them by abstracting the machine too far. The constructs it provides are useful, but the abstraction is close enough to the metal that little if any performance is lost.

Back to OpenGL. It is not well designed in this regard. The old glBegin/End stuff is absolutely atrocious, an abstraction that serves only to inhibit performance. Even buffer objects and texture objects are high enough levels of abstraction that they regularly cause problems. Many drivers have to shuffle buffers and textures around in memory, just to find the most efficient memory pool based on how you use them. Drivers sometimes have to readjust a texture’s format if you write to it with an FBO, since not all formats can be used as render targets. And there are plenty of other places where the abstraction gets in the way of performance.

Different CPU architectures are different, and assembly that can be incredibly fast on one architecture may be excessively slow on a different one, even if both CPUs consume the same instruction set. As such, compilers are able to take high-level constructs and compile them to be most efficient for the expected platform. And the C programmer doesn’t have to be burdened to know these limitations.

Back to OpenGL. Here, the analogy works a bit better, as the buffer/texture memory abstraction allows different implementations to have different arenas of memory, without burdening the OpenGL programmer with knowing about them.

However, if performance is an issue… the OpenGL programmer must know about them anyway. Take unified memory architectures, for example. I can develop a rendering algorithm that is fast on these architectures, because I know that mapped pointer access to buffers will be basically no different from just (uncached) CPU memory. I know that mapping the pointer will likely be free in cost, and I can build my entire algorithm based on that knowledge.

But on multi-tiered memory architectures, that algorithm may become unusably slow. Under Vulkan, I’d be able to tell which kind of hardware I’m working with and adapt my algorithm accordingly (perhaps turning that feature off on inappropriate hardware). With OpenGL, the best I can do is query the vendor string and only activate it based on the results. Hardly a portable solution, if someone comes out with a new unified-memory-architecture processor that I’m not aware of.

So while your analogy might seem to hold, under any significant examination they are not analogous at all. That’s why the Fortran v C parallel worked much better. Fortran is the higher-level language; its abstractions inhibit the user or otherwise make certain performance gains impossible. C gets out of the programmer’s way, and puts abstractions in place that are just high enough to be usable, but no higher.

[QUOTE=Alfonse Reinheart;31467]I explained how it was irrelevant, but you failed to show relevance. Declaring it to be relevant doesn’t make it true.

You said: “So unless the software vendors found something marginally better and it can significantly make a difference in terms of performance, stability, reliability, development resources, and ease of maintenance, they will not going to switch to it or even try waste time implementing a new graphics path.”

Reality said: “Many high-performance software vendors are rapidly adopting Vulkan and similar APIs, which do not have many of the features you cite.”

That’s a contradiction; your statement and reality cannot both be correct. And since reality is, well, real, it is your statement that must be incorrect.

That’s how logic works. If “software vendors” are supporting Vulkan and other APIs, even though those APIs do not offer the qualities you claim, then “software vendors” clearly do not need those things in order to support an API.

Bwa ha ha ha ha! Man, that’s a good one.

… oh wait, you’re serious. Let me laugh even harder: BWA HA HA HA HA!

Even among 3D modelling applications, OpenGL is not “the industry standard”, as many of them do support D3D as well. So only in exceptionally narrow fields is this statement even remotely true.

You can quote marketing slogans all you want, but reality is always correct. “The industry” does not consist only of CAD developers.

Only with regard to OpenGL ES in the mobile space is it “the industry standard”. And, as Ratchet pointed out, that’s only because it’s the only game in town, not because it’s better than the competition.

Let’s take your analogy at face value. I’m also going to pretend that you said “C” instead of “C++”, as many programmers reject C++ as a viable alternative to assembly (note: I don’t necessarily agree, but it’s a semi-widespread opinion). Whereas C is generally considered the lowest level that most reasonable programmers are willing to go.

So let’s ask a simple question. Why do most programmers, when seeking performance, choose not to reach below C? Why do they avoid directly writing assembly, except for certain very specific cases? There are many reasons, so let’s contrast these reasons against Vulkan v OpenGL.

Compilers are really smart; they are very good at taking C-like code and turning it into something efficient. They can do this for disparate architectures. It’s also generally clear from looking at C code whether or not it will perform reasonably well when compiled.

The equivalent in the OpenGL analogy to a C compiler is the OpenGL driver. So… have you actually used any OpenGL drivers of late? They’re rock stupid. They are terrible at taking rendering commands and turning them into something efficient. Even ignoring the minefield of bugs that most drivers contain, getting optimal performance often requires doing things a certain way. And OpenGL as an API makes it very difficult to know what that way is. Small batches, for example; it’s certainly not apparent why that’s a bad idea, if you just look at the API.

C is well-designed for efficient compilation. It gives the programmer high-ish-level constructs that they can build on, while not over-burdening them by abstracting the machine too far. The constructs it provides are useful, but the abstraction is close enough to the metal that little if any performance is lost.

Back to OpenGL. It is not well designed in this regard. The old glBegin/End stuff is absolutely atrocious, an abstraction that serves only to inhibit performance. Even buffer objects and texture objects are high enough levels of abstraction that they routinely cause problems. Drivers often have to shuffle buffers and textures around in memory just to find the most efficient memory pool for them, based on how you use them. Drivers sometimes have to readjust a texture’s format if you write to it with an FBO, since not all formats can be used as render targets. And there are plenty of other places where the abstraction gets in the way of performance.

Different CPU architectures are different, and assembly that is incredibly fast on one architecture may be excessively slow on another, even if both CPUs implement the same instruction set. As such, compilers can take high-level constructs and compile them to be most efficient for the target platform. And the C programmer doesn’t have to be burdened with knowing these limitations.

Back to OpenGL. Here, the analogy works a bit better, as the buffer/texture memory abstraction allows different implementations to have different arenas of memory, without burdening the OpenGL programmer with knowing about them.

However, if performance is an issue… the OpenGL programmer must know about them anyway. Take unified memory architectures, for example. I can develop a rendering algorithm that is fast on these architectures, because I know that mapped pointer access to buffers will be basically no different from plain (uncached) CPU memory. I know that mapping the pointer will likely be free, and I can build my entire algorithm on that knowledge.

But on multi-tiered memory architectures, that algorithm may become unusably slow. Under Vulkan, I’d be able to tell which hardware I’m working with and adapt my algorithm accordingly (perhaps turning that feature off on inappropriate hardware). With OpenGL, the best I can do is query the vendor string and only activate the feature based on the results. Hardly a portable solution, if someone comes out with a new unified memory architecture processor that I’m not aware of.

So while your analogy might seem to hold, under any significant examination they are not analogous at all. That’s why the Fortran v C parallel worked much better. Fortran is the higher-level language; its abstractions inhibit the user or otherwise make certain performance gains impossible. C gets out of the programmer’s way, and puts abstractions in place that are just high enough to be usable, but no higher.[/QUOTE]

My final words:

You are full of it. And you are the troll of the OpenGL forums, as someone said. You always try to argue and manipulate facts with maybe “nice” sentences, but they are far away from reality.

Direct3D will die. There will be no Direct3D 12. “Vulkan” will never see the light of day.

OpenGL will rule as the industry-leading standard for accelerated graphics for the next hundreds of years, or until computer graphics dies.

Finally,

Here’s the middle finger to everyone betting on OpenGL’s death!

[QUOTE=gloptus;31471]My final words:

You are full of it. And you are the troll of the OpenGL forums, as someone said. You always try to argue and manipulate facts with maybe “nice” sentences, but they are far away from reality.

Direct3D will die. There will be no Direct3D 12. “Vulkan” will never see the light of day.

OpenGL will rule as the industry-leading standard for accelerated graphics for the next hundreds of years, or until computer graphics dies.

Finally,

Here’s the middle finger to everyone betting on OpenGL’s death![/QUOTE]
Instead of insulting Alfonse (pretty much an argumentum ad hominem), shouldn’t you rather counter his arguments with facts and reason? Since you failed to do so, it looks to the observer as if he is right and your ego can’t handle that fact, so you got all attacky.
Also, he pretty much said that OpenGL is not gonna die as long as IHVs continue to support it, which is likely the case (this was all said before in this thread). Additionally, nobody knows the future.

[QUOTE=Ident_;31473]Instead of insulting Alfonse (pretty much an argumentum ad hominem), shouldn’t you rather counter his arguments with facts and reason? Since you failed to do so, it looks to the observer as if he is right and your ego can’t handle that fact, so you got all attacky.
Also, he pretty much said that OpenGL is not gonna die as long as IHVs continue to support it, which is likely the case (this was all said before in this thread). Additionally, nobody knows the future.[/QUOTE]

First, the insult was not aimed at “Alfonse” personally; it’s more like what that Linux nerd gave to NVIDIA. :wink:

Trying to argue with “Alfonse” is pointless and a waste of time. He keeps objecting to whatever others say, with embedded indirect insults; he’s a professional at it.

I provided facts, but he did not listen. Besides, who is he to decide whether OpenGL is going to be replaced or not? Does his opinion matter that much to anyone? There’s an industry that determines the fate of every technology.

Never say “My final words” unless you’re willing to let them be final. Otherwise, you’re just pretending to storm out of a room in a huff, to distract from the conversation at hand.

Ahem: “You are full of it. And you are the troll of the OpenGL forums as some one said.”

I fail to see how that could be anything other than personal. Since it was clearly addressed at me. Personally.

The whole “middle finger” thing wasn’t insulting; it was more confusing. Because you clearly confused my position of “I find that your argument that OpenGL will outlive Vulkan makes absolutely no sense and is based on an overly narrow view of reality” with “I think OpenGL will be dead in 2 years”.

Arguing against your argument doesn’t mean I’m arguing for the exact opposite. OpenGL will be around for a while yet.

But not for the reasons you claim.

Well, yes. False information should be corrected, so as not to mislead others into thinking that it’s true. False or malformed reasoning should be demonstrated as being such, so as not to persuade others.

I only object to things that I find don’t make sense or contradict established reality.

I did listen. As evidence for that, I responded to most of your facts.

By refuting them. Thus proving that what you provided were not facts at all.

The reason my posts are so long is that I back up my opinions with facts and reasoning. This allows people the freedom to evaluate my facts and reasoning themselves, to decide if my opinion is reasonable.

Yes there is. What you seem to consistently misunderstand is how large that industry really is. And the relative importance of the various participants within it.