OpenGL is no longer required

What I notice is that OpenGL is used less and less. I know almost no programmers who still use it. And what’s also very strange is that OpenGL isn’t being developed further; it’s been stuck at 4.6 for years.
Well, I’ve heard that Vulkan is a successor, but it’s very complicated. OpenGL is a lot easier to learn.
Why is this so?
OpenGL is a very proven thing.

OpenGL’s model of how to interact with GPUs has become quite divorced from how GPUs actually want to work. This was happening well before command-buffer APIs like Vulkan came onto the scene. But it has reached the point where OpenGL’s abstraction is so disjointed from the hardware that some GPU functionality simply cannot be exposed through the API.

OpenGL has an inherently synchronous execution model; it fakes asynchronous operation by hiding results in the server. So long as you don’t do anything that reads previously modified information, you get asynchronous execution. But this model just doesn’t work for some things.
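To see what that means in code, here is a minimal C sketch (the helper names are mine, not from any real codebase): a plain glReadPixels into client memory forces the driver to drain the pipeline, while reading into a pixel-pack buffer and waiting on a fence keeps the work asynchronous.

```c
#include <string.h>
#include <GL/glew.h>   /* or any other GL 3.2+ loader */

/* Synchronous: the driver must finish every pending command that touches
 * the framebuffer before it can copy into client memory, so this call
 * stalls the CPU until the GPU catches up. */
void read_pixels_blocking(int w, int h, void *dst)
{
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, dst);
}

/* Asynchronous: reading into a pixel-pack buffer returns immediately;
 * a fence tells us when the GPU has actually produced the data. */
static GLuint readback_pbo;
static GLsync readback_fence;

void read_pixels_async_start(int w, int h)
{
    glGenBuffers(1, &readback_pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, readback_pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0); /* offset 0 into the PBO */
    readback_fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Poll this later; once the fence has signaled, mapping is cheap. */
int read_pixels_async_finish(int w, int h, void *dst)
{
    if (glClientWaitSync(readback_fence, 0, 0) == GL_TIMEOUT_EXPIRED)
        return 0;  /* GPU not done yet; try again next frame */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, readback_pbo);
    const void *src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                       (GLsizeiptr)w * h * 4, GL_MAP_READ_BIT);
    memcpy(dst, src, (size_t)w * h * 4);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteSync(readback_fence);
    return 1;
}
```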

In particular, ray tracing. Making GPU ray tracing work under the model you see in D3D and Vulkan is pretty much impossible under OpenGL’s abstraction. You need command buffers and explicit synchronization to make this functionality viable, and trying to shoehorn that into OpenGL is something even NVIDIA kind of failed at doing.
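For contrast, here is a heavily abridged sketch of what Vulkan makes the application spell out. Every handle is assumed to come from the large amount of setup code not shown, and in real code the KHR entry points below are loaded with vkGetDeviceProcAddr; this is only meant to show the shape of the command-buffer model, not to be a complete program.

```c
#include <vulkan/vulkan.h>

/* Assumes: cmd is a command buffer in the recording state; buildInfo and
 * rangeInfo describe an acceleration structure over the whole scene;
 * rtPipeline and the four region structs are a ray-tracing pipeline and
 * its shader binding table, created by setup code not shown here. */
void record_trace(VkCommandBuffer cmd,
                  const VkAccelerationStructureBuildGeometryInfoKHR *buildInfo,
                  const VkAccelerationStructureBuildRangeInfoKHR *rangeInfo,
                  VkPipeline rtPipeline,
                  const VkStridedDeviceAddressRegionKHR *raygen,
                  const VkStridedDeviceAddressRegionKHR *miss,
                  const VkStridedDeviceAddressRegionKHR *hit,
                  const VkStridedDeviceAddressRegionKHR *callable,
                  uint32_t width, uint32_t height)
{
    /* 1. Build the acceleration structure: the explicit "here is the whole
     *    scene" step that has no OpenGL equivalent. */
    vkCmdBuildAccelerationStructuresKHR(cmd, 1, buildInfo, &rangeInfo);

    /* 2. Explicit synchronization: rays must not be traced until the
     *    build has finished writing the acceleration structure. */
    VkMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_ACCELERATION_STRUCTURE_WRITE_BIT_KHR,
        .dstAccessMask = VK_ACCESS_ACCELERATION_STRUCTURE_READ_BIT_KHR,
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_ACCELERATION_STRUCTURE_BUILD_BIT_KHR,
        VK_PIPELINE_STAGE_RAY_TRACING_SHADER_BIT_KHR,
        0, 1, &barrier, 0, NULL, 0, NULL);

    /* 3. Launch one ray per pixel. */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_RAY_TRACING_KHR, rtPipeline);
    vkCmdTraceRaysKHR(cmd, raygen, miss, hit, callable, width, height, 1);
}
```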

OpenGL is “proven” for a model of GPU behavior that doesn’t exist anymore.

Hello.
I’m a developer using OpenGL in my projects, because it’s the first and only graphics API that I know right now and because it is good enough for me. I don’t need the highest performance possible from the newest graphics cards.
I’ve heard many times people saying that OpenGL is no longer used, or is used less and less, or that it is dead. And yes, I agree that its successor, Vulkan, is much better, and it looks like it has a long and bright future. My concern is what to do with my own projects that use OpenGL, and also with the many other projects that might still depend on it. I have already put a lot of effort into learning OpenGL, and even though learning Vulkan is very appealing to me and I am considering it, I feel very bad about throwing away lots of good, working OpenGL code and replacing it with Vulkan.
I’m not an expert in graphics card technology and I don’t know what the future might look like. I only know that lots of people just say things. My question is: should I ditch OpenGL now if I want my projects to live for the next 10+ years, or is OpenGL not really going away as soon as people say? I’m only asking for some advice.

OpenGL is still fine for certain use cases. I’ve recently done a bunch of coding with it; it was pleasant to use, and the areas where I missed some more explicit control were made up for by other areas where I got the benefit of its simpler abstractions.

The main benefit of OpenGL in 2024 is precisely that it is so old and hasn’t been updated for so long. It has become a stable and mature platform to code for (yes, even on Intel graphics). If you just need to whip up some 3D code without worrying about design considerations such as planning things out in advance, and if the creaking object model and occasionally baroque parts of the API are something you can deal with (or don’t matter much for the code you’re writing), it still works quite well.

There’s a sufficiently huge legacy OpenGL codebase that it’s not going to die any time soon.

I think that as long as glBegin and glEnd still work, there’s nothing to fear from OpenGL 4.6.
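For anyone who hasn’t seen it, that’s the legacy immediate-mode style in question: one driver call per vertex, deprecated since GL 3.0 but still accepted by compatibility-profile contexts. A minimal sketch:

```c
#include <GL/gl.h>

/* Legacy immediate mode: no buffers, no shaders, one call per vertex.
 * Works only in a compatibility-profile (or pre-3.0) context. */
void draw_triangle(void)
{
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();
}
```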

And what about extensions: are there still any innovations happening there?

I don’t know about you, but I personally never cared about or used legacy OpenGL with the fixed-function pipeline. What OpenGL 4.6, or even 4.3, has to offer is enough for me.

Alfonse mentioned ray tracing. I don’t personally need ray tracing right now either. I always thought that OpenGL has its own advantages compared to other APIs, mainly its ease of use. Or is it too big a burden for graphics vendors to implement OpenGL in their drivers because of its old age and abstraction?


When I took my first steps with OpenGL, you couldn’t do anything else.
Back then with Win98 and NT4 there was nothing other than OpenGL 1.x.


I’ll have to find out how Vulkan handles ray tracing. As far as I know, Vulkan also uses GLSL 4.6. But I could be wrong.

Vulkan doesn’t use any GLSL versions. It uses SPIR-V. There’s a shader compiler for a dialect of GLSL which can generate SPIR-V suitable for consumption by Vulkan, though the Vulkan dialect of GLSL is incompatible with GLSL meant for OpenGL.

The Vulkan ray tracing functionality requires a version of SPIR-V that also incorporates ray tracing functionality. And there’s a GLSL extension that incorporates enough syntax to access it through Vulkan’s GLSL dialect.
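To make the toolchain concrete, here is a small sketch using the shaderc C API, one of several ways to get from GLSL to SPIR-V (glslc and glslangValidator do the same job offline). The shader string is a deliberately trivial Vulkan-dialect fragment shader, not a ray-tracing one; note the set = 0 qualifier, which OpenGL’s GLSL dialect would reject.

```c
#include <stdio.h>
#include <string.h>
#include <shaderc/shaderc.h>

/* A trivial fragment shader in Vulkan-dialect GLSL. The explicit
 * "set = 0" descriptor-set qualifier is Vulkan-only syntax. */
static const char *frag_src =
    "#version 460\n"
    "layout(set = 0, binding = 0) uniform sampler2D tex;\n"
    "layout(location = 0) in vec2 uv;\n"
    "layout(location = 0) out vec4 color;\n"
    "void main() { color = texture(tex, uv); }\n";

int main(void)
{
    shaderc_compiler_t compiler = shaderc_compiler_initialize();
    shaderc_compilation_result_t result = shaderc_compile_into_spv(
        compiler, frag_src, strlen(frag_src),
        shaderc_glsl_fragment_shader, "example.frag", "main", NULL);

    if (shaderc_result_get_compilation_status(result) !=
        shaderc_compilation_status_success) {
        fprintf(stderr, "%s\n", shaderc_result_get_error_message(result));
        return 1;
    }

    /* The output is a SPIR-V binary; this, not GLSL, is what Vulkan sees. */
    printf("compiled to %zu bytes of SPIR-V\n",
           shaderc_result_get_length(result));

    shaderc_result_release(result);
    shaderc_compiler_release(compiler);
    return 0;
}
```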

The primary issue that makes ray tracing basically impossible for OpenGL is that, fundamentally, you’re talking about a scene. OpenGL doesn’t know anything beyond the current state and the next rendering call. There’s a lot of explicit stuff that goes on in the implementation which OpenGL’s synchronous model hides from you. But ray tracing basically breaks this, requiring the user to actually deal with some of that explicit stuff. Which OpenGL has no mechanisms to actually do.

If you don’t care for Vulkan’s complexity, migrate to WebGPU / the Rust library wgpu (same API, but the latter is for native code). It’s modernised, cross-platform (Android and iOS native support reputedly on the cards), and works extremely well.

I find WebGPU / WGSL overall cleaner and simpler to use than OpenGL 3, WebGL 2.0, Unity + HLSL etc. which I used in the past. You can try it out in the latest Chrome and Firefox Nightly (with some config).

If you don’t want to use wgpu itself in native code, WebGPU apps are apparently cross-platform portable via e.g. Electron which uses Chromium to supply the majority of underlying logic.

Here’s a compute / render sample, among others. I’m currently doing CPU + GPU compute in parallel using worker threads (WebGPU running from inside a web worker) and combining results; also using WASM modules compiled for web. The whole experience is quite gratifying, seeing how far we’ve come these last few years.


@Nick_Wiggill


As the name suggests, it looks a lot like WEB. I came across the following demos.

But the preview window remains white.

SPIR-V is supported in OpenGL 4.6.
If I understand you correctly, Vulkan understands the GLSL 4.6 language, and you can also use other shader languages that OpenGL 4.6 doesn’t support?

Vulkan doesn’t use any GLSL versions.

When I look in here, this looks a lot like GLSL.

It all works for me in Chrome and FF Nightly. Your browser probably isn’t configured. Press Ctrl-Shift-I and inspect the console. Better yet, enable it:

You can enable the chrome://flags/#enable-unsafe-webgpu flag and restart Chrome.

To enable WebGPU in Firefox Nightly: type about:config in the address bar, and set dom.webgpu.enabled and gfx.webgpu.force-enabled to true.

I have no idea what WEB is. All I know is that wgpu is what most of the Rust game libraries and frameworks are now using. WebGPU has a future, since Rust is clearly on the rise.

But what does that have to do with ray tracing?

… so what?

Vulkan, the API, doesn’t use GLSL. It uses SPIR-V. You (or any tutorial or application) can take appropriate GLSL and compile it into SPIR-V, which you then feed to Vulkan.

But that doesn’t mean Vulkan is using GLSL. It’s still using SPIR-V.
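Concretely, the hand-off looks like this; a minimal sketch, where spirv_words and spirv_size are assumed to come from a compiler such as glslc or shaderc:

```c
#include <vulkan/vulkan.h>

/* Vulkan never sees shader source text: the API takes a finished
 * SPIR-V binary, whatever language it was compiled from. */
VkShaderModule make_module(VkDevice device,
                           const uint32_t *spirv_words, size_t spirv_size)
{
    VkShaderModuleCreateInfo info = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = spirv_size,    /* size in bytes */
        .pCode    = spirv_words,   /* SPIR-V words, not GLSL text */
    };
    VkShaderModule module = VK_NULL_HANDLE;
    if (vkCreateShaderModule(device, &info, NULL, &module) != VK_SUCCESS)
        return VK_NULL_HANDLE;
    return module;
}
```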


I changed it, restarted Chrome, but it still doesn’t work.
Could it be because I only have an Intel graphics card?
Or even the OS?
I’m using Linux Mint 64bit.


I wanted to try this too, but I can’t find the option gfx.webgpu.force-enabled.
That’s regardless of whether I use stable or Nightly.

I’m not sure about Linux, I’m on Windows. Intel won’t make a difference – on my machine, WebGPU runs on both my integrated Intel card and my RTX.

WebGPU’s reasonably mature, so if you figure out the switches to get it running, you should be good.

You could also try Chrome Canary which may have it enabled by default… Otherwise just google around for how to get it working in Linux, should not be too hard to find the latest way to get it set up.

Hopefully I understand it correctly now: SPIR-V can be compared to an EXE, and GLSL is a compiled language like C/C++/Pascal?



Apparently Chrome for Linux doesn’t support it yet.

What’s strange is that you can already set the flag.

I seem to remember that this was also the case with WebGL, but now it runs on almost all browsers.

I also tried the following

[Screenshot from 2024-05-29 17-05-43]

I tried it today at my office, which runs Windows 10. Without my having to change anything, the examples ran straight away in regular Chrome.
I can well imagine that it will soon run out of the box on Linux too. It was no different with WebGL.