It is time for a modern shading language based on C++

Looking at Vulkan 1.2, many of the issues I originally had with Vulkan are solved. It really feels like a modern API now, one designed by analysing how modern GPUs work. However, when it comes to writing the shaders, things have not really changed.
Writing shaders in GLSL just feels like an anachronism; I just cannot believe Vulkan users are really happy with this situation. What is even more astonishing to me is that Vulkan with SPIR-V was designed totally independently of GLSL and would be ready for a much more modern shading language - but there is none from Khronos.
Recently I read about the Circle C++ frontend compiler by Sean Baxter, and I couldn’t believe what feature set it has (and I could not believe that it was developed by a single person). It supports GLSL enriched with modern C++ support - basically exactly what I was always waiting for.
Hopefully Khronos is looking into this topic right now and comes up with a modern shading language that works much like the Circle C++ compiler does - combining GLSL with modern C++.
It would be fantastic if the Vulkan SDK contained such a C++-to-SPIR-V compiler, generating SPIR-V binary code for Vulkan from modern C++ shader code.

Which modern C++ features exactly are you missing? Sometimes I think it even has too many C features, such as break and continue.

When talking of “modern C++”, my highest priority is clearly on the “C++” and only secondarily on the “modern”.
First and foremost: there is no C++ at all - not even basic features like classes and templates.
I am fully aware that GPUs have restrictions concerning various dynamic features, but even compile-time C++ (“static” C++) brings a lot of productivity gains.
In short, I would say basically everything in modern C++ that can be compiled to SPIR-V should be supported.
Independent of this - if your question was more about which modern C++ features I currently use most, I would say lambda functions and atomic_ref.
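To illustrate the lambda point: below is a sketch (plain standard C++, not actual Circle syntax, and the names are illustrative) of the kind of compile-time-friendly generic code one would like to write in a shader - a small reduction parameterized by a lambda, which a SPIR-V-targeting compiler could fully inline and evaluate statically.

```cpp
#include <array>
#include <cstddef>

// A tiny generic reduction; with "static C++" in a shader, the combining
// operation is a lambda the compiler inlines away entirely.
template <typename T, std::size_t N, typename Op>
constexpr T reduce(const std::array<T, N>& values, T init, Op op) {
    for (const T& v : values) init = op(init, v);
    return init;
}

constexpr std::array<float, 4> weights = {0.25f, 0.25f, 0.25f, 0.25f};

// The lambda is evaluated at compile time - no runtime cost at all.
static_assert(reduce(weights, 0.0f,
                     [](float a, float b) { return a + b; }) == 1.0f);
```

This requires nothing dynamic from the GPU: no allocation, no function pointers, just template instantiation and inlining.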

It should be noted that, while the Khronos GPU memory model used by Vulkan and SPIR-V is based on C++’s memory model, they have several points of divergence. The one I want to focus on is that in C++, objects are atomic or not atomic. In the Khronos memory model, accesses to objects are atomic or not atomic. That is, you can access some address atomically in one location and not in another. C++ doesn’t really have that; either the piece of memory is atomic or it isn’t.

atomic_ref is kind of a hack on top of the C++ memory model: a way to treat an existing non-atomic object as an atomic one for a period of time. The Khronos memory model needs no such hack, so it’s probably best not to use it within that context.

As to the overall substance of your post, the entire point of SPIR-V is for Khronos to not have to maintain a high-level shading language. It exists to allow people to create languages that work best for themselves, and compile them into an easily-digestible intermediary for actual use.

GLSL-to-SPIR-V is a nice thing to have, and it’s good that they’re maintaining the compiler for it. But it really isn’t Khronos’s job to create and maintain a new language in addition to that.

Unfortunately, this perfectly explains why we are still here today with old C-based shading languages like GLSL.

However, the option in the Circle C++ compiler to generate SPIR-V code from GLSL shaders seems to me to be exactly what Khronos had in mind when creating SPIR-V: someone has created a compiler for a GLSL-extended C++ that generates SPIR-V intermediary code.
Have I missed something, or was this Circle C++ shader compiler really never mentioned by Khronos in any form so far?

If we were “here today with” that, you wouldn’t have your Circle C++ extension for generating SPIR-V shaders, would you? You seem to be complaining that the intermediate language is being used as an intermediate language.

Does it need to be? Outside of advertising, would it meaningfully change anything if Khronos endorsed this Circle thing?

Not at all; of course the introduction of SPIR-V as an internal shader format was a brilliant idea. I am complaining that little was done beyond this introduction. Right now, basically all shaders on Windows/Linux are still written in GLSL or HLSL, no?

Well, at least for me it was news that someone has created a C++ frontend compiler that is a great step forward compared to GLSL/HLSL. I was always wondering when something like this would come up. It would be great if the other C++ compilers like Clang/MSVC/GCC/ICC also supported such a C++ shader compiler mode in the future.

Then you might be interested in these as well.

FWIW, Rust is a much cleaner, compiled, high-level language more amenable to parallelization than the complete mess C++ has turned into.


What problems would this actually solve?

The main real-world problems in shader management today are combinatorial explosion, compile times, and run-time patching (recompiling). What would this actually do to solve any of those?

On the surface it seems to offer little more than syntactic sugar for those who like that kind of thing.

Meanwhile, it also seems that it would actually cause even more problems, such as making the coexistence of GLSL and HLSL code bases in the same project more difficult.

I’m not sure that a shading language even needs to be this expressive. Existing shading languages are quite capable of meeting their use cases.


I don’t really have that much of an issue with using GLSL myself, though it does have annoying limitations at times (lack of enums is my main annoyance). And it seems to me that having the whole of C++ available would quickly lead to very bad performance surprises for many people when they try to run it on a GPU.

However, it seems like you’re mainly thinking of traditional vertex/pixel shading. I use a lot of compute shaders myself, where optimization is actually not an issue and I just want something that runs on the GPU - I’m expecting to program it like any CPU, just on the graphics card, where the video memory is near and all. So in terms of expressiveness, I think compute shaders have a lot of room for improvement, and pixel shaders would benefit as well if you’re going outside the box a bit (like with SDF raymarching).

That’s just an unreasonable expectation. The execution environment of a GPU is so fundamentally different from that of a CPU that it’s simply not reasonable to expect to be able to code on them in the exact same way. Algorithms that are fast on the CPU will not necessarily be fast on the GPU and vice-versa.

While some languages will have aspects that are more amenable to GPU implementations than others, you as the programmer are simply going to have to write different code for the different execution environments.

Yeah, I was simplifying things, of course it’s programmed taking into account that it’ll run on a GPU. However I wouldn’t mind having some extra organization features, like enums as I mentioned. I was just responding to “I’m not even sure that a shading language even needs to be this expressive.”

There are some things C++ does that probably wouldn’t be reasonable to have in shaders, like function pointers, virtual methods, direct pointer manipulation and all that. However, there are some features that would be nice to have, like classes with non-virtual methods.
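For concreteness, this is the kind of class that would be unproblematic on a GPU: plain data with non-virtual methods, no vtable, everything resolvable at compile time by a SPIR-V backend. A hedged sketch in standard C++; the Vec3/Ray names are purely illustrative.

```cpp
// Plain-data vector type with non-virtual methods: no vtable pointer is
// generated, so the layout stays three floats and every call can be inlined.
struct Vec3 {
    float x, y, z;
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 scaled(float s) const { return {x * s, y * s, z * s}; }
};

// A higher-level type built the same way, as one might use in a raymarcher.
struct Ray {
    Vec3 origin, dir;
    Vec3 at(float t) const {
        return {origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
    }
};
```

Nothing here requires dynamic dispatch or pointers to code, so it avoids exactly the C++ features listed above as unreasonable for shaders.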

Anyway, I’m not really arguing that Khronos should be doing all this. I was just reacting to the idea that shading languages don’t need to be as “expressive” as others. Compute shaders are a general purpose tool and it makes sense to have just as much freedom in choosing the right level of language features as for programming a CPU, within what is physically possible to run on the hardware.


Well, CUDA is C++ and very successful at that. I am not aware of anyone complaining that C++ is a problem there. On the contrary, I would even say it was one of the keystones of CUDA becoming so dominant for compute.
Right now, when I know that a piece of software only has to run on NVIDIA GPUs, I use Vulkan as the graphics API and CUDA for compute. The Vulkan-CUDA interop is really brilliant, so in this context CUDA can basically be seen as a perfect compute shader for Vulkan.
Unfortunately, this approach cannot be used for software that has to support all modern GPUs. There, having to use Vulkan compute really hurts, knowing how much simpler things are when one can use (CUDA) C++ instead.

Well, it would not help with compile times, but it definitely helps a lot in keeping the code simple when dealing with combinatorial explosion. In my experience, once templates are available, being able to work with templates and template specialization helps a lot in addressing such cases elegantly.
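A small sketch of that idea in plain C++ (the LightModel/shade names are illustrative, not from any real shading framework): shader variants become template parameters with per-variant specializations, instead of a pile of preprocessor #ifdef permutations.

```cpp
// Each shader variant is a template instantiation; adding a variant means
// adding a specialization, not another #ifdef branch in a shared source file.
enum class LightModel { Unlit, Lambert };

template <LightModel M>
float shade(float n_dot_l);  // primary template; per-variant bodies follow

template <>
float shade<LightModel::Unlit>(float) {
    return 1.0f;  // constant output, lighting term ignored
}

template <>
float shade<LightModel::Lambert>(float n_dot_l) {
    return n_dot_l > 0.0f ? n_dot_l : 0.0f;  // clamped diffuse term
}
```

The compiler still generates one concrete function per variant (so this alone does not cut compile times), but the combinatorics live in the type system rather than in duplicated source.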

Interesting. I haven’t looked into CUDA at all because it’s a non-starter for my use case (I can’t base a game on this if it won’t run on all consumer GPUs). I’ve read up on its C++ support and restrictions and it seems to support a lot more of the language than I assumed.

@Alfonse_Reinheart Do you have any thoughts on why CUDA can support basically all of C++, while my comment about “expecting to program the GPU like any CPU” sounded like an unreasonable expectation to you? I don’t mean any offense at all; I’m just trying to understand the difference between what CUDA is doing and what Vulkan compute is aiming for.

From my point of view, a language with CUDA-like capabilities on the GPU side existing as part of Vulkan would be an absolute killer feature, but maybe I don’t understand the full picture of what consequences this has (just reading the CUDA docs, it does seem like a huge undertaking, for one).

Because the two are not contradictory.

Yes, C++ can run on GPUs. But you weren’t asking for that. You said, “I’m expecting to program it like any CPU”. What language you use is just the beginning of how to program something. The differences between a GPU and a CPU are not just how “near” video memory is. And languages by and large cannot hide those things from you.

So regardless of what language gets used, you’re not going to just be able to program a GPU like a CPU.

Despite the marketing slogan, Vulkan compute’s job is not to be some kind of end-all-be-all for compute systems. It’s meant to do exactly what OpenGL compute does: create a reasonable way for graphics applications to create GPU tasks that are not themselves rendering commands, but are ultimately about rendering. These would be things like visibility culling, building indirect rendering command structures, nowadays maybe some ray tracing acceleration data structure management, etc. These are tasks that are closely associated with a rendering operation, such that the cost of interop with some other API just isn’t worth it.

The differences between dedicated compute APIs (CUDA and OpenCL) and graphics APIs that offer compute functionality (D3D, OpenGL, Vulkan) are legion. These differences go far beyond what language they use for their shaders (especially since modern OpenCL uses SPIR-V just like Vulkan).

Alright, thanks for your answer. Yeah, I was certainly being too vague with “programming the GPU like any CPU”; I understand that’s not really the case. My comment about having the video memory nearby was just about executing operations that don’t really need to be particularly optimized in terms of computation, but simply benefit from avoiding a round trip through the CPU to do something simple.

In some cases, having a sequence of operations run entirely on the GPU has nice benefits, regardless of whether the workload of each operation itself is a good fit for a GPU.

The strength of something like Circle for Vulkan is that it is a single-source programming model, in the sense that you have in the same program and language both the host code and the graphics code.
This is a common programming model in the compute acceleration world (SYCL in the Khronos realm but also CUDA, OpenMP…) because it allows type-safety between host and device code, allows better global optimizations by the compiler because of the single-program view.
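The type-safety benefit can be shown with a trivial standard-C++ sketch (the PushConstants name and layout are illustrative): in a single-source model, host and device code see the very same type, so layout drift becomes a compile error instead of silent corruption - whereas in today’s split model, the GLSL side would redeclare this block by hand.

```cpp
#include <cstddef>

// One definition shared by host and (hypothetical) device code. In the split
// host/GLSL model, this struct and its GLSL redeclaration can silently drift
// apart; in a single-source model there is only this one declaration.
struct PushConstants {
    float time;
    float exposure;
    int   frame_index;
};

// A single compile-time guard protects both sides at once.
static_assert(sizeof(PushConstants) == 12, "unexpected push-constant layout");
static_assert(offsetof(PushConstants, frame_index) == 8,
              "unexpected field offset");
```

With two separate languages, the equivalent check has to be maintained by hand (or via reflection tooling) on each side of the API boundary.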
We had some discussion about whether it would be interesting to have such a single-source approach mixing, for example, SYCL and Vulkan for graphics.
But so far we have not found enough interest in the graphics community to pursue this.
In the graphics world, the normal way to go for decades has apparently been a Frankenstein design pattern: a host program using an API to interact with a graphics device that executes some graphics program written in a foreign language, without any type-safety guarantee between the host code and the device code.
Perhaps the Circle approach can popularize a single-source approach across the graphics community?
At least having the choice between different programming paradigms, languages, etc. seems good. :slight_smile:


I think one reason single-source doesn’t make sense for game engines in particular is that their shaders tend to be built by artists, not programmers. Single-source programming has no advantage if you have two virtually independent groups of people using different programming paradigms. Yes, a programmer can be an artist or vice-versa, but even then, each domain is fairly separate from the other.

It would be great if Vulkan shaders could be created in a similar way to kernels in SYCL or CUDA, using a single-source approach. This would most likely be the best solution of all.
However, it would already be a great step if more and more programming languages were supported for creating shaders that simply compile to a SPIR-V binary file. Anything is better than the situation right now.
Those people that are totally happy with GLSL can still use it if they want; I am not proposing to take that away. However, for people that are used to programming in C++ with SYCL or CUDA, programming in a language like GLSL just feels like being sent back to the stone age :slight_smile: