Vulkan: for beginners?

Analogies are dangerous things. Allow me to demonstrate where yours fails with another CPU-to-GPU analogy (one that actually works).

A single core of a CPU is not a single, monolithic processor. It is a pipelined, multi-unit computational device. Each opcode executes in multiple stages within the pipeline, and different opcodes can take different paths through it.

What this ultimately means is that multiple opcodes can be in the middle of being processed at the same time. Since there are different pathways through the opcode processing pipeline, you could have quite a few different opcodes going at once. So even on a device that is “single threaded” at the assembly level, it is hardly “single threaded” at the level of the processing system.

However, most programs (as well as the C and C++ standards before 2011) were single-threaded. Therefore, the CPU could not expose this internal multi-processing to programs.

Therefore, CPUs developed a number of systems to ensure that dependent opcodes don’t encounter problems; in effect, they make a multi-processing system behave like a single-threaded one. If one opcode depends on the results of another, the CPU ensures that it doesn’t proceed too far into the pipeline until the opcode it depends on is finished and its results are available.

Many CPUs also have systems in place to execute instructions out-of-order; that way, if one opcode is stalled due to dependencies, it doesn’t stall the pipeline. Now, the out-of-order thing sounds a bit silly; why not just generate the assembly in the right order to begin with? Well, one reason would be that people are throwing pre-compiled binaries around that were compiled 25 years ago for ancient 286 machines. If you want to ensure that these programs don’t perform horribly on your modern CPU, out-of-order execution is a good idea. There are other reasons too, but that one is critically important for this analogy. Why?

Because GPUs have a very similar architecture. They have long processing pipelines. “Opcodes” (aka: rendering calls) sometimes depend on others, and they must not be executed until the dependent “opcode” is out of the pipeline. And so forth.

But there’s one big difference between a CPU and a GPU. Remember all those systems I talked about for out-of-order execution? Or the system that stops one opcode from executing while a dependent one is in the pipe?

None of that exists for GPUs!

For reasons that are ultimately irrelevant, GPUs can’t really do that. They are far more parallel internally than CPUs, but GPUs have none of the automatic means to prevent rendering commands from stepping on each other. Whose responsibility is that?

It’s the graphics driver, the equivalent of the “compiler” for CPU assembly. The driver’s the one who has to issue synchronization to stop dependent rendering commands from being in flight at the same time. The driver has the responsibility to clear caches, to make data properly visible to incoming rendering commands. And so forth.

It is very possible for two rendering commands to be in the graphics pipeline at the same time. This is why new draw calls do not induce a pipeline stall. Depending on the hardware, it’s theoretically possible for different commands to be rendering to different framebuffers at the same time. It’s all about how scheduling happens within the GPU.

Scheduling that, I remind you, is completely blind to dependencies.

Therefore, your premise that single command buffer operations somehow don’t need synchronization primitives is complete bunk. They don’t in OpenGL, but only because the OpenGL implementation bends over backwards to ensure that. Vulkan does not. Vulkan is as thin a wrapper around the graphics system as possible. That’s why it exists.

In fact, Sellers was quite adamant that what you claim is exactly what would not happen. And I quote: “We’re not going to track the state of a resource. It’s up to you that, when you’re rendering to a texture, and you want to go read from it, you have to tell the driver, ‘I’m done rendering to this; now make it readable.’ And then the driver will do the work right there to make the texture readable. If you get it wrong, we will render garbage, or crash.”

That doesn’t sound like the system checking “which buffers are being used in the current queue and the submitted commands by checking the active commandbuffer’s read/write needs and insert barriers as needed to ensure coherency”.

So no, the graphics queue has no idea what command buffer A or B or C does. All it knows is that they execute some commands. If there are synchronization or coherency issues, it’s up to you to detect them and compensate for them.

Even ignoring the fact that a lot of GPUs today have multiple rasterizing units (and therefore, Vulkan would not be designed to exclude their capabilities), that’s still untrue. There’s no reason why you couldn’t have two sets of triangles that have been rasterized and both have fragments in the fragment processing pipeline. Or both have ROPs active.

It would depend on the hardware, but there’s nothing conceptually preventing it for some GPUs. And thus, there’s no reason for Vulkan to forbid it.

Especially when OpenGL does not.

If the queue threatens to stall the pipeline then the driver can still insert commands from another queue in between. This can then execute without stalling.

The metadata constructed while building the command buffer will be able to say “I’m reading blocks a,b,c and writing to x,y,z”. A little bit of bookkeeping per queue will allow the driver to keep track of which ones are in progress and which ones would overlap reading and writing if submitted without a barrier in between.

[QUOTE=ratchet freak;31175]If the queue threatens to stall the pipeline then the driver can still insert commands from another queue in between. This can then execute without stalling.

The metadata constructed while building the command buffer will be able to say “I’m reading blocks a,b,c and writing to x,y,z”. A little bit of bookkeeping per queue will allow the driver to keep track of which ones are in progress and which ones would overlap reading and writing if submitted without a barrier in between.[/QUOTE]

You are making things up. You are inventing an API that nobody in this forum is discussing. This phantasmal API of yours does not exist. These features are antithetical to the fundamental ideals of the Vulkan API, the reason why it was written. The very architect of Vulkan himself has gone on record, stating as unequivocally as possible, NONE OF THIS WILL HAPPEN IN VULKAN!

But I’m sure you know the API better than the guy who invented it, right? :doh:

@Alfonse Reinheart
Bummer! I thought I’d be able to tackle Vulkan head on when it lands but your technical arguments suggest otherwise.
It seems you are well-versed on the subject of graphics programming and I’d like to take your advice of taking an OpenGL course before jumping into Vulkan. So I thought this could be a better time to start. However I’m clueless as to where to start from. Right now I’m good with C++ (or at least I think I am) and I’ve developed a couple native apps on android. I also have some basic idea on graphics maths like vectors and matrices. But your mentioning of threading, synchronisation and other bunch of crazy terms made me realise I might not be properly prepared to face Vulkan. Can you point me in the right direction? Like what subjects or useful sites, books, resources I need to digest to be on the right track?
Aaargh! I was so damn obstinate I should be able to handle Vulkan.
Thanx in advance.

The OpenGL Wiki is where you start looking for how to get started with OpenGL.

It should also be noted that Vulkan doesn’t exist yet. And won’t exist until later this year. So even if you wanted to start with it, your time would currently be better spent learning an API that currently exists.

No sense in doing nothing just to wait on it, after all.

Thanks for sharing.

I stumbled on a site online, arcsynthesis.org/gltut/. Even though this site is not mainly about OpenGL, it offers to teach you how to become a graphics programmer. Should I start with this or should I just focus on learning OpenGL instead?

Lol. I actually found that site too, but apparently the domain expired. I found this one too:

http://in2gpu.com/opengl-3/

It seems decent. I’m also looking over the wiki that was posted up there. Really kinda disappointed the arcsynthesis one isn’t up, I thought it looked pretty good.

Been reading the Mantle documentation. If the Vulkan documentation is as clear as this pdf then I don’t think a beginner would have much of an issue understanding the synchronization. Simple pictures go a long way. I’m extremely impressed with the technical writing of this document. They should hire the same people for the Vulkan docs. :smiley:

I might have to check that out, thanks for mentioning it.

The Mantle documentation explicitly says that it is not for beginners:

Due to its lower level control of memory and synchronization features, the Mantle API is targeted
at sophisticated developers. Effective use of the API requires in-depth knowledge of 3D graphics,
familiarity with the underlying hardware architecture, and capabilities of modern GPUs, as well as
an understanding of performance considerations. The proposed solution is primarily targeted at
advanced graphics programmers familiar with the game console programming environment.

Is everyone forgetting just how much nonsense and bloat has to be learned, and how many bugs defeated, to be able to use OpenGL properly? A beginner can spend all the time otherwise wasted fighting OpenGL on learning Vulkan’s fundamentals and difficulties.

When comparing the OpenGL spec to the Mantle guide, which is pretty much a spec in progress, I find the latter much nicer to read. But I have read in the OpenGL spec before, which should make it much more comprehensible to me. The difference should feel huge to someone who doesn’t have prior experience.

I have programmed some things in OpenGL. Nothing spectacular, just a text renderer, a 2D batch renderer, and a pusher for simple effect particles. Yes, I made mistakes, and yes, those did cost time, and yes, more low-level code would have made that harder. But was that really what I struggled with? Is this the main problem? NO! NO WAY IN HELL! It was abstruse inconsistencies, legacy documentation, driver bugs, and me naturally having no idea what the bloody heck was going on in the magical realms of driver and documentation authors!

The following are some of my bug experiences. Note that I never did any paid or otherwise production-used work in “modern” OpenGL, nor any systematic testing on many platforms. This is just the very tip of the iceberg I ran into when fooling around:

[ul]
[li]AMD drivers blatantly ignore std430 memory layout in some cases. But it’s anyone’s guess in which cases. The workarounds make code super messy.
[/li][li]An NVidia GLSL compiler didn’t like loops that start at 7 instead of 0. Completely equivalent code worked.
[/li][li]One version of an AMD driver decided to set uniform inputs to zero when they used the normalization feature and were shorter than four bytes. This changed depending on both driver version and requested OpenGL version.
[/li][li]Mobile NVidia cards destroy the output of my text renderer in a way I can’t make sense of. Desktop NVidia or AMD work fine; probably a bug because I’m using “buffer textures” in some rare way (I’m at a loss for swear words at this concept alone!).
[/li][li]I have no idea how to write GL 4.3 programs for Intel GPUs with drivers that should support GL 4.3. Everything works fine for 4.0, but 4.3 programs, erm, vanish. The driver must do something that completely messes up one of the abstractions I use, leaving me without any kind of error information.
[/li][li]AMD drivers randomly break the spec on invalid inputs, creating cases that will trigger error handling on NVidia but just keep rendering on AMD.
[/li][li]An NVidia GLSL compiler arbitrarily refused to link some programs, returning errors that didn’t seem sensible. At some point, it suddenly worked again. I couldn’t tell what happened and didn’t care enough to look into it given the frequency of absurdities.
[/li][li]There is a race-condition somewhere in the renderer and particle system that swaps particle textures around. I don’t think that should be possible, but hey, I must admit I’m not sure the driver schedules dispatches and draws with writes and reads the way I think it does!
[/li][/ul]

Notice the case descriptions for these bugs. The more I write, the more funny terms appear. Terms that have no meaning to the GPU, nor to my program. These misplaced concepts are a huge source for confusion. OpenGL programming often involves searching for experiences and tested best practices online; trial-and-error programming at its “best”.

[ul]
[li]Everything is guesswork. Performance implications – the goal of it all – are mostly underdefined.
[/li][li]There are a gazillion funky, redundant, specialized features, with a rough consensus on how each is normally used. Comprehensive documentation is rare, and driver reactions to valid “non-consensus” usage erratic. Try going through the OpenGL API and docs and reading up on some of that! It’s ridiculous.
[/li][li]Consequently, drivers are buggy as hell.
[/li][li]Handling of legacy features or extensions is a mess. I once used a legacy feature of a function without knowing it, in Core Profile, and then it failed on other drivers later.
[/li][li]There are insane amounts of mutable, global state that persists between entire frames! Misinterpreting the documentation on these states is easy, and simple tests using wrong assumptions might even run… until something changes and everything breaks.
[/li][/ul]

I could go on for ages, but at this point, it’s clear that OpenGL does not work unless you are a masochistic hacker who reverse-engineers drivers. If Vulkan largely functions as specified, it will be easier than OpenGL, especially for beginners, who can omit the whole madness.

It is too easy to see the world from the eyes of an experienced OpenGL developer and forget how much it costs to get there. This is not at all the perspective of a beginner.

When comparing the OpenGL spec to the Mantle guide, which is pretty much a spec in progress, I find the latter much nicer to read. But I have read in the OpenGL spec before, which should make it much more comprehensible to me. The difference should feel huge to someone who doesn’t have prior experience.

That’s a very apples-to-oranges comparison. The Mantle guide is a programming guide and a reference. The OpenGL specification is a specification. The OpenGL spec has to spell everything out in explicit detail, while the Mantle guide does not.

For example, the Mantle guide talks about a lot of important topics as though the reader already knows what they are. It never stops to explain what a triangle strip is. It never explains how points are rasterized. It never explains how clipping and the viewport transform work. Or what the blending parameters actually do. It doesn’t say how tessellation works. I can keep going, really, as the guide talks about nothing that’s not specific to Mantle.

No beginning graphics programmer, who knows nothing at all about graphics, could read the Mantle guide and be anything but confused. They’d be looking for the part that actually explains what it means to render a triangle and find nothing at all.

You’re basically comparing a complete document to a summary; and an incomplete one at that. Of course it’s easier to read.

Don’t expect the Vulkan specification to be so easy on the eyes…

Terms that have no meaning to the GPU, nor to my program. These misplaced concepts are a huge source for confusion.

I’m curious as to what you’re referring to here. What “misplaced concepts” that “have no meaning to the GPU, nor to my program” are you talking about? Buffer textures are a real hardware construct. std430 layout is something your program cares a lot about, as it’s a standard layout for SSBOs.

If Vulkan largely functions as specified, it will be easier than OpenGL, especially for beginners, who can omit the whole madness.

Right up until they try to use memory. If Vulkan’s memory model is half as complicated as Mantle’s, they’re never going to understand it. Transitioning memory from state to state, memory allocated from different pools, formatting with textures, DMAs and/or pinning memory just to upload it. I’d say that a good 30+% of the Mantle programmers guide, outside of the reference material, was about memory and memory states. It’s all excessively complex.

The only way a beginning graphics programmer could use it is by giving them a bunch of functions and say, “look, just call all these functions, with these parameters, in this order.” And then they’re not learning. They’re just copy-and-paste coding.

Cargo cults don’t learn things.

Your problem with OpenGL seems to be the transition from “learning” to “using”. The “learning” phase, when using a book or tutorial or whatever, doesn’t involve driver bugs or documentation. The code has been tested by the person who wrote it, and any driver issues should have been sorted out. And any documentation ought to be provided by the book/tutorial.

The “using” phase involves writing code without a net, without someone telling you what to write. And that means you’re using untested code, so you can encounter driver bugs. You’re also looking for information on APIs and functionality that you didn’t learn about during the “learning” phase. And nobody ever claimed that things weren’t terrible for OpenGL in either of those domains.

When I say Vulkan will be harder to learn, I’m talking about the “learning” phase. I’m saying that materials designed for learning by someone with no knowledge of graphics (ie: beginners) will have to start far slower, be more complicated, and involve many more concepts than OpenGL learning materials. Driver bugs and external documentation issues aren’t part of this.

Once you know something, things might be better in Vulkan land. Or if you already have a good handle on graphics, Vulkan might work out better for you in the long run. But in neither case are you a beginner.

That is what the validation/debug layers are for. They will provide a better net than OpenGL’s old cryptic glGetError. If those are half as expressive as Mantle’s (or at least according to this article) then beginners will be able to help themselves around problems and bugs much better (provided they are not beginner programmers instead of only beginner graphics programmers).

The biggest things that will trip up OpenGL programmers are all the global state, driver bugs, and auto-included extensions. However, in Vulkan, any state that is worked with in an API call is passed in explicitly. For those who are experienced with OO programming, this is much clearer about what is happening. Driver bugs should be fewer with a decent conformance program. And all extensions must be included explicitly.

[QUOTE=Alfonse Reinheart;31412]Your problem with OpenGL seems to be the transition from “learning” to “using”. The “learning” phase, when using a book or tutorial or whatever, doesn’t involve driver bugs or documentation. The code has been tested by the person who wrote it, and any driver issues should have been sorted out. And any documentation ought to be provided by the book/tutorial.

(…)

When I say Vulkan will be harder to learn, I’m talking about the “learning” phase. I’m saying that materials designed for learning by someone with no knowledge of graphics (ie: beginners) will have to start far slower, be more complicated, and involve many more concepts than OpenGL learning materials. (…)[/QUOTE]

I believe you: more concepts will be required to get a trivial program running. However, nobody learns just to learn. We can assume that learning is followed by using. If a beginner has to learn a little longer, and gets fewer issues in usage and a better understanding in return, this can be a good trade-off.

Learning difficulties have a lot of advantages over usage difficulties. Well-known, systematic issues are easier to deal with than arbitrary obstacles. Tutorials can cater to various target audiences, methodically explaining how to approach a task, and what potential misunderstandings to watch out for.

The important question is: which API will have been more efficient by the time the first real project is done? The winner of this comparison is what should be recommended to beginners. A typical project includes at least one round of customized usage, testing, and deployment.

[QUOTE=Alfonse Reinheart;31412]
What “misplaced concepts” that “have no meaning to the GPU, nor to my program” are you talking about? Buffer textures are a real hardware construct. std430 layout is something your program cares a lot about, as it’s a standard layout for SSBOs.[/QUOTE]

I think the “buffer texture” is a decent example; it’s just an oddball. It’s a misnomer. SSBOs are redundant with it, apart from somewhat opaque performance implications that don’t follow a standard I’d know of. The support of typing for buffer textures is bad and special, and the resulting code a potential source of bugs. It just isn’t a very good design, dragged along as legacy ballast. I don’t really want to argue about details on this, I just generally doubt that the current formulation is optimal.

To me, many OpenGL features feel like leaky, costly abstractions. This is a general problem. What is an SSBO? Or a UBO? Data can be viewed as objects or expressions, or as buffers and raw memory. OpenGL concepts tend to be at a strange point in between, where there is neither clear access to the hardware side for performance, nor a high level of abstraction to allow fast, easy programming. I want to be taught about the underlying memory properties instead, when that’s what it is really about! If Vulkan can get people there after a week or so of reading, this time investment might pay off before any first, real project is done.

The important question is: which API will have been more efficient by the time the first real project is done? The winner of this comparison is what should be recommended to beginners.

If we accept your assertion that Vulkan will be easier to function within, due to having better/more consistent driver support or whatever, the important question is this: how to most effectively get a beginner to the point where they can work with Vulkan?

Your way is to slowly walk them through Vulkan as a neophyte. But your own experiences show that it’d probably be more effective to get them to learn OpenGL first, then skip the “using OpenGL” stage and teach them how Vulkan works differently. You don’t have to teach them about the rendering pipeline and so forth through Vulkan. They already know that from their GL learning, so you can take the Mantle Programming Guide approach at that point, focusing only on the Vulkan-specific stuff.

Once a programmer stops being a beginning graphics programmer, Vulkan becomes a lot more reasonable to learn. So I say, since learning OpenGL is easier than using it, learn that as a stepping stone to learning (and using) Vulkan.

Oh, and one more thing. While Vulkan drivers will likely be a lot (lot) less complex than OpenGL drivers, don’t expect them to be perfectly smooth sailing in terms of quality of implementation. Conformance testing Vulkan is probably going to be exceedingly difficult, and while the validation layer is nice, that only proves that your code does what Vulkan requires.

The SPIR-V compilers in particular will likely be the biggest pain-point for Vulkan implementations. It would be entirely possible for them to have the same std430 layout variances that you experienced under OpenGL, as the complexity of that process didn’t change at all. Oh sure, they don’t have to compile it from a C-like language, but the front-end was never the hard part for building layouts.

I think the “buffer texture” is a decent example; it’s just an oddball. It’s a misnomer. SSBOs are redundant with it, apart from somewhat opaque performance implications that don’t follow a standard I’d know of. The support of typing for buffer textures is bad and special, and the resulting code a potential source of bugs. It just isn’t a very good design, dragged along as legacy ballast. I don’t really want to argue about details on this, I just generally doubt that the current formulation is optimal.

It’s easy to say that something isn’t optimal when you’re only interested in running on the latest-and-greatest hardware. Buffer textures existed in the OpenGL 3.x days, when SSBOs did not. If you want your program to work on the majority of hardware out there, you would stick with the more widely supported buffer textures.

So from the point of view of making an application that other people can use, buffer textures are quite optimal :wink:

Are buffer textures unusual, conceptually? Absolutely. It’s not really a “texture”, since it avoids about 90% of the useful things textures have (filtering, normalized texture coordinates, more than one dimension of data, etc).

However, it should be noted that Mantle retains the basic concept; they just call them typed memory views. So they seem to think that the basic concept of a buffer accessed with an image format is a useful thing.

Or they’re just copying D3D :wink:

To me, many OpenGL features feel like leaky, costly abstractions. This is a general problem. What is an SSBO? Or a UBO? Data can be viewed as objects or expressions, or as buffers and raw memory. OpenGL concepts tend to be at a strange point in between, where there is neither clear access to the hardware side for performance, nor a high level of abstraction to allow fast, easy programming.

Let’s assume that’s true.

Explain to me the hardware difference between a Mantle memory view and a Mantle dynamic memory view. Dynamic memory views are set directly into the command buffer state with grCmdBindDynamicMemoryView, while standard memory views are part of descriptor sets, attached via grAttachMemoryViewDescriptors.

The command buffer state only has one slot for dynamic memory views, while it can have arbitrary numbers of descriptor memory views. What’s the difference?

The single dynamic memory view seems to be shared among all shader stages, with shaders mapping into it via GR_SLOT_SHADER_RESOURCE and GR_SLOT_SHADER_UAV association. But other than that… what does it mean for the hardware?

How is this different from the UBO/SSBO distinction? It’s not, save for the fact that Mantle’s way is a lot more opaque to the user. At least the UBO concept has the important difference as part of its name: uniform buffer object. UBOs can’t be changed by shaders, while SSBOs can be, depending on the shader.

The reason there is a distinction at all in OpenGL is because UBOs represent data that is constant during rendering. Therefore, UBO data is usually copied into constant storage accessible by shader hardware for a particular stage. Groups of different shaders all executing the same stage in the same rendering command will share constant storage. This means that accessing UBOs from shaders usually is not accessing global memory, so it’s generally faster.

I assume that “dynamic memory views” represent the same concept, but it’s hard to know without getting into some D3D-isms.

So it seems to me that Mantle has plenty of “leaky, costly abstractions” too.

BTW, the requirement of memory state transitions sounds good to me. You have to think about these things anyway!

No you don’t. If you render to a texture, then read from it, all OpenGL requires is that you stop rendering to it before you try to read from it. That is, you bind a new FBO (or remove the texture from the current one). Conceptually, you’re not thinking about a memory state transition; you’re just following the rule about not reading from a render target (which is a conceptually dubious prospect). Mantle requires that you do a memory state transition as well as no longer rendering to the texture.

it should be better than peppering the code with comments à la “XYZ cleared for writes” or, worse, insufficient comments on what is read-only at a given time.

I think you’re confusing SSBOs’ incoherent, asynchronous operations with the usual coherent, synchronous OpenGL operations. Most of OpenGL operates in a coherent, synchronous way; the API takes care of things behind your back. If you upload data to a buffer, you don’t have to issue a barrier or wait for a while before using that buffer for vertex data. OpenGL takes care of any synchronization or cache clearing between the upload and the usage. If you write to a buffer via transform feedback, then immediately turn around and render from it as a source for vertex data, OpenGL again takes care of the synchronization and memory coherency.

Only Image Load/Store and SSBOs (and persistent mapped buffers) require you to know “what is read-only at a given time”.

Actually, there’s an interesting point on the memory view vs. dynamic memory view thing relative to UBO/SSBOs. I still don’t know what dynamic memory views are supposed to represent hardware-wise, but I realized where Mantle has the distinction between UBOs and SSBOs.

It’s in the shader.

A descriptor set has indices, and memory views are associated with indices, as the user desires. Shader resources reference these indices, which is how shader resources get access to their data. But it’s the nature of the shader resource itself that determines whether it is used as constant memory or via a global memory access. The API-side of things doesn’t care.

OpenGL is mostly the same way. Technically, there is no such thing as a “UBO” or “SSBO”; there are only buffer objects. It’s perfectly legal to use one part of a buffer for UBO fetching, while another part is used for SSBO access. All in the same shader and rendering call. The only difference in OpenGL is which binding point you use to bind them to the context with glBindBufferRange: GL_UNIFORM_BUFFER or GL_SHADER_STORAGE_BUFFER. Mantle basically takes away even that difference.

But the difference in concept is there for both systems. And Vulkan (according to the SPIR-V spec) also seems to recognize the difference, though that is still rather in flux. So the distinction exists in both APIs, because the distinction exists in the hardware.

If a graphics API requires more than 5 commands to draw a simple flat-shaded triangle (excluding the window-specific setup code), then it’s not qualified to be called a graphics API. I can see that “Vulkan”, Mantle, Metal, and DirectX 12 are driver-level “APIs” that require a lot of work on the user’s end to show something on screen, not to mention the required knowledge of all hardware and memory internals…
My word of advice is to focus on improving OpenGL, and go back and start from version 1.2 and keep it simple. Let the driver writers do the hard work for us CAD developers. Enslave them :smiley: Just kidding. And the idea that several companies are working on the same low-level APIs with different names is just plain stupid.

For now and until I see something working and simple, I will use software rendering. :wink:

If you are willing to use software rendering, if performance is that irrelevant to you… why are you posting here? It’s like someone going to a forum for sphere makers and complaining that they’re not making cubes.

You need to grasp the concept that other people have other needs. And those needs, while different from yours, can be perfectly valid.

Vulkan is an API intended to provide the maximum possible performance without actually being the driver. If ease of use matters more to you than performance, that’s great; Vulkan is not meant for you.

But that doesn’t mean it shouldn’t exist.

My word of advice is to focus on improving OpenGL, and go back and start from version 1.2 and keep it simple.

You’ve mentioned this several times, but you never seem to actually explain what you’re talking about.

What is “simple” about ARB_multitexture and the interaction with texture environment chaining?

And what “improvements” can you make over OpenGL 1.2 that are in any way “simple”? Shaders are, by their very nature, not simple, so those are out. You could do something like register combiners or ARB_texture_env_crossbar, but those are really not simple (shaders are simpler). Non-normalized integer textures and floats would require lots of changes to how the texture environment works, which makes it a lot less simple. You could provide more fixed-function vertex processing like ARB_vertex_blend, but that just adds complexity, eventually to the point of ARB_texture_env_crossbar.

Any changes that allow you to use GPU memory (any and all buffer object uses) are much less simple than using CPU memory. Indeed, most performance-enhancing tools are varying degrees of more complex.

So what’s left? New graphical features are all more complex than 1.2. Performance-enhancing features are more complex.

There is no direction leading from OpenGL 1.2 that offers genuinely more useful functionality while “keeping it simple”. Graphics is complicated, and any API high-level enough to make it simple is not a graphics API; it’s a graphics engine.

And we have plenty of cross-platform engines available on the market.

Wasn’t it in some part the CAD-related companies and developers who kept Khronos from significantly changing OpenGL in version 3.X (more OO, etc.)? Seems like some CAD people like gloptus would also love to retain immediate mode from OpenGL 1.2, or what was it that made 1.2 special to you? I at least do not understand this viewpoint at all; 1.2 was pretty terrible.

Did you ever try to do FBO-related stuff cross-platform in that version? A pure mess and waste of time. In OpenGL 3 we got FBOs as core and do not have to worry about such things. FBOs, VAOs, etc. are pretty common usage in most projects, and I find it annoying having to work with the uncertainty about these features in the old versions. Even OpenGL 2.X suffers from these same issues.

Also let us not forget about performance, which, for non-trivial rendering, has way higher prospects in the latest OpenGL versions than in 1.2. And this is, after all, what OpenGL exists for: being able to render in real-time or at interactive frame rates (or at least cutting some months off the time that your render-farms require for rendering :wink: )

Like Alfonse said, if you do not care about performance and only care about simplicity, and are trying to do very ordinary things that do not require specific features, then graphics engines are the right thing to use.