Guidelines for selecting queues and families

Those would not be “general” characteristics; they’re specific ones. General performance characteristics are “is this memory fast?” Which is why heaps have exactly and only one flag: device-local.

When we’re selecting between 16 different pools of memory, we can split hairs about exact bandwidth numbers and so forth. But for pretty much every GPU that’s ever been made, the available pools of memory can reasonably be described by Vulkan’s system. The only things developers care about are “is this the fast memory or not?” and “how can I access it?”
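To make that concrete: in Vulkan, both questions are answered by walking the memory types that `vkGetPhysicalDeviceMemoryProperties` reports and matching property flags. A minimal C sketch; the helper name and the `UINT32_MAX` fallback are my own, not part of the API:

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

/* "Is this the fast memory?"  -> VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
 * "How can I access it?"      -> VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, etc.
 * Hypothetical helper: find a memory type that the resource allows
 * (allowed_type_bits comes from VkMemoryRequirements::memoryTypeBits)
 * and that has all the properties the caller requires. */
static uint32_t find_memory_type(VkPhysicalDevice gpu,
                                 uint32_t allowed_type_bits,
                                 VkMemoryPropertyFlags required)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);

    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        if ((allowed_type_bits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags & required) == required)
            return i;
    }
    return UINT32_MAX; /* no match; caller must relax its requirements */
}
```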

It makes no sense to declare a system like this inadequate because of something that might happen.

[QUOTE=xaxazak;42382]It’s a good point I guess. I do acknowledge that I have a sociological bias against mobile (computers used to be tools owned by their owners, but almost all mobile devices are anticompetitive, exploitative walled gardens, and this means people can’t learn computing by toying around like the previous generation did. They’re filled with software that just wants to harvest their private data for profit. I do find it difficult to care).

So I do get that it comes down to what you decide to consider as your lowest common denominator. But the lower you go, the more compromises you need to make. Potential new shader types that are totally unsuitable for TBRs, do they get overlooked? Or scheduling hints that TBRs can’t follow?[/QUOTE]

The same argument could be made in reverse: there’s functionality that TBRs could support in the future that non-TBRs cannot. Indeed, such functionality exists right now: ARM_shader_framebuffer_fetch. Why should Vulkan not simply expose that directly, and let non-TBR hardware use an extension to turn it off?

Oh, and there’s one other question you fail to consider. What if non-TBRs… stop existing? What if the future of GPUs really is TBRs? What if they are the more efficient rendering architecture?

Think about it. TBRs have ways of implementing many elements of deferred rendering at an exceptionally low cost, relative to non-TBRs. And since deferred rendering has become quite prominent, it starts making sense for GPUs to have specialized hardware to better support it.

But it only makes sense to do that… if there is an API that can effectively expose that support. And OpenGL (and Direct3D of any version) is certainly not that API. So if TBRs are the future, then every API that isn’t Vulkan or Metal is holding us back from it.
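To give a concrete sense of what “effectively expose” means: Vulkan lets you say, within a single render pass, that a lighting pass reads the G-buffer the geometry pass just wrote, at the same pixel, as an input attachment. A tile-based driver can then keep that data on-chip between the two subpasses. Here is a minimal C sketch of just the subpass wiring (the attachment indices are illustrative; render pass creation, dependencies, and pipelines are omitted):

```c
#include <vulkan/vulkan.h>

/* Attachment 0: G-buffer, written by subpass 0, read by subpass 1.
 * Attachment 1: the final color target. */
static const VkAttachmentReference gbuffer_write = {
    .attachment = 0, .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};
static const VkAttachmentReference gbuffer_read = {
    .attachment = 0, .layout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
};
static const VkAttachmentReference final_color = {
    .attachment = 1, .layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
};

static const VkSubpassDescription subpasses[2] = {
    { /* Subpass 0: geometry pass fills the G-buffer. */
      .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
      .colorAttachmentCount = 1,
      .pColorAttachments = &gbuffer_write },
    { /* Subpass 1: lighting pass reads the G-buffer at the same pixel. */
      .pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS,
      .inputAttachmentCount = 1,
      .pInputAttachments = &gbuffer_read,
      .colorAttachmentCount = 1,
      .pColorAttachments = &final_color },
};
```

OpenGL’s FBO model has no way to state that relationship up front, which is exactly why the hardware support can’t be exploited through it.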

Oh, and I’m not pulling this idea out of thin air. I was reading this PDF when I found this statement: “Latest desktop GPU iterations seem to be moving towards tile-based”.

Now, this is from an ARM presentation, so I was somewhat skeptical. But it got me searching, and I found this NVIDIA document and a more detailed discussion. It seems NVIDIA has had pseudo-TBRs working on desktops for years. It may not be the “transform once, store the data, then keep running over the transformed data” form of TBR that drives most mobile chips. But it is something that Vulkan’s render pass architecture maps to more accurately than OpenGL’s FBOs do.
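One place that mapping shows up is in attachment load/store operations: a render pass states, up front, whether each attachment’s contents must be fetched from or written back to memory at the pass boundaries, which is precisely what a tiler needs to know to avoid round trips. A sketch (the depth format is an arbitrary choice for illustration):

```c
#include <vulkan/vulkan.h>

/* A depth attachment a tiler can keep entirely on-chip:
 * CLEAR means no tile load from memory at the start of the pass,
 * DONT_CARE means no tile store back to memory at the end. */
static const VkAttachmentDescription depth_attachment = {
    .format         = VK_FORMAT_D32_SFLOAT,
    .samples        = VK_SAMPLE_COUNT_1_BIT,
    .loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR,
    .storeOp        = VK_ATTACHMENT_STORE_OP_DONT_CARE,
    .stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
    .initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED,
    .finalLayout    = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL,
};
```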

So even if you think you don’t care about TBRs, odds are pretty good you actually do :wink:

If you think render passes are hard, you would never be able to survive in the development environment that Carmack came up in.

Carmack didn’t come up in a world with Internet tutorials, books, and specifications that explained the details of the myriad of driver architectures he had to code for. If he was lucky, he might get a manual for a particular chipset. If he was really lucky, that manual would actually be accurate :wink:

Graphics developers of Carmack’s day put in work to become what they are. Work that nobody learning Vulkan even comes close to.

You say that as if the major engines are incapable of user-created graphics operations. Using an engine doesn’t mean creating a cookie-cutter experience; the same engine that creates a lush, realistic world can be used to create a cartoony, cel-shaded one. Most engines can handle whatever “big ideas” developers come up with graphically.

Game developers only truly need to play at the level of the graphics engine if their particular game has performance requirements that the more generalized engine can’t handle. A top-down RTS that needs hundreds of units on-screen is a very specific graphical circumstance, one that hand-written code can probably handle better than a generalized engine would.

But odds are good that such developers will be backed by significant resources. So they can handle writing Vulkan code.

Well, that’s up to industry need.

For high-performance graphics work, the industry is splitting along engine lines. Those who don’t want to get into the specific details, who want to just have stuff on the screen? They’re not using OpenGL or Vulkan; they’re using engines. Engine developers want to use APIs like Vulkan, because they allow the greatest level of control.

So who does that leave? Who’s left holding OpenGL’s bag?

Basically, the ones who are left are graphics hobbyists (people who want to noodle around with algorithms and such) and intro-to-graphics-programming types. There is no major industry need for an API that lives at OpenGL’s level of abstraction.

For learners, quite frankly, OpenGL 4.5 is a good enough API to learn basic graphics programming from. With DSA, ARB_clip_control, and the like, there’s nothing brain-dead stupid about the API anymore. The main problem with 4.5 is the same problem 3.3 had back in 2009: nobody wrote tutorials or other learning materials specifically for it. That, and because learners are forced to start from pre-4.5 material, they then have to learn how to read pre-4.5 code as well.

As for hobbyists… meh, I just can’t bring myself to care. They can choose a harder-to-use clean API, or an easier-to-use API that’s not terrible.

It should also be noted that what hobbyists and learners will mess up in Vulkan is not the render pass system. The #1 thing they’ll screw up? Synchronization. #2? Image layouts.
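For what it’s worth, #2 looks like this in practice. A minimal sketch of an explicit layout transition, assuming `cmd` is a command buffer in the recording state and `image` is a color image that was just filled by a transfer; getting the layouts or the stage/access masks wrong is exactly the mistake newcomers make:

```c
VkImageMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = image, /* assumed: a VkImage created elsewhere */
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
};
vkCmdPipelineBarrier(cmd,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,        /* after the copy... */
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, /* ...before sampling */
                     0, 0, NULL, 0, NULL, 1, &barrier);
```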