Model shaders proposal

Hi! I'm a rare visitor here, so maybe a similar subject has already been discussed and I just missed it, but I didn't manage to find anything on this board, so here I am.

My proposal is based on a slightly different view of shaders. Today's shaders are in fact full-fledged programs running within the graphics card's environment, so generally speaking they can be invoked like ordinary programs, or like scripts on a web server. When you send a request for a web page, you don't tell the script which database to fetch the page body from, how to decorate it, and so on. But when you render something, you have to send every vertex to get the scene rendered.

So my proposal is to introduce a new type of shader in OpenGL 3: model shaders. They would accept arbitrary parameters from the program running in RAM and produce vertices or fragments, or simply configure the graphics card. When you want to render, say, a car in your racing sim, today you need to send all the vertices that form the model (explicitly or via lists or arrays of vertex data). With model shaders you could pick a few key vertices and send them to a shader that generates all the others. You could also preload all texture and vertex data into the graphics card's memory and just call a shader that does all the rendering itself. You could create a separate shader for every model if their geometries are too different. For highly dynamic models like water surfaces, you could send all the needed data only once and afterwards invoke a shader with a single parameter (or with no parameters at all, if it has access to some persistent storage where it can save its current state). So you could push your rendering code entirely into shaders and leave only the game logic in your program.

Using such shaders would improve performance many times over, because programs would only need to send small chunks of data when rendering large and complex scenes. Of course, model shaders should be used in conjunction with the other types of shaders. I guess they should be invoked before vertex shaders and have access to the pipeline input.
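(A minimal sketch, not part of the original post, of how close today's vertex shaders already get to the water-surface case: the grid is uploaded once and each frame only a single time value crosses the bus. Everything here, the shader source, gridVBO, timeLoc, waterProgram and the sine displacement, is illustrative, not a proposed API; shader compilation and linking are omitted.)

    /* Done once at load time: the water grid lives in graphics memory. */
    GLuint gridVBO;
    glGenBuffers(1, &gridVBO);
    glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
    glBufferData(GL_ARRAY_BUFFER, gridBytes, gridVerts, GL_STATIC_DRAW);

    /* Illustrative GLSL vertex shader: displaces the resident grid into waves. */
    const char *waterVS =
        "uniform float time;\n"
        "void main() {\n"
        "    vec4 p = gl_Vertex;\n"
        "    p.y += 0.1 * sin(p.x * 4.0 + time) * cos(p.z * 4.0 + time);\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * p;\n"
        "}\n";

    /* Done every frame: the only data sent to the card is one float. */
    glUseProgram(waterProgram);
    glUniform1f(timeLoc, currentTime);
    glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, gridVertexCount);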

I can't say I'm a big pro in OpenGL, so maybe my approach leaves much to be desired, but I just wanted to share my idea with you.

Jay

Using such shaders would improve performance many times over, because programs would only need to send small chunks of data when rendering large and complex scenes. Of course, model shaders should be used in conjunction with the other types of shaders. I guess they should be invoked before vertex shaders and have access to the pipeline input.

When you make GL calls, these are just commands that go to the GPU at the right moment. I don't see how your suggestion is going to improve performance. Basically your suggestion is hardware-supported display lists, but that's not going to happen in GL3. GL3 might still keep display lists, but in a much simplified form.
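(For reference, a hedged sketch of what a classic display list looks like, since that is what the reply compares the idea to: the commands are recorded once and replayed with a single call, but they can never change afterwards. carList and the vertex calls are placeholders.)

    /* Recorded once: the driver stores this command stream. */
    GLuint carList = glGenLists(1);
    glNewList(carList, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    /* ... glVertex3f / glNormal3f calls for the whole model ... */
    glEnd();
    glEndList();

    /* Per frame: one call replays the stored commands, unchanged. */
    glCallList(carList);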

When you make GL calls, these are just commands that go to the GPU at the right moment.

Yeah, you're right, they 'go': they go every time you render something, but with a shader they would already be in graphics memory. The slowest part of a GL call is the data transfer from your program to the GL driver and back.
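(To make the transfer-cost point concrete, a small sketch assuming a simple triangle-soup model in verts / vertexCount / modelVBO, all placeholder names: in immediate mode every vertex crosses the bus every frame, while with a buffer object the data is uploaded once and only a small draw command is sent afterwards.)

    /* Immediate mode: every vertex travels program -> driver -> GPU each frame. */
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < vertexCount; ++i)
        glVertex3fv(&verts[i * 3]);
    glEnd();

    /* Buffer object: the vertices are uploaded once and stay resident... */
    glBindBuffer(GL_ARRAY_BUFFER, modelVBO);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float), verts, GL_STATIC_DRAW);

    /* ...so each later frame only issues a small command. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);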

Basically your suggestion is hardware-supported display lists, but that's not going to happen in GL3. GL3 might still keep display lists, but in a much simplified form.

Please don't forget that display lists are static; their power can't be compared with the power of a shader, even if we're only talking about the existing types of shaders.

This is nothing new; I remember suggesting something like this a few years ago. You can also look at the curved surface thread for more recent info. AFAIK, some sort of tessellation shader is already being planned.

OK, I'll look at the thread tomorrow when I'm more awake.

The trend is toward moving as much of the work to the GPU as possible, but content creation has traditionally been the domain of the artist and his or her tools (procedural stuff aside). Some models require some pretty sophisticated preprocessing (optimizing vertex layouts, LOD computation, and such), but that doesn't mean that access to scene geometry wouldn't be useful in a generic sort of way, and it will be necessary in one form or another if and when "GPU-assisted" ray tracing ever rolls around.

Maybe your suggestion is more along the lines of generic scene geometry access, in which case I'm sure we'd all agree it would be useful, but even optimistically that's probably several generations of hardware away.

Just guessing, but mapping arbitrary access patterns is likely to be a sore spot, along with the size, layout, and cost of the cache. It's doable, though.

It sounds a lot like something I've been thinking about for some time in several different incarnations; I think I called it an object shader, a render shader, and whatnot.
But basically what it comes down to is running the rendering thread on the GPU.
I think it would be a great step forward, especially now that physics is going to be running on it.
It would be beneficial to have instant feedback from the GPU to the render thread; currently you would most often have to stall the GPU for that to happen.
Sure, that will also happen with this setup, but it's a fact that the gpu -> gpu_renderthread -> gpu lag is a lot less than the gpu -> PCIe -> driver -> cpu -> driver -> PCIe -> gpu lag.
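(A small sketch of the round trip being described, assuming an occlusion query is the feedback in question: reading the result back forces the CPU to wait for the GPU, which is exactly the gpu -> PCIe -> driver -> cpu -> driver -> PCIe -> gpu lag above. The draw helpers are hypothetical.)

    /* Issue a query around some geometry... */
    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawOccluderGeometry();                 /* hypothetical helper */
    glEndQuery(GL_SAMPLES_PASSED);

    /* ...then read the result back on the CPU. This call blocks until the GPU
       has actually finished the work: the whole PCIe/driver round trip. */
    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
    if (samples > 0)
        drawFullDetailGeometry();           /* hypothetical helper */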

Yes, instant feedback from physics to graphics, great idea… it’s not like we need the pesky AI to know about the world or anything at all, or even the sound system, or input.

Well, that settles it then: we'll need sound, input, and AI shaders too!

Exactly. AIs don't need to know everything; they're content with an extremely simplified physics system and whatnot, and so is sound.
Besides, both of these are not time-critical in the same way that physics and rendering are.
They run in their own threads anyway.
But now that you mention it, there have been some experiments with sound on the GPU. You know what they found? A GPU is way too overpowered for it to even matter (especially as all computers already have great sound processors), though I guess you could use a variation of ray tracing for it.

The point is that if you're doing a massive particle system, wouldn't it be better to feed the result directly into the vertex shader? And if you have lots of rocks, debris, and other things in it, it would be easier if you could write a shader that fed the vertex shader the right parts from a VBO.
And expanding further on that, it would be cool if we could swap textures and shaders at will.
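(A hedged sketch of how the particle case can be approximated today with a pixel buffer object: particle positions computed on the GPU are copied from the framebuffer into a buffer object that is then sourced as vertex data, so the results feed the vertex shader without ever returning to the CPU. particleBuf, texWidth and texHeight are placeholders.)

    /* After rendering updated particle positions into a float framebuffer: */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, particleBuf);
    glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_FLOAT, 0);  /* copy stays on the card */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    /* Reuse the same buffer as a vertex array for the draw pass. */
    glBindBuffer(GL_ARRAY_BUFFER, particleBuf);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glDrawArrays(GL_POINTS, 0, texWidth * texHeight);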

A lot of what's needed to do this is already there: VBOs, FBOs, texture arrays, and so forth. All we need to do today is tell the GPU what to do in what order.
As you send your rendering commands to the GPU, they are stored in a buffer, since they can't all be executed at once; it's sort of a temporary internal display list.
How hard would it be to make it so that we could enter some of that in advance in a shader, while at the same time including things like ifs and loops?
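(A hedged aside: one very limited form of a GPU-side 'if' around stored draw commands does exist as conditional rendering (NV_conditional_render, later core in GL 3.0): the GPU itself decides whether to execute the draws based on a query result, with no CPU round trip. drawBoundingBox and drawDetailedModel are hypothetical helpers.)

    /* The GPU skips the second batch of draws by itself
       if the earlier occlusion query returned zero samples. */
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();                       /* hypothetical helper */
    glEndQuery(GL_SAMPLES_PASSED);

    glBeginConditionalRender(query, GL_QUERY_WAIT);
    drawDetailedModel();                     /* executed only if the box was visible */
    glEndConditionalRender();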

Hi. I'm very busy now, so this is probably my last post in this thread. This doc has a couple of diagrams that describe my view of the model shader concept.