Hello everybody.
(My English is not very good, since it's not my native language ("mother tongue" is the correct expression, I think).)
I am still working on that: getting a program with a « not so bad » API and rendering in one pass [1].
So, here is the problem.
Imagine an FPS (first-person shooter).
You have a starfield, behind everything.
Then you have planet A. Planet A is closer to us than the starfield, but behind everything else.
Then you have planet B, on a lower orbit. B is closer to us than A (and the starfield), but behind everything else.
Then you have clouds (etc).
Then you have the scene (etc).
Then you have the hands/gun of the player (etc. They are closer to us than the rest of the scene because we don't want them clipping through walls and so on).
Then you have the HUD: health bar, ammunition. Again, we don't want the hands/gun to clip through it.
You can solve this kind of thing by doing one (or more) render passes for each of these elements. Between each pass you clear the depth buffer, and voilà.
So, here is my question (at last):
=> is it possible to add a « macro-depth » to the depth test? <=
This one would be an integer, not a float interpolated between vertices. The depth test would then be something like:
if (macro_depth_A < macro_depth_B)
    discard;
else if (macro_depth_A > macro_depth_B)
    accept;
else
    do_the_old_depth_test(A, B);
This is pseudo-code, of course.
I already did some research on that and, to be honest, I used to believe the depth test was performed after the fragment shader. It seems that's often not the case in practice: drivers run an early depth test before the fragment shader when they can, for performance reasons (which I approve of).
However, I am pretty sure it's done after the vertex shader. I don't see how it could be otherwise, because the vertex shader decides the final position of the vertices.
So I don't see any technical reason why this wouldn't be doable.
Is this part of the fixed pipeline accessible ?
If yes, in which version? (3.x? I hope.)
And if so, how do I do it?
Any help would be welcome.
P.S.: with that macro-depth, the solution to the problem I gave is, of course: give a different macro-depth to the starfield, planet A, planet B, clouds, scene, hands and HUD.
Starfield=7, PlanetA=6, PlanetB=5, Clouds=4, Scene=3, Hands=2, HUD=1
Or these values multiplied by 100, to allow placing elements between them later.
The macro-depth would be given as a per-vertex value and passed through as-is into the "macro-depth".
[1]: it may seem a bit weird, but actually « do two separate render passes » is a (wrong) solution to a lot of problems. For this reason, the naïve approach uses it all the time, ending up with programs doing thousands of render passes and terrible performance.
Reducing the number of render passes means solving (again) problems that were already "solved" by the wrong solution.
Problems solved by the wrong solution include:
- « I have two objects with different transformation matrices » => « do two render passes » (solved)
- « I have two different materials in my object » => « do two render passes » (solved)
- « I have a skybox behind my scene / a HUD in front of my scene » => « do two render passes »
And somehow it became personal. Yes, I perfectly understand that, in some cases, doing two render passes may be the best solution, but… I've had too many problems with that solution. I want, I need, a program with one single pass; it's between me and that « solution », a personal feud XD
Also, once I have this one-pass approach, I can still split it into several passes, artificially, like you can use multiple threads to do one job. But then the number of passes will be a personal choice, not something forced on me.
Like s**: it's better when it's free.
(I said it : it became personal XD)