"W buffering" with OpenGL, how to?

michagl, you must have a broad way of rendering everything in the background and everything in the foreground separately. That’s all it takes, as you need to concentrate a large range of the zbuffer on the foreground only when rendering the foreground.
Basically, set up a frustum with the near plane at, say, 100 and the far plane at 10000 for the 1st slice, draw the scene, clear the zbuffer, then set up the frustum with the near plane at 0.1 and the far plane at 100 for the next slice, draw the scene and swap buffers.
The standard view-frustum hardware clipping will ‘stitch’ the two slices together seamlessly.
Your frustum culling code will optimise away any unnecessary work… but if you haven’t got any culling method, it won’t affect the visual results, only performance.
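In rough code, that two-pass setup is something like the sketch below (draw_scene() stands in for whatever submits the visible set; fovy/aspect are whatever you normally use):

```c
#include <GL/gl.h>
#include <GL/glu.h>

extern void draw_scene(void);   /* hypothetical: culls and draws everything visible */

void render_two_slices(double fovy, double aspect)
{
    /* far slice: near 100, far 10000 */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovy, aspect, 100.0, 10000.0);
    glMatrixMode(GL_MODELVIEW);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene();

    /* keep the colour buffer, reset depth for the near slice */
    glClear(GL_DEPTH_BUFFER_BIT);

    /* near slice: near 0.1, far 100 */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovy, aspect, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);

    draw_scene();

    /* then swap via the windowing layer (SwapBuffers / glXSwapBuffers / glutSwapBuffers) */
}
```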

Originally posted by knackered:
michagl, you must have a broad way of rendering everything in the background and everything in the foreground separately. That’s all it takes, as you need to concentrate a large range of the zbuffer on the foreground only when rendering the foreground.
Basically, set up a frustum with the near plane at, say, 100 and the far plane at 10000 for the 1st slice, draw the scene, clear the zbuffer, then set up the frustum with the near plane at 0.1 and the far plane at 100 for the next slice, draw the scene and swap buffers.
The standard view-frustum hardware clipping will ‘stitch’ the two slices together seamlessly.
Your frustum culling code will optimise away any unnecessary work… but if you haven’t got any culling method, it won’t affect the visual results, only performance.

there is no foreground and background… it’s a steady amalgam of terrain, clouds, water, and eventually other natural artifacts and inhabitants all the way from the front to the back. the whole scene is pushed through CLOD (continuous level of detail) so that the background can be insanely far away without running into the traditional crowding at a distance.

i can’t use flood fogging because of the untraditional makeup of the world i’m rendering. it’s a cylinder, the horizon curves up, and there are cables here and there holding the whole world together against centripetal gravity. fogging it out to infinity defeats the whole point. the air is generally said to be pristine as well, so that you can see the furthest lands until they disappear behind the 180 degree upward-curving horizon.

i’m pretty certain i’m going to go for per-pixel depth fragments. the only remaining questions are: is it legal to use a pixel shader without a vertex shader bound, and if so, what should i expect as inputs? the way cg is set up, it seems i can declare the inputs i’m interested in within the fragment shader’s argument list, but i’m not sure what i should do exactly. am i going to have to create a fragment shader for every possible rendering state? should i just do frag shaders for the optimal ‘path’ rendering states? should i try to do a general-purpose custom frag shader that can somehow accept most any rendering state and all inputs? should i just give up on rendering debug lines, for instance, to the depth buffer? or can i somehow produce a shader which can accommodate a large swath of general-purpose rendering states? does the hardware change the static pipeline shader as its rendering states change? etc etc… i have a feeling what i will end up doing is just revert to the junk depth buffer routine when not rendering through the optimal ‘path’.
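For reference, a rough sketch of the fragment-only route in GLSL terms (the Cg details differ): in GLSL you can link a program with only a fragment shader, and the fixed-function vertex pipe then supplies the built-ins like gl_Color. The u_near/u_far uniforms are made-up names; note this reproduces only plain vertex colour, so textured states would indeed need their own variants or a fallback, as suggested above, and writing gl_FragDepth disables early-z.

```c
/* GLSL source for a fragment-only program: eye distance is rebuilt from the
   hyperbolic gl_FragCoord.z, then written back out as a linear depth. */
static const char *linear_depth_fragment_src =
    "uniform float u_near;                                                  \n"
    "uniform float u_far;                                                   \n"
    "void main()                                                            \n"
    "{                                                                      \n"
    "    /* invert the standard hyperbolic depth back to eye distance */    \n"
    "    float d = (u_near * u_far)                                         \n"
    "            / (u_far - gl_FragCoord.z * (u_far - u_near));             \n"
    "    gl_FragColor = gl_Color;                                           \n"
    "    /* write a linear ('w buffer' style) depth instead */              \n"
    "    gl_FragDepth = (d - u_near) / (u_far - u_near);                    \n"
    "}                                                                      \n";
```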

Originally posted by knackered:
michagl, you must have a broad way of rendering everything in the background and everything in the foreground separately. That’s all it takes, as you need to concentrate a large range of the zbuffer on the foreground only when rendering the foreground.
Basically, set up a frustum with the near plane at, say, 100 and the far plane at 10000 for the 1st slice, draw the scene, clear the zbuffer, then set up the frustum with the near plane at 0.1 and the far plane at 100 for the next slice, draw the scene and swap buffers.
The standard view-frustum hardware clipping will ‘stitch’ the two slices together seamlessly.
Your frustum culling code will optimise away any unnecessary work… but if you haven’t got any culling method, it won’t affect the visual results, only performance.

i gave this some more thought, and i think there may be something to it worth implementing if the numbers work out ok.

my environment is partitioned pretty well. it will be very tricky to implement, but basically i think if i were to keep three different back planes set by the user: every node in the partition that falls behind the front back plane could be drawn. then the depth buffer could be cleared. then everything falling behind the front back plane but in front of the middle back plane would be rendered only to the depth buffer. then everything in front of the front back plane would be drawn, and that should allow everything to be stitched together properly without artifacts. there would be two front planes corresponding to the first and second rendering groups. rendering the middle group a second time, only to the depth buffer, allows the depth buffer to remain coherent without polluting the colour buffer where intermediate geometry interpenetrates.

edit: took me three tries to get that description right.
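Roughly, the pass ordering described above might look like this sketch (set_projection() and draw_group() are placeholders for the partition traversal; the colour mask is what makes the middle pass depth-only):

```c
#include <GL/gl.h>

extern void set_projection(double znear, double zfar);   /* hypothetical */
extern void draw_group(int which);                       /* hypothetical */
enum { GROUP_FAR, GROUP_MID, GROUP_NEAR };

void render_with_depth_only_seam(double znear, double split, double zfar)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* 1: everything behind the front back plane, drawn normally */
    set_projection(split, zfar);
    draw_group(GROUP_FAR);

    glClear(GL_DEPTH_BUFFER_BIT);

    /* 2: geometry around the seam, laid into the depth buffer only */
    set_projection(znear, split);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    draw_group(GROUP_MID);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* 3: everything in front of the front back plane, drawn normally */
    draw_group(GROUP_NEAR);
}
```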

it would be some decent work to implement… especially as an optional run-time feature versus per-pixel depth solving. but i will probably eventually give it serious thought, especially if the per-pixel depth solving causes any sort of bottleneck.

edit: oh yeah, forgot about the back plane being hardware clipped. that makes things a lot easier. i looked at the numbers and a 100 near plane makes it without artifacts. so i really only need two back planes, or one per slice. i can actually easily implement this the way the system is set up, simply by registering two different frustum nodes tied to the same monitor. i should be able to implement this in minutes tomorrow, and it should run quite effectively and is much more robust than using per-pixel depth solving. the only problem i can think of with depth slicing planes is that fog is ruled out, i figure. anything else?

oh and… thanks a million knack!!! good idea. you are forgiven for all your nastiness. well maybe not forgiven, but you’ve proved to be more useful than not.

sincerely,

michael

I’m afraid this thread is now unreadable without a 21" monitor…

Yes you can!

really? I was writing a shader a few months ago (maybe 6) in GLSL and I swear I couldn’t adjust the fragment depth.

you can write to gl_FragDepth?

alright, my bad…

Originally posted by MZ:
I’m afraid this thread is now unreadable without a 21" monitor…
i agree, my primary internet terminal is an 800x600 ‘sub-notebook’ portable. i wonder if the developers of this bbs use their own system?

i’m planning to set this up as soon as possible. i thought i could just hack it together today for the short term, but the system isn’t frustum-centric, and multiple manifolds coexist with their own local frustums which must be synchronized with a system-level camera node. so i’m having to add the concept of sections at the camera node level, which is pretty deep. and since i’m going this far i’m just going to do it all right, so that the frustum culling loop/recursion can manage multiple section bits. i will update this thread with results as soon as i get them. i have a feeling it will work out really well, unless there are minor precision problems where the opengl frustum clipper isn’t pixel perfect across passes. i may just have to fudge that and hope the artifacts are minimal.

if anyone understands the numbers really well, i wouldn’t mind some discussion about synchronizing fog across multiple z buffer planes if it is possible. in my immediate case i think i should be able to get by with just two slices, maybe three if i really up the scale and can handle the geometry precision. but i figure that the fogging can just start in the last slice, as any closer slices will probably be too close for fogging on a clear day. on a foggy day the frustum would just be foreshortened to the point where the fog is saturated, which could probably be done with a single slice. so fudging for a promotional demo shouldn’t be an issue.

but i’m still very interested in pursuing techniques here for synchronizing the fogger states so that the fog will appear continuous even where the depth buffer isn’t.
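One option, sketched below under the assumption of an OpenGL 1.4 / EXT_fog_coord capable driver, is to drive fog from an explicitly supplied eye-space distance rather than from fragment depth, so the fog term doesn't care which slice a vertex was drawn in (the eye_dist array is a hypothetical per-vertex precomputation):

```c
#include <GL/gl.h>

void draw_fogged_tris(const float (*pos)[3], const float *eye_dist, int count)
{
    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP2);
    glFogf(GL_FOG_DENSITY, 0.00005f);
    /* take the fog distance from the supplied coordinate, not from fragment depth */
    glFogi(GL_FOG_COORDINATE_SOURCE, GL_FOG_COORDINATE);

    glBegin(GL_TRIANGLES);
    for (int i = 0; i < count; ++i) {
        glFogCoordf(eye_dist[i]);   /* same distance metric in every slice */
        glVertex3fv(pos[i]);
    }
    glEnd();
}
```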

this particular system is shaping up very well… this is my first serious attempt at a large-scale simulation environment. i’m generally accustomed to very local simulations and non-physical systems and interface programming. i’ve optimized the system pretty well. the only remaining visible cpu bottleneck is in the mesh generation step. i’m working to optimize that with SSE assembly coding, which it is very well suited for. it’s a bit tricky because it is the workhorse of the system and needs to come in many different flavors.

the only thing keeping it from a promotional demo right now is the lack of a quality database. if anyone fancies themselves a map artist and has some free time, i would be interested in sharing the details of this work and seeing if we can’t work out a mutually lucrative arrangement. i have a very high resolution earth topology data set, but earth colour maps are much lower resolution, and various sorts of detail maps completely non-existent. the scale of earth is also too large for 64bit floating point precision. so i’m working with a smaller goal in mind. i have an artist passionate about the target world i’m attempting to realize, but he is always too busy to be of much use. of all the systems i have under my wing that i’m able to share publicly, i believe this process is a prime candidate for drawing serious interest.

micha,

Do you have any screenshots of your system at work?

You’ve been talking about this system quite a bit and I’m curious to see what it’s capable of, if you don’t mind :)

…I like screenshots… they make me happy.

Originally posted by michagl:
i have a very high resolution earth topology data set
What’s that, the 90m SRTM dataset? I found those datasets to have quite a few data holes but maybe they’ve fixed those now.

earth colour maps are much lower resolution, and various sorts of detail maps completely non-existent
Can you not generate your own colour maps from the height/slope data? That’s what I’ve done. Or are you looking for more realistic detail than that?

Originally posted by Aeluned:
micha,

Do you have any screenshots of your system at work?

You’ve been talking about this system quite a bit and I’m curious to see what it’s capable of, if you don’t mind :)

…I like screenshots… they make me happy.
yeah, i will put some here as soon as i can get around to it.

i have some older vertex-lit screenshots that still look nice: a pseudo photo-realistic shot of the himalayas.

i have some newer superior images, but they are more limited by the available data set, and they are not lit presently because of the nature of the world: light comes from the middle and the world is inverted, so shadows are a rarer phenomenon in such a world under natural lighting conditions. plus the data is just mock stuff i cobbled together, but i will produce some screenshots.

Originally posted by Adrian:
Originally posted by michagl:
i have a very high resolution earth topology data set
What’s that, the 90m SRTM dataset? I found those datasets to have quite a few data holes but maybe they’ve fixed those now.

earth colour maps are much lower resolution, and various sorts of detail maps completely non-existent
Can you not generate your own colour maps from the height/slope data? That’s what I’ve done. Or are you looking for more realistic detail than that?

the largest topology data set i have is the etopo2 project’s data. it is sampled at two arc-minutes i believe, and i can’t find any artifacts in it, topology and bathymetry… that is earth and seas, in signed 16bit integral format. a cylindrical mapping yields a 10800x5400 pixel map, which isn’t a power of 2.

as for colour, i definitely prefer a map to ramping. even if i had hi-res colour maps though… the colour map has to go in texture memory, and presently i can’t break the map up beyond the base geometry. for earth, to get a proper cylindrical mapping onto a bipyramid, that means the map can be cut 4 times vertically and once horizontally. my card doesn’t support maps larger than 512x512. so until the sister system which handles arbitrary triangular multiresolution map streaming is fairly mature, the largest colour map i can handle for earth is 2048x1024.

i’m not aiming for earth right now though. i’m going for a cylindrical world that is 200km wide and 13000 kilometers in diameter. and it’s pretty trivial to break up a cylindrical mapping for a cylindrical world.
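Purely as an illustration of the tile arithmetic behind the texture-size limit mentioned above, a throwaway check might look like this (needs a current GL context; the 10800x5400 figure is the etopo2 grid):

```c
#include <GL/gl.h>
#include <stdio.h>

/* how many tiles a large map needs given the card's maximum texture size */
void report_tiling(int map_w, int map_h)
{
    GLint max_size = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);   /* e.g. 512 on older cards */

    int tiles_x = (map_w + max_size - 1) / max_size; /* round up */
    int tiles_y = (map_h + max_size - 1) / max_size;
    printf("%dx%d map -> %d x %d tiles of at most %d^2 texels\n",
           map_w, map_h, tiles_x, tiles_y, (int)max_size);
}
```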

real quickly, here are some screens… i will add some words when i get a chance later:

http://arcadia.angeltowns.com/share/genesis19-midres.jpg

http://arcadia.angeltowns.com/share/genesis17-lores.jpg

http://arcadia.angeltowns.com/share/genesis-gaea6-lores.jpg

the first two are quite old, the last i took today.

i will say something about the third later, as it isn’t really representative of what the system is capable of in many respects, especially in the meshing, which is as bad as it gets for surface fitting… i.e. unnecessarily craggy. the first thing on my todo list is to reverse the meshing, which i suspect should produce one of the best possible regular meshes for surface approximation. it’s just a matter of offline edge flipping.

i will add more later.

what does it mean, i wonder, when people request screens and don’t follow up?

in any case, here are a couple of newer images.

http://arcadia.angeltowns.com/share/genesis-hyperion4-lores.jpg

http://arcadia.angeltowns.com/share/genesis-hyperion5-lores.jpg

i’m on vacation from my development machine right now, so there have not been and will not be any new development screens for about another week.

in the first image i’ve successfully reversed the mesh, producing perhaps the best possible tiling for smoothly approximating arbitrary surfaces. it is basically a tightly packed grid of octagons with a vertex in the middle of each, and a vertex in the middle of the diamond created in the space between the octagons (assuming lod is constant). anyone using displacement mapping might want to look into this. the properties are really amazing. much better than a right-triangle fan tessellation, and infinitely better than regular stripping. i haven’t tested it directly yet, but i have a feeling its properties for vertex shading are astounding as well.

the second screen is just the beginning of a volumetric CLOD cloud shader i’m working on in my spare time. there are typical bubble-type blending artifacts, but i haven’t applied a view-dependent, volume-based pixel shader yet… which i’m thinking should cancel out those artifacts well enough, as the artifacts are really just the result of not taking into account the volume of the mesh.

For realistic terrain maps, World Wind from NASA will give you the most detailed data freely available that I am aware of.

About tessellation, your octagon-diamond method seems a bit strange to me; what about even hexagons, with a vertex at the center?

Your screenshots look very promising.

Can I ask what the application is? Some RTS on a cylindrical world? Simulation of a giant spaceship a la Rama?

Originally posted by ZbuffeR:
For realistic terrain maps, World Wind from NASA will give you the most detailed data freely available that I am aware of.

About tessellation, your octagon-diamond method seems a bit strange to me; what about even hexagons, with a vertex at the center?

Your screenshots look very promising.

Can I ask what the application is? Some RTS on a cylindrical world? Simulation of a giant spaceship a la Rama?
i will look into the maps. we will be needing detail textures, and topology maps are always useful to sample from.

well, the octagon-diamond method is definitely ideal for my system because it is arrived at naturally and algorithmically through a series of operations generated by a CLOD process. but equilateral hexagons might do better, as you say.

thanks for the ‘promising’ bit.

the application is a robust out-of-the-box vr simulation environment. the promotional demo is a la Rama, but it is actually gaea from John Varley’s best-known works Titan, Wizard, and Demon. Tom Clancy is quoted as saying ‘john varley is america’s best writer’, a quote which would probably drive most of clancy’s ‘conservative’ followers up the wall.

check them out; best american scifi ever, in my experience.

it turned out that i had left in some debug code that essentially disabled filtering during the displacement process.

for what it’s worth, i updated the image with filtering enabled:

http://arcadia.angeltowns.com/share/genesis-hyperion5-lores.jpg

the cloud in the foreground looked really bad, but i was in a hurry to get out of the house that day, and i just assumed it was being exaggerated by the blending effect.

as it turns out, with filtering the blending overlap pretty much goes away.

btw you don’t necessarily need to alter the z value of the fragments - it’s not widely supported yet and is expensive (extra fragment instructions). it may be sufficient to calculate the z and w vertex coords in an appropriate way in the vertex shader/program. take into account that after the vertex shader z is divided by w, so you can multiply z by w to effectively cancel this. also take into account that the z values are interpolated linearly in window space (with no perspective correction) by the rasterizer, which may be a problem if your scene contains polygons that span too big a distance in the z direction, so that this linear function gets too different from whatever your function in the vertex shader is.

with perspective projection the standard function that calcs the value for the depth test is effectively 1/z (z being the third coordinate), that is z/w with w set to z and z set to 1 by the perspective matrix, whereas the so-called w-buffer in d3d effectively uses just z - a linear function (it uses w, but w was set to z by the perspective matrix). neither is good. generally the ideal function would be log(z).
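In GLSL terms the pre-multiply-by-w trick is roughly the sketch below: the depth function is chosen in the vertex shader and scaled by w, so the hardware's later divide gives it back unchanged. The u_near/u_far uniforms are made-up names, and the window-space linear-interpolation caveat above still applies:

```c
/* GLSL vertex shader sketching a linear ('w buffer' style) depth output */
static const char *linear_z_vertex_src =
    "uniform float u_near;                                                  \n"
    "uniform float u_far;                                                   \n"
    "void main()                                                            \n"
    "{                                                                      \n"
    "    vec4 pos = gl_ModelViewProjectionMatrix * gl_Vertex;               \n"
    "    float d = -(gl_ModelViewMatrix * gl_Vertex).z;   /* eye distance */\n"
    "    /* linear depth in [-1,1]; swap in                                 \n"
    "       2.0*log(d/u_near)/log(u_far/u_near) - 1.0 for a log mapping */  \n"
    "    float zlin = (d - u_near) / (u_far - u_near) * 2.0 - 1.0;          \n"
    "    pos.z = zlin * pos.w;   /* cancels the later divide by w */        \n"
    "    gl_Position = pos;                                                 \n"
    "    gl_FrontColor = gl_Color;                                          \n"
    "}                                                                      \n";
```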

Originally posted by l_belev:
btw you don’t necessarily need to alter the z value of the fragments - it’s not widely supported yet and is expensive (extra fragment instructions). it may be sufficient to calculate the z and w vertex coords in an appropriate way in the vertex shader/program. take into account that after the vertex shader z is divided by w, so you can multiply z by w to effectively cancel this. also take into account that the z values are interpolated linearly in window space (with no perspective correction) by the rasterizer, which may be a problem if your scene contains polygons that span too big a distance in the z direction, so that this linear function gets too different from whatever your function in the vertex shader is.

with perspective projection the standard function that calcs the value for the depth test is effectively 1/z (z being the third coordinate), that is z/w with w set to z and z set to 1 by the perspective matrix, whereas the so-called w-buffer in d3d effectively uses just z - a linear function (it uses w, but w was set to z by the perspective matrix). neither is good. generally the ideal function would be log(z).
thanks, i will take this to heart.

for now, my strategy is to divide the scene up into depth slices as described above. the only major drawback to this approach is that you must render the slices back to front, which could cost a lot of depth-culling gain. but generally two slices will probably do, the closest being quite small in relative terms.

also, non-linear fog might be difficult to synchronize. i’m planning to use custom fog for this particular project though. not sure if i will use fogcoord or just a straight distance operation. as long as fog is only desirable in the farthest depth slice, normal fog is no problem.

Originally posted by knackered:

michagl, you must have a broad way of rendering everything in the background and everything in the foreground separately. That’s all it takes, as you need to concentrate a large range of the zbuffer on the foreground only when rendering the foreground.
Basically, set up a frustum with the near plane at, say, 100 and the far plane at 10000 for the 1st slice, draw the scene, clear the zbuffer, then set up the frustum with the near plane at 0.1 and the far plane at 100 for the next slice, draw the scene and swap buffers.
The standard view-frustum hardware clipping will ‘stitch’ the two slices together seamlessly.
Your frustum culling code will optimise away any unnecessary work… but if you haven’t got any culling method, it won’t affect the visual results, only performance.
We tried this solution. The frustum clipping was far from seamless. Gaps appeared at the boundary. Overlapping the near/far planes closed the gap but caused objectionable results due to double blending of transparent polygons. Does anybody have a better idea?

Originally posted by macarter:
We tried this solution. The frustum clipping was far from seamless. Gaps appeared at the boundary. Overlapping the near/far planes closed the gap but caused objectionable results due to double blending of transparent polygons. Does anybody have a better idea?

well, that is both good and bad to hear. i assumed that if knackered was offering the advice, this was a tried and true approach… i have to give knackered credit for creative thinking though.

the only idea i can come up with is to go with my original idea from before i was persuaded to rely on frustum clipping: you would have to render the transparent geometry in the invisible interstitial region to the depth buffer. that should stop the blending overlap. but this approach isn’t as ‘beautiful’ as relying on hardware frustum clipping (the accuracy of which may differ on varying cards). if enough people actually wanted to use this technique, then effort might be made to improve the precision of near/far plane frustum clipping.

i really don’t know. i haven’t implemented this yet. i’m basically just taking screenshots from far enough away as to not clip closer geometry.

Overlapping the near/far planes closed the gap but caused objectionable results due to double blending of transparent polygons
how? you’re not drawing the object twice, once in each partition, are you?