displacement mapping

It looks like the NV30 does displacement mapping by tessellating the surface into vertices first and then displacing those.
All the other card manufacturers do this as well (my software implementation does too).
It is logical and simple, I must admit, but something in my head keeps nagging me that this is not the correct/most optimal way.
Does anyone have any links to papers that DON'T do displacement mapping this way? Thanks.

In what other way could it be done?

Sorry, I wasn't clear enough, but that was my question.

There are two other ways that come to mind. a) Per-pixel displacement, meaning shifting the actual pixels. I think this could be approached by rendering the displacement vectors of the undisplaced mesh into a buffer, and then using that buffer to displace the shaded image afterwards. Just per-pixel displacement, however it gets done.
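
A minimal CPU sketch of that scatter-style warp, assuming a shaded (undisplaced) colour image and a per-pixel screen-space displacement buffer have already been rendered; the buffer layout and names here are illustrative, not anyone's actual implementation:
[code]
#include <vector>
#include <cstdint>

struct RGBA { std::uint8_t r, g, b, a; };

// Forward (scatter) warp: every source pixel is pushed to its displaced
// position. Destination pixels nothing maps to keep the clear colour,
// which is exactly where the "gaps" come from.
void scatterWarp(const std::vector<RGBA>& shaded,   // shaded, undisplaced image
                 const std::vector<float>& dispX,   // per-pixel displacement in pixels
                 const std::vector<float>& dispY,
                 std::vector<RGBA>& out,            // pre-cleared, width*height
                 int width, int height)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int i  = y * width + x;
            const int dx = x + static_cast<int>(dispX[i]);
            const int dy = y + static_cast<int>(dispY[i]);
            if (dx >= 0 && dx < width && dy >= 0 && dy < height)
                out[dy * width + dx] = shaded[i];   // scatter: may leave holes elsewhere
        }
    }
}
[/code]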

The other way would be fur rendering, or call it raytracing/raycasting. You have the two extremes (the base surface and the fully displaced one), and either a) you raycast from where a ray enters the outer mesh until it exits again or hits the inner mesh, or b) you do it the way fur rendering does: slicing.

Those are about all the ideas I have…

The last one, the fur approach, is the most interesting one, insofar as you can get all sorts of “on-surface volumetrics” out of it: fur, fog, fluids, displacement, grass… whatever a surface can “cover” can be rendered like this (a shell-rendering sketch follows below).
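
A rough fixed-function OpenGL sketch of that shell/slice idea (a hypothetical helper, not code from the thread), assuming the displacement map is already bound and enabled as a texture with the height stored in its alpha channel, and that each vertex carries a normal:
[code]
#include <GL/gl.h>

struct Vertex { float pos[3]; float normal[3]; float uv[2]; };

// Draw shellCount copies of the mesh (shellCount >= 2), each offset a little
// further along the vertex normals. Alpha testing against the displacement
// map's alpha kills the fragments that lie above the displaced surface.
void drawShells(const Vertex* verts, int vertCount, int shellCount, float maxHeight)
{
    glEnable(GL_ALPHA_TEST);
    for (int s = 0; s < shellCount; ++s) {
        const float frac = float(s) / float(shellCount - 1);
        const float h    = maxHeight * frac;
        // Keep only fragments whose displacement value reaches this shell's height.
        glAlphaFunc(GL_GEQUAL, frac);
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < vertCount; ++i) {
            const Vertex& v = verts[i];
            glTexCoord2fv(v.uv);
            glVertex3f(v.pos[0] + v.normal[0] * h,
                       v.pos[1] + v.normal[1] * h,
                       v.pos[2] + v.normal[2] * h);
        }
        glEnd();
    }
    glDisable(GL_ALPHA_TEST);
}
[/code]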

davepermen,

  1. About your first method:
    That looks potentially efficient, but there's a problem: your surface will have holes where pixels have been displaced away from and where no other pixel has been displaced to.

(Was this statement comprehensible? I don't master the English language well enough.)

Here's an idea: we could draw the displaced pixels at a larger size. For example, if a pixel gets displaced 2 pixels away, we could draw it as a little 5x5-pixel quad… so there are no holes anymore. That could give a look like Comanche 1. Do you see what I mean?

  2. About the second method:
    That looks very nice and potentially photorealistic but, as you've mentioned, it actually needs raytracing, so it can't be achieved in realtime as of today. Nevertheless I find it so attractive that I'll keep thinking about it: isn't there any way to do it without raytracing?
    Unfortunately I don't see how a pixel program could do it, since we don't know which pixel to draw.

Morglum

My first approach… yeah, rendering the scene with GL_POINTS and glPointSize to “voxelize” the whole thing…
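
A tiny fixed-function sketch of that, assuming the displaced positions have already been computed into plain vertex arrays (the array names are illustrative); drawing the points a few pixels wide is what covers the gaps Morglum mentioned:
[code]
#include <GL/gl.h>

// Draw pre-displaced positions as fat points so neighbouring splats overlap
// and cover the gaps left by the forward displacement.
void drawDisplacedPoints(const float* positions,  // x,y,z per point, already displaced
                         const float* colors,     // r,g,b per point
                         int count, float pointSize)
{
    glPointSize(pointSize);                 // e.g. 3-5 pixels, depending on displacement
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);
    glColorPointer(3, GL_FLOAT, 0, colors);
    glDrawArrays(GL_POINTS, 0, count);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]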

I know what you mean; I think “gaps” fits better than “holes”.

The approach with the afterwards displacement shouldn't generate gaps, but interpolate the image in between; I don't know what sort of side effects that will have (a gather-style sketch of that follows below).
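
One possible reading of that, as a minimal CPU sketch: treat the warp as a gather, so every destination pixel fetches a bilinearly filtered source sample and no gaps can appear. Interpreting the displacement buffer as an inverse mapping is my assumption here, not something stated in the thread:
[code]
#include <vector>
#include <algorithm>
#include <cmath>

// Gather (inverse) warp with bilinear filtering: no destination pixel is
// ever left empty, at the cost of reading the displacement "backwards".
// 'dst' must already be sized W*H.
void gatherWarp(const std::vector<float>& src,     // single-channel source image
                const std::vector<float>& dispX,   // per-destination-pixel offsets
                const std::vector<float>& dispY,
                std::vector<float>& dst, int W, int H)
{
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Where this destination pixel should fetch from.
            float sx = x - dispX[y * W + x];
            float sy = y - dispY[y * W + x];
            sx = std::fmin(std::fmax(sx, 0.0f), float(W - 1));
            sy = std::fmin(std::fmax(sy, 0.0f), float(H - 1));
            const int x0 = int(sx), y0 = int(sy);
            const int x1 = std::min(x0 + 1, W - 1), y1 = std::min(y0 + 1, H - 1);
            const float fx = sx - x0, fy = sy - y0;
            const float top = src[y0 * W + x0] * (1 - fx) + src[y0 * W + x1] * fx;
            const float bot = src[y1 * W + x0] * (1 - fx) + src[y1 * W + x1] * fx;
            dst[y * W + x] = top * (1 - fy) + bot * fy;   // bilinear interpolation
        }
    }
}
[/code]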

My second approach can be done with slicing as well, rendering several slices “above” the mesh; read up on fur rendering to know what I mean…

Here’s another way:
http://www.mpi-sb.mpg.de/~jnkautz/projects/hwdisplacement/

Basically a 3D volume rendering approach in a thin zone around the surface.

Another one uses a depth-based image warp that would fill in the 'gaps':
http://graphics.lcs.mit.edu/~gs/research/dispmap/

Here's another texture-based approach that warps the texture based on derivatives before it is drawn in a conventional way:
http://www.cs.sunysb.edu/~oliveira/pubs/RTM_low_res.pdf

It does require fragment coverage on the final render but this really relates to how you build your database.

Originally posted by dorbie:
[b]Here’s another way:
http://www.mpi-sb.mpg.de/~jnkautz/projects/hwdisplacement/

Basically a 3D volume rendering approach in a thin zone around the surface.

Another one uses a depth-based image warp that would fill in the 'gaps':
http://graphics.lcs.mit.edu/~gs/research/dispmap/

Here's another texture-based approach that warps the texture based on derivatives before it is drawn in a conventional way:
http://www.cs.sunysb.edu/~oliveira/pubs/RTM_low_res.pdf

It does require fragment coverage on the final render but this really relates to how you build your database.[/b]

The first one you suggest is basically my fur-rendering/slicing way… the 3D volume sounds much better, though.

The other one looks cool; it's sort of the first way I suggested, just rendering the whole thing and then displacing it afterwards in screen space on the image… at least I think that's about what I was trying to explain.

Originally posted by zed:
It looks like the NV30 does displacement mapping by tessellating the surface into vertices first and then displacing those.
[…]
Does anyone have any links to papers that DON'T do displacement mapping this way? Thanks.

Extend the polys to the bounding volume of the poly + displacement map and raycast the volume.
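
A small CPU sketch of raycasting the displacement map inside such an extruded volume, working in the local (u, v, height) space of one polygon; the heightAt placeholder and the fixed step count are assumptions for illustration:
[code]
#include <cmath>

// Placeholder height field in [0, 0.5] (would be a displacement-map lookup in practice).
float heightAt(float u, float v)
{
    return 0.25f + 0.25f * std::sin(6.2831853f * u) * std::sin(6.2831853f * v);
}

// March a ray through the extruded volume above a polygon (u, v in [0,1],
// h in [0,1]) and return true at the first sample where the ray drops below
// the displaced surface. 'origin' and 'dir' are in that local space, and
// 'dir' should be scaled so that t in [0,1] spans the volume.
bool raycastDisplacement(const float origin[3], const float dir[3],
                         float hitUV[2], int steps = 256)
{
    float t = 0.0f;
    const float dt = 1.0f / steps;
    for (int i = 0; i < steps; ++i, t += dt) {
        const float u = origin[0] + dir[0] * t;
        const float v = origin[1] + dir[1] * t;
        const float h = origin[2] + dir[2] * t;
        if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f || h < 0.0f || h > 1.0f)
            break;                      // left the bounding volume
        if (h <= heightAt(u, v)) {      // fell below the displaced surface
            hitUV[0] = u; hitUV[1] = v;
            return true;
        }
    }
    return false;                        // ray exited without hitting the surface
}
[/code]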

Thanks very much for the links guys/girls.
I'll be busy reading those over the next few days/weeks.
BTW, is the only reason all the card makers do displacement mapping by tessellating that 'it's nowadays cheap to send lots of vertices to the card', or that 'it's simple, thus less room for cock-ups', or a bit of both? Thanks.

Originally posted by zed:
Thanks very much for the links guys/girls.
I'll be busy reading those over the next few days/weeks.
BTW, is the only reason all the card makers do displacement mapping by tessellating that 'it's nowadays cheap to send lots of vertices to the card', or that 'it's simple, thus less room for cock-ups', or a bit of both? Thanks.

I think it's because of a few things:
a) tessellation of geometry should come into hardware anyway (TruForm was just the beginning)
b) if you have tessellation, then yeah, doing it by displacing the new vertices is really simple (see the sketch after this list)
c) it takes one pass to render (the fur approach takes several slices => very fillrate intensive)
d) it still looks like a polygonal mesh. Think about it: fur rendering doesn't look like a simple polygonal mesh anymore (unless you use TONS of slices, hehe), and the per-pixel shift with GL_POINTS will look, um, like Outcast or Delta Force (is it that one?.. dunno, hehe), which is not likely what most people want.
Tessellating and displacing doesn't change the look of the mesh, and that's the power of it.
e) I think it's quite cheap. At least, looking at the BenMark results of the R300, I see it can push through 3x the triangles of the GF4 Ti 4600. With this additional power, displacement mapping with tessellation makes a lot of sense.
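
A minimal sketch of the tessellate-then-displace idea from b), done on the CPU: generate a regular vertex grid over a quad and push each new vertex along the surface normal by a displacement sample. The grid resolution and the sampleDisplacement placeholder are assumptions for illustration:
[code]
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder displacement lookup over (u,v) in [0,1] (a texture fetch in practice).
float sampleDisplacement(float u, float v)
{
    return 0.5f + 0.5f * std::sin(6.2831853f * u) * std::cos(6.2831853f * v);
}

// Tessellate the unit quad in the XY plane into an (n+1)x(n+1) vertex grid
// (n >= 1) and displace every vertex along the quad's normal (here +Z).
std::vector<Vec3> tessellateAndDisplace(int n, float scale)
{
    std::vector<Vec3> verts;
    verts.reserve((n + 1) * (n + 1));
    for (int j = 0; j <= n; ++j) {
        for (int i = 0; i <= n; ++i) {
            const float u = float(i) / n;
            const float v = float(j) / n;
            const float h = sampleDisplacement(u, v) * scale;
            verts.push_back({ u, v, h });   // base position (u, v, 0) + normal * h
        }
    }
    return verts;                            // index into triangles as usual afterwards
}
[/code]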

I'm now wondering: if we tessellated a mesh and rendered it with GL_POINTS, would it look like some fancy volumetric voxel stuff? Hehe. (Thinking of the horse in 3DMark2001… using point sprites, I bet it would look cool, hehe.) Could be useful for cloud rendering…

I've read the first link posted by dorbie. Since its 'slices' are in fact numerous small polygons, I'm beginning to wonder whether the 'nv30 displacement mapping', which is also based on the creation of many small polys, isn't a better approach. It seems to require approximately the same number of polys.

davepermen,
do you have any idea how to implement the 'voxelization' approach?
I'm just considering a 3D texture (which is slightly different from what we were talking about) with a reasonable depth (maybe 8 pixels?). That might be fast, because we can compute the 3D texture as a pre-processing step.
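
A rough sketch of that pre-processing step, assuming 3D textures are available (OpenGL 1.2 / EXT_texture3D): turn the displacement map into a small binary below-the-surface volume once, and upload it as a 3D texture. The sizes and names are illustrative:
[code]
#include <GL/gl.h>
#include <vector>
#include <cstdint>

// Build a WxHxD volume where a voxel is opaque if it lies at or below the
// displaced surface, then upload it as a 3D texture (done once, as pre-processing).
GLuint buildDisplacementVolume(const std::vector<float>& heightMap, // WxH values in [0,1]
                               int W, int H, int D /* e.g. 8 */)
{
    std::vector<std::uint8_t> volume(W * H * D, 0);
    for (int z = 0; z < D; ++z) {
        const float sliceHeight = float(z) / float(D - 1);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (heightMap[y * W + x] >= sliceHeight)
                    volume[(z * H + y) * W + x] = 255;   // below the surface -> solid
    }

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_ALPHA8, W, H, D, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, volume.data());
    return tex;
}
[/code]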

Morglum

I have a software method that creates a continuous patch from a depth map using a scan-line conversion technique. A displacement function would probably be implemented in the same way. What kind of conclusions can you draw from such a method, then… It is rather important to see the difference between building the patch from the pixel "corners" or from the centre of each pixel. It is also vital to have methods to peel off (a kind of filter) the edges of the patch by a number of depth-dependent pixels. Kind of a polygon offset for the patch.
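
A very small sketch of turning a depth map into such a patch, placing one vertex per pixel centre and stitching each 2x2 neighbourhood into two triangles (sampling at pixel corners instead would just shift the positions by half a pixel); the names here are illustrative, not the poster's actual code:
[code]
#include <vector>

struct PatchVertex { float x, y, z; };

// Build a patch from a WxH depth map: one vertex per pixel centre, and
// two triangles per 2x2 neighbourhood. 'indices' references 'verts'.
void depthMapToPatch(const std::vector<float>& depth, int W, int H,
                     std::vector<PatchVertex>& verts,
                     std::vector<unsigned>& indices)
{
    verts.clear();
    indices.clear();
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            verts.push_back({ x + 0.5f, y + 0.5f, depth[y * W + x] }); // pixel centre

    for (int y = 0; y + 1 < H; ++y) {
        for (int x = 0; x + 1 < W; ++x) {
            const unsigned i0 = y * W + x, i1 = i0 + 1;
            const unsigned i2 = i0 + W,   i3 = i2 + 1;
            indices.insert(indices.end(), { i0, i2, i1 });  // upper-left triangle
            indices.insert(indices.end(), { i1, i2, i3 });  // lower-right triangle
        }
    }
}
[/code]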

Check the 'nv30 extensions' thread; it looks like mcraighead has proposed an original way to do displacement mapping. He adds that, using NV-specific functions, it should be possible to make it run not too slowly.

Yes. I have seen that, and I have also used it before, just reading the z values into a prepared, strided x,y,(z) buffer, but I got better performance using the scan-line version because I could combine data from non-varying z-buffer regions, which meant less data sent to the GPU. Of course, by using the graphics memory and not transferring the data to CPU memory it would probably be a lot faster. Still, the filtering issues remain… But I still believe this is the future for the generation of shadow volumes etc. :wink: I sent the idea to NVidia 6 months ago, but they said it was impossible to get the texture-pipeline data into the vertex-geometry pipeline. If we see enough advantages in displacement maps, I hope we will get a standard API for this, and perhaps some nice HW implementations!!
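
For reference, a minimal sketch of the straightforward depth read-back path (the slow route through CPU memory discussed above, not the poster's faster scan-line variant):
[code]
#include <GL/gl.h>
#include <vector>

struct DepthVertex { float x, y, z; };

// Read the current depth buffer back and expand it into x,y,z vertices in
// window coordinates, one per pixel. The data makes a round trip through
// CPU memory, which is exactly the cost discussed above.
std::vector<DepthVertex> readDepthAsVertices(int width, int height)
{
    std::vector<float> depth(width * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    std::vector<DepthVertex> verts;
    verts.reserve(depth.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            verts.push_back({ x + 0.5f, y + 0.5f, depth[y * width + x] });
    return verts;
}
[/code]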

Is it me, or do the first two links describe the same idea, except that the first renders many slices to give the illusion of volume, while the second is just for showing a non-furry surface and so needs only one slice? And the warp method somehow fills in the gaps?

V-man

No. They are completely different. The first is effectively a ray-path integration through the displacement volume using slices; as Dave says, it is like fur rendering, BUT fur rendering tends to slice with shells and walls, and the paper also talks about different slice orientations for the integration, including eye-space slice integration just like volume rendering. That method requires a 3D volumetric texture representation. The second paper draws tangent-space polygons and uses the depth derivative information to warp the result to a better approximation of the displaced fragments, i.e. it warps the rendered fragments to the correct displaced shape. The methods seem about as different as it is possible to get.

The second and third methods are not the same either (but they are closer cousins than the first two). Oliveira et al. pre-warp the texture based on depth and the anticipated derivatives after projection, a pre-warp in tangent space; the MIT paper warps after projection using screen-space derivatives, a screen-space warp. The difference has huge implications for the applicability of the algorithms.
