Just wanted to share a simple demo with you. I have not gotten the VBO to work yet and the demo doesn't use any lighting, but you can see the principle at least.
I have been working with these volumes for a while; I originally used depth info to generate shadow volumes. This is a side effect of the technique, where you render a model that consists only of images + depth info in the alpha channel.
The technique can handle multiple views and non-convex objects. The demo images are, however, only taken from a convex cube view.
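As a minimal sketch of the "images + depth in the alpha channel" idea (my own illustration, not ToolTech's actual code): pack a normalised eye-space depth into the 8-bit alpha of each RGBA texel, so a single texture carries both colour and geometry. The `near`/`far` range and the 8-bit quantisation are assumptions here.

```python
def pack_depth_rgba(rgb, depth, near, far):
    """rgb: list of (r, g, b) tuples; depth: eye-space depths, same length.
    Returns (r, g, b, a) tuples with depth normalised into [0, 255] in alpha."""
    out = []
    for (r, g, b), z in zip(rgb, depth):
        # Normalise depth into [0, 1] over the [near, far] range, clamp, quantise.
        t = min(max((z - near) / (far - near), 0.0), 1.0)
        out.append((r, g, b, int(round(t * 255))))
    return out
```

At render time the alpha channel would be read back as geometry instead of opacity; that decode step is the inverse of the mapping above.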
I don’t know what’s going on there. The lighting is completely static. I don’t think that was intended. Also, there are some artifacts outside of the model (thin vertical lines).
In my eyes this is the future for rendering 3D graphics. If we had a bit better support in the HW for this, we could easily be rendering ray-traced objects of real-world nature in real time…
Was it too abstract a demo, or is nobody interested in IBR?
Damn, your demos never work on my computer. It says that the file GDBASE.DLL is linked to a missing export-function KERNEL32.DLL:TryEnterCriticalSection. Well maybe it’s just my win95…
As for the IBR stuff, I'm interested, but quite sceptical too. The idea of content-independent rendering speed is tempting, and some of the images look impressive. But then you find out that they're just small areas rendered with supercomputers, the model takes tons of memory, and when you look at the pictures again, you could almost always make a corresponding polygon model and it'd run in realtime with three-year-old hardware.
Rendering models as depthmaps could be effective, but the models would be hard to author, let alone animate. Besides, with displacement mapping of free-form polygon models becoming supported in hardware, isn't this a kind of step back…?
Originally posted by ToolTech: Not much response for this demo ?
I just tried it. All I could see was a flickering white shape on a blue background? The shape looked like two offset semicircles but it flickers quite rapidly. What was I supposed to see?
“I just tried it. All I could see was a flickering white shape on a blue background? The shape looked like two offset semicircles but it flickers quite rapidly.”
I can see it, but… it looks like a quad mapped with an image with normals in it, nothing more… should it be possible to rotate it or anything like that?
Seemed to run OK on my PC (geforce 3 - AMD 2400+ XP - Detonator 40.72 - 512Meg Ram). Not sure if the output was correct though (prolly the old drivers - must update one day…)
The statue looked like it was textured with a normal map (ie. it was blue and pink etc.) and didn’t seem to have any depth to it.
I know the demo is not very good. You are supposed to see a 3D model. You can move around with the keys that I stated above. The trick is that I generate a volume out of depth maps, e.g. shadow volumes out of shadow depth maps, or a volume render out of depth maps.
The demo requires ARB vertex program. The demo doesn't fall back or tell you if this is not present. I know this is bad.
The demo shows a normal map. I have not yet fixed the software to do Phong lighting, so I just textured the model with the normal map.
The thing is really that there are no polys, just depth map images, and that the method can handle concave objects with continuous layers, not discrete layered images.
E.g. take a camera + a laser range finder and input real-time images + depth maps from the LRF, and you will be able to see a complete 3D world in real time.
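The camera + laser range finder idea boils down to back-projecting each depth pixel into a 3D point through the camera. A rough sketch of that unprojection (hypothetical names; a pinhole camera with square pixels and a vertical field of view are my assumptions, not part of the original post):

```python
import math

def unproject(depth_map, width, height, fov_y_deg):
    """Turn a depth image into camera-space 3D points.
    depth_map is a flat, row-major list of eye-space depths."""
    # Focal length in pixels from the vertical field of view.
    fy = (height / 2.0) / math.tan(math.radians(fov_y_deg) / 2.0)
    fx = fy  # assumed square pixels
    cx, cy = width / 2.0, height / 2.0
    points = []
    for v in range(height):
        for u in range(width):
            z = depth_map[v * width + u]
            # Back-project pixel centre (u + 0.5, v + 0.5) at depth z.
            x = (u + 0.5 - cx) * z / fx
            y = (v + 0.5 - cy) * z / fy
            points.append((x, y, z))
    return points
```

The resulting point cloud is what a volume or layer representation would then be built from.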
Relief rendering with prewarped images is still too slow, and concave objects are tricky.
Layered images suffer from sampling artefacts unless you want to warp all images to screen-orthogonal slices.
Traditional tessellation requires depth image analysis and perhaps edge detection, filtering, etc. to create good closed volumes.
However, a combination of all three methods above creates possibilities for a good solution…
E.g. use a warp in the pixel shader or on the CPU etc. to create seamless integration of "image layers". Use a vertex program to create "depth" geometry that the image layers can be "glued" onto. Use layer depths to sort the layer geometry. This is the trick I am doing.
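The "use layer depths to sort the layer geometry" step is essentially back-to-front compositing. A toy sketch of that sort + over-blend (illustrative only, with float colours; the real version would of course run on the GPU):

```python
def composite_layers(layers):
    """layers: list of (depth, (r, g, b, a)) with colour channels in [0, 1].
    Returns the composited (r, g, b) seen by the camera."""
    # Sort far-to-near so nearer layers are blended over farther ones.
    ordered = sorted(layers, key=lambda layer: layer[0], reverse=True)
    r = g = b = 0.0
    for _, (lr, lg, lb, la) in ordered:
        # Standard "over" blend: src * alpha + dst * (1 - alpha).
        r = lr * la + r * (1.0 - la)
        g = lg * la + g * (1.0 - la)
        b = lb * la + b * (1.0 - la)
    return (r, g, b)
```

With opaque layers the nearest one simply wins; with partial alpha the farther layers show through, which is why the depth sort matters.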
I think that when uberbuffers, or whatever they call them, become available, this trick can be done at very high framerates. I have been asking NVidia for this for a couple of years now, since I created an algo to do stencil volume shadows with shadow maps. My guess is that this way a lot of "complex" scenes will be rendered much faster than traditional polygon-modelled scenes…