tl;dr - I have an old, inefficient algorithm that I'm looking to replace with something newer, such as adaptive mesh refinement or surface subdivision. I have scientific data in the form of a 5000x3000 texture that I'd like to use as a displacement map on the mesh. Can someone comment on modern techniques to accomplish this? I have newer hardware (nVidia FX cards).
I’m investigating techniques to improve an existing application. The current app renders a texture of around 5000x3000 onto a mesh of vertices, using the texture as a simple height map. The main problem with the current technique is that some parts of the mesh carry a high vertex count while the mesh overall still lacks the resolution to adequately visualize the details of the texture. I simply don’t have enough geometry in some spots to really do the data justice, and far too much detail in other areas where it’s not needed. I also need to light the mesh, so the new technique has to give me normals, something I don’t currently support.
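For the normals part, my understanding is that they can be derived directly from the height map with central differences rather than stored with the mesh. Here's a minimal CPU-side sketch of what I mean, in Python/NumPy (array names and the uniform grid-spacing assumption are mine, just for illustration):

```python
import numpy as np

def heightmap_normals(height, grid_spacing=1.0, height_scale=1.0):
    """Per-texel unit normals from a height map via central differences.

    height: 2D array of height samples (rows x cols), rows along y.
    Returns an array of shape (rows, cols, 3).
    """
    h = height * height_scale
    # Central differences; np.gradient falls back to one-sided at the edges.
    dh_dy, dh_dx = np.gradient(h, grid_spacing)
    # The surface z = h(x, y) has normal (-dh/dx, -dh/dy, 1), normalized.
    n = np.dstack((-dh_dx, -dh_dy, np.ones_like(h)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n
```

On the GPU this would presumably live in a shader, sampling neighbouring texels instead of using `np.gradient`, but the math is the same.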
A simple way to understand the application is to compare it to a terrain renderer. My data is scientific in nature, but it is similar to a height map. The texture is of a fixed size (often ~5000x3000). The output resembles terrain; the colours are a bit different, but you get the idea.
My current technique involves a series of patches that form a larger overall mesh. Each patch contains several levels of detail, so there’s a lot of geometry sitting on the hardware that I never actually use. I’m looking for something much more efficient in terms of performance and memory: something that dynamically adds detail to a portion of the mesh based on some metric, such as camera distance.
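To make the "some metric" part concrete, the rule I have in mind is the usual quadtree one: keep splitting a patch while its size is large relative to its distance from the camera. A rough CPU-side sketch (all names and the threshold value are my own, not from any particular library):

```python
import math

def select_patches(center, size, camera, threshold=1.0, max_depth=6):
    """Recursively select quadtree patches in 2D.

    Splits a patch while (patch size / camera distance) exceeds the
    threshold, so patches near the camera end up smaller (more detailed).
    Returns a list of (center, size) leaf patches.
    """
    cx, cy = center
    dist = math.hypot(camera[0] - cx, camera[1] - cy)
    if max_depth > 0 and size / max(dist, 1e-6) > threshold:
        half, quarter = size / 2.0, size / 4.0
        patches = []
        for ox in (-quarter, quarter):
            for oy in (-quarter, quarter):
                patches += select_patches((cx + ox, cy + oy), half,
                                          camera, threshold, max_depth - 1)
        return patches
    return [(center, size)]
```

A screen-space-error metric would replace the plain distance test, but the recursion stays the same.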
I’ve been looking at different techniques but I’m not quite sure which is best. My hardware is of the newer nVidia FX class (FX 4800 and above). I’ve considered an adaptive mesh technique similar to http://http.developer.nvidia.com/GPUGems3/gpugems3_ch05.html and I’ve also looked at the surface subdivision approach found at http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter07.html - I’m leaning towards adaptive mesh refinement.
Essentially what I’d like to do is define a low-resolution mesh and have the GPU add detail to the portions that are near the camera. And of course I’d like to use my large texture as a displacement map. As I mentioned previously, I need to light this as well.
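By "use my large texture as a displacement map" I mean the equivalent of the following, which on my hardware I'd expect to do with a vertex texture fetch in the shader; here it is as a CPU-side Python/NumPy sketch with bilinear filtering (function and parameter names are just illustrative):

```python
import numpy as np

def displace_vertices(verts_uv, base_positions, heightmap, height_scale=1.0):
    """Displace mesh vertices along +Z by bilinearly sampling a height map.

    verts_uv: (N, 2) normalized texture coordinates in [0, 1].
    base_positions: (N, 3) positions of the flat base mesh.
    """
    rows, cols = heightmap.shape
    # Map UVs to continuous texel coordinates.
    x = verts_uv[:, 0] * (cols - 1)
    y = verts_uv[:, 1] * (rows - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, cols - 1), np.minimum(y0 + 1, rows - 1)
    fx, fy = x - x0, y - y0
    # Bilinear blend of the four neighbouring texels.
    h = (heightmap[y0, x0] * (1 - fx) * (1 - fy) +
         heightmap[y0, x1] * fx * (1 - fy) +
         heightmap[y1, x0] * (1 - fx) * fy +
         heightmap[y1, x1] * fx * fy)
    out = base_positions.copy()
    out[:, 2] += h * height_scale
    return out
```

The refinement step would generate the new vertices and their UVs; displacement itself is just this lookup per vertex.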
Any guidance you can provide would be appreciated. I’m hoping to hear back from the community as to what works, what doesn’t and so on.