I have a paper that says “…render the 3D points and normals into texture memory on the GPU…”. I am trying to build a program that can handle reflections on curved objects, as a hybrid solution using GLSL and CUDA.
With a shader I try to copy the 3D coordinates of the reflector into a PBO, map the PBO to CUDA, and perform my calculations there.
This is really quite simple, but the coordinates come out normalized, interpolated, or clamped to one. How can I prevent the step from vertex to fragment shader from interpolating my data? Or how can I reconstruct world coordinates from my fragment output? I really need the output as world coordinates to perform calculations with CUDA…
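For reference, the shader pair I mean looks roughly like this (simplified sketch; `u_model` is a uniform I would set to the object-to-world matrix, it is not a built-in):

```glsl
// vertex shader: transform to world space and pass through a varying
uniform mat4 u_model;        // assumed object-to-world matrix uniform
varying vec4 v_worldPos;

void main()
{
    v_worldPos  = u_model * gl_Vertex;   // world-space position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: write the interpolated world position
varying vec4 v_worldPos;

void main()
{
    gl_FragColor = v_worldPos;  // clamped to [0,1] on a fixed-point target
}
```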
I hope somebody can help me or give me some new ideas or tips…
To avoid clamping, use a floating-point render target (texture). Interpolation cannot be disabled. Why don’t you do the vertex transformation with CUDA (or on the CPU) in the first place? All you need is to transform the positions by the modelview matrix if you want world-space coords.
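Setting up a float render target would look something like this (untested sketch using `EXT_framebuffer_object` and `ARB_texture_float`; `width`/`height` are your buffer dimensions):

```c
/* Create a 32-bit float RGBA texture and attach it to an FBO.
   Values written by the fragment shader are then stored unclamped. */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
/* render the reflector into this FBO with your world-coord shader */
```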
The problem is that I can’t use a render target (texture) like that with CUDA. The examples in the CUDA SDK only use pixel buffer objects as render targets and map those to CUDA.
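i.e. the pattern the SDK examples use is roughly this (untested sketch of the CUDA/GL interop calls; `myKernel`, `grid`, and `block` are placeholders for my own kernel and launch configuration):

```cuda
// Pack the framebuffer into a PBO, then map the PBO into CUDA.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4 * sizeof(float),
             NULL, GL_DYNAMIC_DRAW);
cudaGLRegisterBufferObject(pbo);   // once, at setup

// per frame: read the rendered coordinates into the PBO ...
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, 0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// ... and hand the buffer to CUDA without a round trip through the host
float4* d_coords;
cudaGLMapBufferObject((void**)&d_coords, pbo);
myKernel<<<grid, block>>>(d_coords, width, height);
cudaGLUnmapBufferObject(pbo);
```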
Why don’t you do the vertex transformation with CUDA (or on CPU) in the first place?
I have almost tried that, but I need the coordinates “per pixel”. I am searching for the right reflection position; that’s why I use the fragment shader to output all the coordinates. Or do you mean that, given the coordinates of all the reflector’s vertices, I could multiply them by the modelview matrix and do the interpolation myself in CUDA? Would that have the same effect?