Once again I come here to humbly request your aid.
I am working on a project where a complex model of a person has to be textured in real time using the images from 6-8 cameras.
So far it looks quite good; the only thing left to do is to remove some artifacts. For instance, when the person holds up a hand in front of the body, the texture gets projected onto both the hand and the body behind it.
I am using projective texturing, so I already perform all the calculations to get the distance of a fragment from each camera. However, I can't use separate shadow maps, since I don't have enough texture units available.
I could use the alpha channel of the camera textures for depth information, but I can't figure out a good way to get the data in there.
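To make the alpha-channel idea concrete, this is roughly what I have in mind for packing a camera-space distance into an 8-bit alpha value (a CPU-side sketch; `near` and `far` are assumed per-camera clip distances, and the quantization step shows the 8-bit precision limit I'm worried about):

```python
def pack_depth(dist, near, far):
    """Map a camera-space distance into [0, 1] and quantize it to
    8 bits, as it would be stored in an 8-bit alpha channel."""
    t = (dist - near) / (far - near)
    t = min(max(t, 0.0), 1.0)       # clamp outside the near/far range
    return round(t * 255) / 255.0   # simulate 8-bit storage
```

With only 256 depth levels over the near-far range, I suspect I'd need a fairly tight near/far interval per camera to avoid banding in the occlusion test.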
Is it possible to have a separate depth buffer for each camera so I can check for occlusion in the rendering pass?
Or maybe to output depth information directly into the alpha channel of my camera textures?
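The per-fragment test I'd then do for each camera would be equivalent to this shadow-mapping-style sketch (CPU-side for illustration; `bias` is a hypothetical fudge factor to avoid self-occlusion acne from the 8-bit quantization):

```python
def visible_from_camera(frag_dist, stored_alpha, near, far, bias=0.01):
    """Shadow-map style occlusion test: the fragment should receive
    this camera's texture only if it is not farther from the camera
    than the closest surface the camera saw along that ray."""
    stored_dist = near + stored_alpha * (far - near)  # unpack alpha
    return frag_dist <= stored_dist + bias
```

So for the hand-in-front-of-body case, the body fragment would fail this test for the front camera (the hand is the closest surface along that ray) and its texture contribution would come only from the cameras that actually see it.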