I'm developing a presentation that composites a prerendered image with a 3D object, and I need the 2D and 3D elements to line up perfectly. Our pipeline uses Maya and mental ray. From Maya, we render the image into two buffers:
- rgba color (*.ct)
- float32 depth (*.zt)
Now I'm having trouble matching the depth values from the *.zt file to the OpenGL depth buffer. It seems that mental ray doesn't respect the camera's near and far clipping settings, because some values fall outside the near-far range.
I'm using the following math to convert the .zt values to the 0…1 range, but it doesn't work properly:
mydepth = (1.0f - DepthNear/zt_file_depth) * DepthFar / (DepthFar-DepthNear);
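For reference, here is a minimal sketch of the conversion I believe should be needed, with clamping added for the out-of-range samples. It assumes the .zt buffer stores camera-space Z (which mental ray writes as negative in front of the camera, so the raw value is negated first) and that the OpenGL side uses a standard perspective projection with the same near/far planes; the function name is just for illustration:

```cpp
#include <algorithm>
#include <cmath>

// Convert one eye-space depth sample (as read from a .zt buffer) to an
// OpenGL window-space depth in [0, 1], assuming a standard perspective
// projection with the given near/far planes.
float ztToGlDepth(float ztDepth, float depthNear, float depthFar)
{
    // mental ray stores camera-space Z, negative in front of the camera
    float z = std::fabs(ztDepth);

    // Background samples (no geometry hit) come out as 0 or a huge value;
    // map anything outside the range onto the clipping planes.
    if (z <= 0.0f || z > depthFar)
        z = depthFar;
    z = std::max(z, depthNear);

    // Same mapping OpenGL applies after the perspective divide:
    //   depth = (far / (far - near)) * (1 - near / z)
    // z == near gives 0, z == far gives 1.
    return (1.0f - depthNear / z) * depthFar / (depthFar - depthNear);
}
```

With the clamping in place, stray values below the near plane or beyond the far plane (including empty background pixels) map cleanly to 0 and 1 instead of producing depths outside the OpenGL range.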
Any thoughts, suggestions, or hints on how to solve this?