I’m trying to render dynamic cubemaps to do reflections in my app. So far I’ve got the actual rendering working, but I haven’t been able to get the cubemap to map correctly onto the reflective object. In my app I keep the camera and object transform matrices separate (they multiply together to form the modelview), and I’m rendering the cubemap in world space, so the same map is generated regardless of camera position.

If I use the normal transformed by the transposed inverse of the world matrix — the world-space normal, right? — I get what appears to be correct results on a sphere, but this obviously isn’t the correct way of doing it. I tried taking the vector from the world-space eye position to the world-space vertex and reflecting it using the world-space normal in my shader, but that just produced a crazily distorted image. I’ve verified that the world-space eye position is correct, and my cubemap texture renders correctly as a cross, so I’m pretty sure it’s right. Is this even the right approach?
Yes, transform normals with the inverse transpose of the matrix that transforms points.
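To see why the inverse transpose matters, here’s a minimal sketch in pure Python (3x3 matrices as nested lists, no graphics API). Under a non-uniform scale, transforming the normal with the model matrix itself breaks perpendicularity; the inverse transpose preserves it. The matrix values are just an illustrative example:

```python
def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A non-uniform scale: x stretched by 2.
M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

# Inverse of a diagonal matrix is just the reciprocal of the diagonal.
M_inv = [[0.5, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
M_inv_T = transpose(M_inv)

# A surface tangent and its normal, perpendicular in object space.
tangent = [1.0, 1.0, 0.0]
normal  = [1.0, -1.0, 0.0]
assert dot(tangent, normal) == 0.0

t_world = mat_vec(M, tangent)       # tangents transform with M itself
n_naive = mat_vec(M, normal)        # wrong: no longer perpendicular
n_right = mat_vec(M_inv_T, normal)  # correct: inverse transpose

print(dot(t_world, n_naive))  # 3.0 — the "normal" is no longer a normal
print(dot(t_world, n_right))  # 0.0 — still perpendicular to the surface
```

For pure rotations the inverse transpose equals the matrix itself, which is why the bug only shows up once scaling (or shear) is involved.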
Know what space your vectors are in, and ensure that all math is carried out in that one space only, be it eye or world. If your cubemap is defined in world space, then be sure all your vectors are in world space. If instead your vectors happen to be in eye space, you can do the reflection in eye space, but you then have to transform the resulting reflection vector back into world space (by the inverse of the view rotation) before using it to sample the cubemap.
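For the world-space path, the lookup vector is just the eye-to-vertex vector reflected about the world-space normal. A minimal sketch in pure Python, mirroring GLSL’s `reflect()`; the positions and the variable names (`eye_ws`, `pos_ws`, `n_ws`) are illustrative, not from your code:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

def reflect(i, n):
    # GLSL-style reflect: i - 2*dot(n, i)*n, with n assumed unit length
    d = dot(n, i)
    return [i[k] - 2.0 * d * n[k] for k in range(3)]

eye_ws = [0.0, 0.0, 5.0]             # world-space camera position
pos_ws = [0.0, 0.0, 1.0]             # world-space vertex position
n_ws   = normalize([0.0, 0.0, 1.0])  # world-space normal (via inverse transpose)

# Incident vector points from the eye TOWARD the vertex.
i = normalize([pos_ws[k] - eye_ws[k] for k in range(3)])
r = reflect(i, n_ws)  # direction to sample the world-space cubemap with

print(r)  # [0.0, 0.0, 1.0]: a head-on view reflects straight back at the eye
```

A classic source of the "crazy distorted image" symptom is getting the incident vector backwards (vertex-to-eye instead of eye-to-vertex), or feeding `reflect()` an unnormalized normal — both silently produce a wrong but plausible-looking lookup direction.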
Assuming you understand the math of reflection, the rest is really just a matter of consistency in spaces, and making sure your implementation is actually doing what you think it is.