A while back I posted about a problem where my OpenGL preview and my raytracer output didn't quite match.
I wasn't able to fix it, so I'm going to ask a few questions and see if I've missed something stupid.
- My input files are Maya OBJ files exported from 3ds Max and Deep Exploration. These will be in object space when they're loaded, right?
OpenGL then transforms them through the modelview and projection matrices to render them on screen.
Now, I don’t transform them through anything at all.
When I fire rays at them (I use the OpenGL viewport parameters), will the rays miss them (or give weird output) if I don't transform the models' verts through the matrix pipeline before raycasting?
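In case it helps, here's roughly what I mean by "transforming the verts": grab the current modelview matrix and push every object-space vertex through it before intersecting. Just a sketch with a made-up Vec3/mesh type, not my actual code:

```cpp
#include <vector>
#include <GL/glu.h>

struct Vec3 { double x, y, z; };

// Push one object-space vertex through a 4x4 OpenGL matrix.
// OpenGL matrices are column-major, so element [col*4 + row]; w is assumed 1.
Vec3 transformVert(const GLdouble m[16], const Vec3& v)
{
    Vec3 out;
    out.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12];
    out.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13];
    out.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14];
    return out;
}

// Grab the current modelview matrix and transform every vertex
// before intersecting rays against the mesh.
void transformMesh(std::vector<Vec3>& verts)
{
    GLdouble modelview[16];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    for (size_t i = 0; i < verts.size(); ++i)
        verts[i] = transformVert(modelview, verts[i]);
}
```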
Something that might help: when the camera is parallel to the z axis the output looks almost correct, and it is most corrupted and warped when the camera is at a large angle to it.
Do I need to transform the verts through the transform pipeline?
The reason I ask is that I was using gluUnProject, which maps a pixel in screen coordinates back to a world-space position.
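Stripped down, this is roughly how I'm building the rays with it (a sketch, my real code has different names): un-project the pixel at the near plane (winZ = 0) and the far plane (winZ = 1) and use the difference as the ray direction.

```cpp
#include <GL/glu.h>

struct Ray { double ox, oy, oz, dx, dy, dz; };

// Build a ray for the pixel (winX, winY).
// Note: gluUnProject measures winY from the bottom of the window.
Ray makeRay(double winX, double winY)
{
    GLdouble modelview[16], projection[16];
    GLint    viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX,  modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, modelview, projection, viewport, &nx, &ny, &nz);
    gluUnProject(winX, winY, 1.0, modelview, projection, viewport, &fx, &fy, &fz);

    Ray r = { nx, ny, nz, fx - nx, fy - ny, fz - nz };  // direction not normalised
    return r;
}
```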
Can anyone help? I’m going crazy here.