I really need your help. I’m working on a project that compares three methods of selecting (picking) objects on the screen using OpenGL: OpenGL native selection, color-buffer coding, and ray intersection, using triangle meshes loaded from files. Unfortunately, I can’t get the ray-intersection code to work 100% once the primitives have been transformed.
I need to transform the pick ray (which consists of an origin point and a direction vector) by the inverse of the current primitive’s transformation. Could someone show me how to do this using OpenGL under GLUT (no platform-dependent stuff, please)? Each ‘object’ in my scene stores both its transformation matrix and its inverse as part of the class, so it should be simple…
I’ve heard that gluUnProject() could be useful, but I haven’t had any success with that either.
I’m really stuck with this.
You need to transform the ray through the model matrix of the object you are intersecting. That means transforming both the point and the vector; the transforms for each are different.
I assume you mean that points are translated but vectors are not? All true, but getting the code to work is a different matter…
Surely the routines are identical except that with the directional vector, you omit the last column of the matrix?
Yes, vectors are invariant under translation (w = 0). Note, though, that it is normal vectors that are transformed by the inverse-transpose of the matrix; direction vectors are transformed by the matrix itself. With an orthonormal matrix the inverse is the transpose, so you can just use the original transformation matrix for the normals as well.
When doing lighting this has the counter-intuitive result that, if you don’t normalize your surface normals after you transform them (i.e. glEnable( GL_NORMALIZE )), an object will get darker as you scale it up and brighter as you scale it down.
Obviously, for maximum performance, try to use orthonormal matrices.
Ok thanks for the help so far. I’ve got the inverse transform code working. Here’s the situation:
I can add a sphere or triangle to the scene and do a basic raytrace on top of the OpenGL render to see if the ray-intersection code is working. It does. I then rotate and/or scale the object. It still works.
However, as soon as I translate the same object, the raytraced version is offset slightly; it does not appear on top of the original as it should.
As far as I can tell, the surface-intersection code is fine, but somewhere along the line the inverse transformation of the ray gets messed up.
I’m using the code from Hill’s Computer Graphics with OpenGL book and have stuck to his code quite faithfully; my camera and object classes are very similar, and yet this behaviour is very odd.
Has anyone else with experience of the raytrace code in this book suffered the same problem?
Ok, if anyone’s interested; I’ve fixed the problem. The inversely-transformed ray was being distorted by successive transformations as a result of the point-matrix multiplication.
Scales and rotations do not affect the translation component, which is why they were working fine.
Solving the problem was simply a case of rebuilding a fresh ray after each transformation.
Is gluUnProject completely useless? It seems like it would handle all of this, but I can find no examples of its use.
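It isn’t useless, but it only gets you the world-space pick ray, not the per-object inverse transform. The usual pattern is to unproject the mouse position at the near plane (winZ = 0) and far plane (winZ = 1) and take the difference. A sketch, assuming a current GL context with the matrices already set (error checking of gluUnProject’s return value omitted):

```cpp
#include <GL/glu.h>

// Build a world-space pick ray from a mouse click.
// GLUT reports mouse y top-down; OpenGL window coords are bottom-up.
void pickRay(int mouseX, int mouseY,
             double rayOrigin[3], double rayDir[3]) {
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    GLdouble wx = mouseX;
    GLdouble wy = view[3] - mouseY;  // flip y for OpenGL's convention

    GLdouble nearPt[3], farPt[3];
    gluUnProject(wx, wy, 0.0, model, proj, view,
                 &nearPt[0], &nearPt[1], &nearPt[2]);  // on the near plane
    gluUnProject(wx, wy, 1.0, model, proj, view,
                 &farPt[0], &farPt[1], &farPt[2]);     // on the far plane

    for (int i = 0; i < 3; ++i) {
        rayOrigin[i] = nearPt[i];
        rayDir[i]    = farPt[i] - nearPt[i];  // un-normalized direction
    }
}
```

You would still transform this ray by each object’s stored inverse matrix before intersecting, exactly as discussed above.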