So, I’ve been working with OpenGL for almost a year now and have managed to avoid one stupid thing this entire time, admittedly on purpose.
I have an X, Y, and Z value, before any rotation or translation has been applied.
I rotate and translate with calls to glRotate and glTranslate. (the translation is not the issue)
Now I need to find out where that vector ends up after those transforms, for the purpose of collision detection.
I have looked around for literally dozens of hours for the solution to this issue, and only ever found pieces of what I need.

I would be more than happy to provide more information on the specifics of what I am trying to figure out, since I am pretty sure this original message will not make much sense. (it was written at 1:30AM, after a 4 hour googling binge)

I am not really sure if this is the correct way to go about it, but since all these rotations and translations update the modelview matrix, what you need to do first is obtain the modelview matrix with something like:

Thanks! I knew I had to do something like that, but I thought I was only supposed to use part of the matrix…
I’ll give it a shot in a bit and return with results.

You only include pointw in your calculations if you are dealing with 4-component (homogeneous) vectors.
For direction vectors, the w component is 0, which eliminates the last part of your calculation. For position vectors, the w component is 1, so it is used in the calculations (as you have done).
Now, if we are talking about a GL rotation vector X, Y, Z on the CPU, then there is no W component (or assume W = 0). The effect is that when the vector is multiplied by a matrix, the last column of the matrix (the translation part) contributes nothing. If the vector instead had W = 1, the last column would be used in the calculation and the translation would be applied.

This is why fixed-function lighting used 0 or 1 to control the behavior of the light: the position vector (x, y, z, 0|1) is multiplied by the ModelView matrix. If w = 1, the translation is included and the light is treated as positional; if w = 0, it is treated as directional.

There has to be SOME way to get the transformed, scaled and rotated coordinates.

In general, the way this normally works is that the physics system decides where things are (collision detection being part of physics) and how they’re oriented. You pass that information along to OpenGL when you render that object. So most people simply have no need to do what you’re talking about.

In any case, you don’t say what space you want these “transformed, scaled and rotated coordinates” in. I’m guessing world-space, which is why simply using GL_MODELVIEW isn’t helping. That matrix transforms to camera space, not world-space.

In that case, what you need to do is stop relying on OpenGL’s matrix functions and do it yourself. You need to build a model-to-world matrix separately from your world-to-camera (which I imagine you build with gluLookAt). Then, when you want to render an object, you push your model-to-world matrix onto the OpenGL stack with glMultMatrix.