In a vertex shader, after transforming by the MVP matrix, e.g.
gl_Position = gl_ProjectionMatrix * cameraModelViewMatrix * gl_ModelViewMatrix * gl_Vertex;

(I have the camera transform separate from gl_ModelViewMatrix)

The values of gl_Position are mapped to a unit cube, correct? i.e. -1 < x, y, z < 1

I am trying to implement the effect of draping a texture over everything - basically like a projective texture or shadow map.

I thought I could use the x and y coords of
gl_lightPosition = lightProjectionMatrix * lightModelViewMatrix * gl_ModelViewMatrix * gl_Vertex
as the texture coordinates for the lookup.

Not quite: that mapping happens in the perspective division, which is necessary because we need perspective correction. The value output from the vertex shader is (x, y, z, w). Then the hardware divides by w and gets the normalized coordinates (x/w, y/w, z/w). x/w and y/w will be in [-1, 1], as will z/w, which is used in the depth test.
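For your draping effect, that means doing the same division per fragment and remapping xy from [-1, 1] to [0, 1] before the lookup. A minimal sketch, assuming uniforms named drapeTexture, lightProjectionMatrix and lightModelViewMatrix (the names are illustrative, not part of your code):

// Vertex shader: camera transform kept separate from gl_ModelViewMatrix, as in your setup
uniform mat4 cameraModelViewMatrix;
uniform mat4 lightProjectionMatrix;
uniform mat4 lightModelViewMatrix;
varying vec4 lightPos; // clip-space position as seen from the light

void main()
{
    gl_Position = gl_ProjectionMatrix * (cameraModelViewMatrix * (gl_ModelViewMatrix * gl_Vertex));
    lightPos = lightProjectionMatrix * (lightModelViewMatrix * (gl_ModelViewMatrix * gl_Vertex));
}

// Fragment shader: the perspective division for the light happens here
uniform sampler2D drapeTexture;
varying vec4 lightPos;

void main()
{
    vec2 ndc = lightPos.xy / lightPos.w; // in [-1, 1] for fragments inside the light frustum
    vec2 uv = ndc * 0.5 + 0.5;           // remap to [0, 1]
    gl_FragColor = texture2D(drapeTexture, uv);
}

Dividing in the fragment shader (rather than in the vertex shader) is what keeps the projection perspective-correct across a triangle, and it only makes sense for fragments in front of the light (lightPos.w > 0). You can also bake the 0.5 scale and bias into the light matrix and use texture2DProj instead.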

More precisely, the depth test uses window-space Z, which is
(z/w) * (f - n)/2 + (n + f)/2
where n and f are the parameters to glDepthRange. One thing D3D does better: its NDC Z range is already [0, 1] instead of [-1, 1].
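As a quick sanity check, here is a hypothetical helper that just spells the formula out:

// NDC z (i.e. z/w) to window-space z for glDepthRange(n, f).
// With the default glDepthRange(0.0, 1.0): z/w = -1 maps to 0 (near),
// z/w = +1 maps to 1 (far).
float windowZ(float zNdc, float n, float f)
{
    return zNdc * (f - n) * 0.5 + (n + f) * 0.5;
}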

Unrelated, but a performance tip you should keep in mind:
Your simple line:
gl_Position = gl_ProjectionMatrix * cameraModelViewMatrix * gl_ModelViewMatrix * gl_Vertex;
will do two matrix * matrix multiplications and one matrix * vector multiplication, which take 16 + 16 + 4 = 36 instructions. Bad.

If you bracket the code so that it resolves to only matrix * vector multiplications like this:
gl_Position = gl_ProjectionMatrix * (cameraModelViewMatrix * (gl_ModelViewMatrix * gl_Vertex));
it will only take 4 + 4 + 4 = 12 instructions!
That saves 24 instructions in a single GLSL line. Nice, eh?

And no, a compiler cannot do that automatically: the multiplication operator associates from left to right, and since floating-point math is not exactly associative, the compiler is not allowed to regroup it for you.

Better yet, calculate (gl_ProjectionMatrix * cameraModelViewMatrix * gl_ModelViewMatrix) on the CPU, once per object, and pass the combined matrix to the shader as a uniform.
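For instance, with a hypothetical uniform named mvpMatrix:

// Vertex shader: the two matrix * matrix products are now done once
// per draw call on the CPU instead of once per vertex.
uniform mat4 mvpMatrix; // gl_ProjectionMatrix * cameraModelViewMatrix * gl_ModelViewMatrix

void main()
{
    gl_Position = mvpMatrix * gl_Vertex; // a single mat4 * vec4: 4 instructions
}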