Eye-space lighting problem with ortho projection

I'm doing deferred lighting in eye space. When using a perspective projection, everything is fine.
I have a wall of objects aligned with the XY plane, sitting entirely behind the origin on the -Z side. I have two lights, one spot and one point, that are slightly down the +Z (out of the screen). The camera is down the +Z, well past the lights. So, looking along the -Z, the order is: camera (+Z) -> lights (-X/+X, +Z) -> origin -> wall (-Z).
When viewing this scene with a perspective projection, the lights show up in the correct place, one down the -X on the left and the other down the +X on the right, and the light hits the wall correctly.
I also have an orthographic camera for the same scene (I'm switching between perspective and ortho to try and figure this out), but that camera is at approx. 95.0 down the +Z, and near and far are set to 1.0 and 150.0, so the far plane sits about 50 units behind the wall, at about -50.0 down Z.
The problem is that, with this orthographic camera, the lighting is only correct when I move the camera toward the origin. It starts looking pretty correct at about +Z = 2.5 or 3.0, but the lights look like they are behind the wall with the camera at +Z = 96.0.
I'm using glOrtho, passing near = 1 and far = 150, not gluOrtho2D; the scene will have varying Z values later, not just between -1 and 1, which I figured glOrtho would handle.
The light's world position is transformed by the camera's view matrix before being passed to the shader. This works perfectly for perspective, but seems to push the lights behind the wall when using an orthographic projection (the lights seem to move toward the camera, out from behind the wall, as the camera moves toward the wall).
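In GLSL terms, what I do on the CPU each frame amounts to something like this (the names here are just for illustration):

uniform mat4 viewMatrix;    // the camera's world-to-eye matrix
uniform vec4 lightPosWorld; // w = 1.0 for a positional light

// Eye-space light position handed to the lighting pass:
vec3 lightPosEye = (viewMatrix * lightPosWorld).xyz;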
Is there something I'm overlooking? Does the view matrix not have the same effect in eye space with an orthographic projection?

Do you store eye-space positions in your buffers, or are you computing them from the fragment coordinate and the depth buffer value?

Computing them from the non-linear depth buffer.

Edit:

// Uniforms:
// widthInv, heightInv are the reciprocals of the screen resolution
// depth is sampled from a 24-bit depth texture
// top, right, near, far are the projection values

vec2 ndc;
ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;

vec3 eye;
eye.z = near * far / ((depth * (far - near)) - far);
eye.x = (-ndc.x * eye.z) * right / near;
eye.y = (-ndc.y * eye.z) * top / near;
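For reference, that eye.z line is just the standard perspective depth mapping inverted. Assuming the usual glFrustum/gluPerspective matrix, a symmetric frustum, and the default glDepthRange(0, 1):

// Perspective depth mapping (after the divide by clip w = -eye.z):
//   ndc.z = (far + near)/(far - near) + (2.0 * far * near)/((far - near) * eye.z)
//   depth = ndc.z * 0.5 + 0.5
// Solving for eye.z gives the line above:
//   eye.z = near * far / ((depth * (far - near)) - far)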

Haven’t sifted your math super-carefully, but at first glance, the -eye.z/near scale factor in eye.x and eye.y doesn’t apply in orthographic, just perspective.

I'd go grab the GL ortho projection matrix and crank through it.
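Concretely, the row to crank through is the depth row of glOrtho. Since there's no divide by w with an orthographic projection (w stays 1.0), depth ends up linear in eye-space z:

// glOrtho depth row (eye.w == 1.0, so no perspective divide):
//   ndc.z = (-2.0 / (far - near)) * eye.z - (far + near)/(far - near)
// With the default glDepthRange(0, 1):
//   depth = ndc.z * 0.5 + 0.5
// Solving those two lines for eye.z (and the x/y rows for eye.x and eye.y) is a one-liner.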

Alright, I've been playing around with this. I thought rebuilding position from a perspective depth buffer would be enough, and I had forgotten about this subject for a while, but of course more is always needed. Thanks for the clues, previous posters.
This is what I now have for rebuilding position from a depth buffer written with a symmetric orthographic projection (where, unlike perspective, depth is linear in eye-space z).

vec3 eye;
eye.z = -near - (depth * (far - near)); // linear in depth, no divide needed
eye.y = ndc.y * top;
eye.x = ndc.x * right;

Can anyone see anything wrong with that? It seems to work; I just hope I'm not missing something and don't run into any surprises.
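For completeness, here are both reconstructions side by side as one sketch; the orthoProjection uniform is just something I made up to select the path, and near/far/top/right are the same projection uniforms as before:

uniform float near, far, top, right; // projection values
uniform bool orthoProjection;        // hypothetical selector, for illustration

// Reconstruct the eye-space position from NDC xy and a [0, 1] depth
// sample. Assumes a symmetric projection in both cases.
vec3 reconstructEyePos(vec2 ndc, float depth)
{
    vec3 eye;
    if (orthoProjection) {
        // Ortho: depth is linear in eye-space z.
        eye.z = -near - (depth * (far - near));
        eye.x = ndc.x * right;
        eye.y = ndc.y * top;
    } else {
        // Perspective: invert the non-linear depth mapping.
        eye.z = near * far / ((depth * (far - near)) - far);
        eye.x = (-ndc.x * eye.z) * right / near;
        eye.y = (-ndc.y * eye.z) * top / near;
    }
    return eye;
}

// Sanity check: in both branches, depth = 0.0 gives eye.z = -near and
// depth = 1.0 gives eye.z = -far.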