I have a fullscreen quad going from -1 to 1 in the X and Y axes, but staying in z=0. In the vertex shader I simply do…

gl_Position = vec4(inPos,1);

…so that it gets rasterized onto the full screen.

Now I have a view matrix and a projection matrix sitting in my vertex shader. I want to calculate the ray origins and directions for the 4 corners of my fullscreen quad and output them to the fragment shader so that they get interpolated automatically, which should work.

I know my eye is at

inverse(view_mat) * vec4(0,0,0,1) // world coordinates

But how about my quad’s vertices? My projection is done with…

glm::perspectiveFov(glm::radians(45.0f), 1280.0f, 720.0f, 0.01f, 100.0f)

…so if I’m correct, the frustum is centred, with the near clip plane spanning -640&lt;x&lt;640, -360&lt;y&lt;360 at z=0.01 (in view space). However, after applying the projection matrix and the perspective divide, the near clip plane becomes -1&lt;x&lt;1, -1&lt;y&lt;1 at z=-1. So I could probably get the world coordinates of the quad’s vertices (and the ray direction) by doing:

Why are you bringing the projection matrix into it?

I would assume that you want the eye position and quad vertices in the same coordinate system. But the eye position isn’t meaningful in normalised device coordinates (the Z coordinate will be negative infinity). It will be representable in clip coordinates, but the W component will be zero (specifically, the eye position in clip coordinates is [0 0 -1 0] regardless of the projection matrix).
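To see where that [0 0 -1 0] comes from, multiply the eye position through the standard OpenGL perspective matrix (writing $c = \cot(\mathit{fovy}/2)$, $a$ for the aspect ratio, and $n$, $f$ for the near and far distances):

$$
P\begin{pmatrix}0\\0\\0\\1\end{pmatrix}
=
\begin{pmatrix}
\tfrac{c}{a} & 0 & 0 & 0\\
0 & c & 0 & 0\\
0 & 0 & \tfrac{n+f}{n-f} & \tfrac{2fn}{n-f}\\
0 & 0 & -1 & 0
\end{pmatrix}
\begin{pmatrix}0\\0\\0\\1\end{pmatrix}
=
\begin{pmatrix}0\\0\\\tfrac{2fn}{n-f}\\0\end{pmatrix}
$$

Since $n - f < 0$, the Z component is negative and W is zero, i.e. a positive multiple of the homogeneous direction [0 0 -1 0], independent of $c$, $a$, $n$ and $f$.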

In normalised device coordinates, the normalised direction vector from the eye to any vertex (or indeed to any position in space) will always be [0, 0, 1]. There’s no need for calculations.

So you are telling me to use (0,0,1) as my ray direction and then (-1,-1,0) (-1,1,0) (1,-1,0) and (1,1,0) for the corner pixels (used for interpolation)? But then how would I do ray intersection with a bunch of spheres? Should I work in world-space coordinates? I’m trying to find the world-space coordinates for each of those 4 vertices and the direction of the eye ray that goes through those vertices.

OpenGL doesn’t have “world-space”. It has object coordinates (the raw coordinates passed to glVertex, glVertexPointer, etc.), eye coordinates (after the model-view matrix has been applied), clip coordinates (after the projection matrix has been applied), normalised device coordinates (after division by W) and window coordinates (after the viewport and depth-range transformations).

You probably want to work in object-space, as this makes the defining equations simpler (e.g. you can model any ellipsoid using the equation for a unit sphere, although you need to be careful about reflections in the case of non-uniform scaling). Eye-space may be more suitable, though, if you’re trying to ray-trace multiple objects at once.

The way I would approach it would be to pass 2D unit coordinates ((-1,-1) … (1,1)) as texture coordinates for the vertices. These are the X and Y components of the ray’s position in normalised device coordinates, where its direction is (0,0,1). Remember, in normalised device coordinates the view frustum is always the unit cube. Rays are parallel to the Z axis, and the points where the ray intersects the near and far planes are (X,Y,-1) and (X,Y,1) respectively.

In the vertex shader, take the homogeneous points (X,Y,-1,1) and (X,Y,1,1) and transform them via gl_ProjectionMatrixInverse for eye-space coordinates or gl_ModelViewProjectionMatrixInverse for object-space coordinates (or their equivalents if you’re using user-defined uniforms rather than compatibility uniforms). Store the transformed coordinates in variables which are interpolated and passed to the fragment shader.
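As a minimal sketch using user-defined uniforms (the uniform name invMVP is an assumption; it would hold the inverse of the combined model-view-projection matrix, giving object-space results, or just the inverse projection matrix for eye-space results):

```glsl
#version 330 core

// Fullscreen-quad corner in NDC, (-1,-1) … (1,1)
layout(location = 0) in vec2 inPos;

// Assumed uniform: inverse of projection * view (* model)
uniform mat4 invMVP;

// Homogeneous intersections of the ray with the near and far planes;
// interpolated across the quad and passed to the fragment shader.
out vec4 nearPos;
out vec4 farPos;

void main()
{
    gl_Position = vec4(inPos, 0.0, 1.0);
    nearPos = invMVP * vec4(inPos, -1.0, 1.0);  // near plane: z = -1 in NDC
    farPos  = invMVP * vec4(inPos,  1.0, 1.0);  // far plane:  z = +1 in NDC
}
```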

Within the fragment shader, the variables hold the eye-space or object-space coordinates where the ray for that fragment intersects the near and far planes. The parametric equation for the ray is given by linear interpolation between the two points. Note that whether you divide by W before or after interpolating affects whether the interpolant corresponds to linear depth or reciprocal depth (as with depth buffer values).
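Continuing the sketch (here the division by W happens after interpolation, and the unit sphere at the origin is just an assumed example scene):

```glsl
#version 330 core

in vec4 nearPos;   // interpolated homogeneous near-plane point
in vec4 farPos;    // interpolated homogeneous far-plane point

out vec4 fragColor;

void main()
{
    // Divide by W after interpolation to get the object-/eye-space points.
    vec3 p0 = nearPos.xyz / nearPos.w;
    vec3 p1 = farPos.xyz  / farPos.w;

    // Parametric ray p(t) = p0 + t * dir through this fragment.
    vec3 dir = normalize(p1 - p0);

    // Example: intersect a unit sphere at the origin (assumed scene).
    // |p0 + t*dir|^2 = 1  =>  t^2 + 2*b*t + c = 0
    float b = dot(p0, dir);
    float c = dot(p0, p0) - 1.0;
    float disc = b * b - c;
    if (disc < 0.0)
        discard;                          // ray misses the sphere
    float t = -b - sqrt(disc);            // nearest intersection
    vec3 n = normalize(p0 + t * dir);     // surface normal for simple shading
    fragColor = vec4(0.5 + 0.5 * n, 1.0);
}
```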

This approach has the advantage of taking the relevant matrices into account, so the results will be consistent with conventional (forward) rendering using those matrices. It will work with either orthographic or perspective projections, whereas assuming that the rays converge on an “eye position” will only work with a perspective projection. It will work with off-centre or non-aspect-preserving projections, whereas using fixed eye-space vertices won’t.

GCElements, thank you so much for the detailed explanation! I spent several days searching for a solution. I finally got the code working and checked it in VR; the matrix-inversion method works perfectly!
I published my code in an article, “Computing ray origin and direction from Model View Projection matrices for raymarching”.