Near plane quad

How would I go about generating a quad size of the screen?

I thought about computing the vertices straight in world space:

L - Lookat
E - Eye
N - (L-E)/||L-E||

and plane of quad is:

P(x,y,z) = Nx(x-Ex) + Ny(y-Ey) + Nz(z-Ez) = 0;

I used P(Ex + NEARx + Dx, Ey + NEARy + Dy, z) and solved for z for 4 corners. Two problems with that:

  1. It actually skews and distorts as I move the camera, so it looks like something is wrong with it.
  2. I can’t get Dx and Dy (distances from Eye to the edges of the screen in world).

Plus, it's too complicated; I'm sure there is a more elegant way. Any suggestions?

Thanks

What do you want this Quad for?
Is it something like an overlay for a HUD?

If so then yes there is a very elegant solution:

Make an ortho matrix the width and height of your screen for your projection matrix, and load the modelview matrix with the identity.

Then draw a quad of screen dimensions at Z = 0.0, or simply using 2f coords.

If you are texturing the Quad then make an ortho matrix for your texture which uses the width and height of the texture, and use TexCoords that run from 0.0 - 1.0 etc.

In the camera space your quad can be constructed easily:
Z is equal to -Znear for each vertex;
X,Y are the corners of a unit quad scaled by (extentX, extentY).

Then you can transform the coordinates to any other space.

Ah yes of course.
I don’t want to use an ortho projection, and I’m aware of that technique. I’m coding up a volume renderer on the GPU and using the world-space origins of the pixels as ray origins for the raycasting process. Normally I render a bounding box and use the pixels on its surface to generate rays, but this doesn’t work when the camera is inside the bounding box, and what good is a volume renderer if you can’t see inside? :slight_smile:

How do you get the ‘extent’ values? And can you remind me which inverse transforms I need to get from eye space back into world space?

Thanks


// fov = camera's *vertical* field of view, in radians (gluPerspective's convention)
ext_y = near_distance * tan(fov/2)
ext_x = ext_y * camera_aspect   // aspect = width/height

Assuming you have a camera attached to a spatial node, its world transformation (Tn)(Tn-1)(Tn-2)…(T0) will be exactly what you need.

I should actually keep a transformation, but unfortunately I just update the eye and lookat positions through GLU.

Even if you don’t keep it, you can always ask OpenGL about the current matrix:


Matrix m_view,m_proj;
glGetFloatv(GL_MODELVIEW_MATRIX,  m_view);
glGetFloatv(GL_PROJECTION_MATRIX, m_proj);

Hm, shouldn’t it be as simple as:


Mat4 vp = gl_ViewProjection; // get it somehow
Mat4 ivp = inverse(vp);

vec3 p0 = ivp * vec3(-1,-1,0); // bottom left
vec3 p1 = ivp * vec3( 1,-1,0); // bottom right
vec3 p2 = ivp * vec3(-1, 1,0); // top left
vec3 p3 = ivp * vec3( 1, 1,0); // top right

...
vec3 operator*(Mat4 m,vec3 v){ // helping proc (returns by value; returning a reference to a local would dangle)
   vec4 v4=vec4(v,1.0);
   vec4 o = m * v4;
   return vec3(o.x,o.y,o.z);
}

I tried both methods and both give me some weird, deforming quad that seems to be fixed in world space.

float extY = 1.f * tanf( RAD( 45.f / 2.f ) ); // vertical fov = 45 deg, near = 1
float extX = extY * winSize_.x / winSize_.y;

Mat44 proj, view, viewProj;

glGetFloatv( GL_PROJECTION_MATRIX, reinterpret_cast<float*>( &proj ) );
glGetFloatv( GL_MODELVIEW_MATRIX,  reinterpret_cast<float*>( &view ) );
viewProj = (view*proj).inverse();

Vec4D v1 = viewProj * Vec4D( -extX, -extY, -1.f );
Vec4D v2 = viewProj * Vec4D( extX, -extY, -1.f );
Vec4D v3 = viewProj * Vec4D( extX, extY, -1.f );
Vec4D v4 = viewProj * Vec4D( -extX, extY, -1.f );

Of course it’ll be deforming. You MUST keep your extX=1.0 and extY=1.0

Look at it like this: you’re transforming from clipspace to worldspace. Clipspace is [-1;1] ranges. The viewport aspect-ratio and FOV are taken care of by the projection-matrix already.
Nice that you noticed my z=0 example is incorrect, and should be z=-1 :).
Ah, and it should be “proj*view” instead of “view*proj”.

Hmm… got it by taking the inverse of transpose of view and proj matrices as in

viewProj = (view.transpose() * proj.transpose()).inverse();

And using [1,1] for x,y. My matrices are column-major, so I don’t know why this worked and not the other way around. The flickering, though… I have no idea how to fix it. An epsilon is no good.

It depends on the maths-lib you use, I guess.

Ilian’s method is almost right :slight_smile:
To get the nearplane quad in worldspace, compute the inverse of Projection*View and transform these 4 NDC corners of the nearplane by that. Finally divide the resulting vectors by w.

So it looks like

Matrix4 mvp_inv = (Projection*View).inverse();
Vec4 lowerleft  = mvp_inv * vec4(-1, -1, -1, 1);
Vec4 lowerright = mvp_inv * vec4( 1, -1, -1, 1);
Vec4 upperright = mvp_inv * vec4( 1,  1, -1, 1);
Vec4 upperleft  = mvp_inv * vec4(-1,  1, -1, 1);

lowerleft/=lowerleft.w;
lowerright/=lowerright.w;
upperright/=upperright.w;
upperleft/=upperleft.w;

Sorry for resurrecting this post, but I just had the same problem and wanted to say that the solution proposed last by skynet worked like a charm. DmitryM’s solution worked well for me too, but I had some precision problems with the plane not quite filling the entire viewport for some reason.

Anyway, for skynet’s solution I set up vertices with z=-1, w=1, and x and y at -1 and 1, as instructed. In glm, I used glm::mat4 m = glm::inverse(p*v). The mvp then becomes the identity matrix, so in the vertex shader I did gl_Position = pos_in.

In the fragment shader I needed the world coordinate, which becomes vec4 worldPos = m * pos; worldPos /= worldPos.w;. All working now. Thank you all for your help!