How does gl_Position.w affect the Z-buffer?

I am making a custom projection system, and I am writing directly to gl_Position.xyzw in the vertex shader instead of using the traditional view-projection matrix.

I am currently setting the w parameter to 1.0.

Which value, z or w, is used to fill the Z-buffer? Do I break the Z-buffer by setting w to 1.0?

Neither as-is. The depth value after projection and the perspective divide, mapped to [0, 1], is used.

By manually setting w to 1.0, you express that no projection at all should take place. In general that’s a bad idea. But there are also times when you provide normalized device coordinates directly, in which case a w of 1.0 is absolutely necessary to preserve the coordinates for the viewport mapping. One classic example is a screen-aligned quad.
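The pass-through case can be sketched numerically. This is a minimal Python sketch with illustrative names (`viewport_map` is not a GL function): with w == 1.0 the divide is a no-op, so the NDC you emit reach the viewport mapping unchanged.

```python
# Illustrative sketch: NDC supplied directly with w == 1.0 survive the
# perspective divide intact and are mapped straight to window coordinates.

def viewport_map(ndc_x, ndc_y, vp_x, vp_y, vp_w, vp_h):
    """Map NDC in [-1, 1] to window coordinates for a given viewport."""
    wx = vp_x + (ndc_x * 0.5 + 0.5) * vp_w
    wy = vp_y + (ndc_y * 0.5 + 0.5) * vp_h
    return wx, wy

# A screen-aligned quad emitted as gl_Position = vec4(corner, 0.0, 1.0):
for corner in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
    x, y = (c / 1.0 for c in corner)   # divide by w == 1.0: a no-op
    print(viewport_map(x, y, 0, 0, 800, 600))
```

With any other w, the corners would be scaled by 1/w before the viewport mapping and the quad would no longer cover the intended screen area.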

Well, to be more precise: gl_Position contains the homogeneous coordinates of the already-projected vertex; there’s no further perspective transformation in the GL pipeline. The term “perspective divide” is IMHO a misnomer: the division is unrelated to the perspective projection itself, it is just the normalization of the homogeneous coordinates needed to get Cartesian coordinates.

The final value written to the Z-buffer is then remapped linearly from [-1…1] to the GL depth range (usually [0…1]), so the GLSL formula would be:

mix(gl_DepthRange.near, gl_DepthRange.far, 0.5 * gl_Position.z / gl_Position.w + 0.5)
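To make the mapping concrete, here is a minimal Python sketch of the same computation (the name `window_depth` is illustrative; the default depth range [0, 1] is assumed):

```python
# Sketch of the window-space depth the pipeline derives from gl_Position,
# assuming the default glDepthRange of [0, 1].

def window_depth(clip_z, clip_w, range_near=0.0, range_far=1.0):
    """Map a clip-space z/w pair to the value written to the depth buffer."""
    ndc_z = clip_z / clip_w                            # divide -> NDC in [-1, 1]
    t = 0.5 * ndc_z + 0.5                              # remap [-1, 1] -> [0, 1]
    return range_near + (range_far - range_near) * t   # GLSL mix()

# With w forced to 1.0 the divide is a no-op, so clip z passes straight through:
print(window_depth(-1.0, 1.0))  # 0.0 (near plane)
print(window_depth( 1.0, 1.0))  # 1.0 (far plane)
```

So the Z-buffer value depends on both z and w: z alone only after the division by w.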

By setting w to 1.0 you will break perspective-correct interpolation, because varyings are interpolated using 1/w for the perspective correction. Other than that, there should be no problem with setting w to 1.0.
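The role of 1/w in varying interpolation can be shown with a small Python sketch (names are illustrative, not part of any API): the rasterizer linearly interpolates attribute/w and 1/w in screen space, then divides.

```python
# Illustrative sketch of perspective-correct varying interpolation between
# two vertices with clip-space w values wa and wb.

def interpolate_varying(a, wa, b, wb, t):
    """Perspective-correct interpolation of a varying at screen-space fraction t."""
    num = (1 - t) * (a / wa) + t * (b / wb)   # interpolate attribute / w
    den = (1 - t) * (1 / wa) + t * (1 / wb)   # interpolate 1 / w
    return num / den

# With unequal w the result is pulled toward the vertex with smaller w:
print(interpolate_varying(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2, not 0.5
# With w == 1.0 at both ends it degenerates to plain linear interpolation:
print(interpolate_varying(0.0, 1.0, 1.0, 1.0, 0.5))  # 0.5
```

This is why forcing w to 1.0 while still doing a perspective projection yourself gives affine (screen-linear) varyings, visible as texture warping across triangles.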

I wouldn’t go that far. Typically eye-space .w == 1 (standard 3D). Of the two main projections (orthographic and perspective), perspective is the only one that populates clip-space .w with something other than eye-space .w, making use of the perspective divide in the position pipeline. See OpenGL Projection Matrix (Song Ho Ahn). That said, if you’re crafty you could use the perspective divide for something else, but that’s not the typical use case.
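That relationship can be checked numerically. A minimal Python sketch, assuming the column-vector convention and a symmetric gluPerspective-style frustum:

```python
import math

# Sketch: a standard perspective matrix writes clip.w = -z_eye via its
# bottom row [0, 0, -1, 0].

def perspective(fovy_deg, aspect, near, far):
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],   # this row makes clip.w = -z_eye
    ]

def transform(m, v):
    """Multiply a 4x4 matrix by a column 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

eye = [0.0, 0.0, -5.0, 1.0]   # a point 5 units in front of the eye
clip = transform(perspective(60.0, 1.0, 0.1, 100.0), eye)
print(clip[3])  # 5.0 == -z_eye
```

An orthographic matrix has [0, 0, 0, 1] in that row instead, so clip.w stays equal to eye-space .w and the divide changes nothing.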

Maybe it’s not a bad name per se, but I think it is confusing when you read a math book on homogeneous coordinates and projective geometry to understand the math behind the OpenGL transforms.

I think the name “perspective divide” suggests that this division is part of the perspective transformation, but math books explain that multiplying the coordinates by a non-zero factor (like 1/w) doesn’t change the represented point, so it isn’t really a transformation. All the math books I have read call this operation “normalization”, and considering that the result is called normalized (device) coordinates, I’d say that would have been the better name.