Texture Coordinates and Depth Buffer to obtain world coordinates

I am trying to figure out how to get the correct texture coordinates and depth-buffer values so that I can reconstruct world coordinates, but I am having trouble working out how to achieve this. My main question is how to extract texture coordinates from my fragment shader (I know I need GLSL, but I don’t know how to use it with Visual Studio 2017 or what commands I need to implement it), and how to obtain the depth-buffer values (I tried using glReadPixels but can’t figure out how to extract those values and use them).


    float depth_z = 0.0f;
    glReadBuffer(GL_FRONT);
    glReadPixels(x, y, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, &depth_z);
    float *newdepth = &depth_z;
    float z = *newdepth * 2.0 - 1.0; // Convert depth value to NDC
    tmat4x4<float> projectionMatrixInv = glm::inverse(projectionMatrix);
    tmat4x4<float> fullTransformMatrixInv = glm::inverse(fullTransformMatrix);
    float b = gl_FragCoord.xy * 2.0 - 1.0; // Convert x, y values to NDC
    vec4 clipSpacePosition = vec4(b, z, 1.0, float(w));
    vec4 viewSpacePosition = projectionMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    vec4 worldSpacePosition = fullTransformMatrixInv * viewSpacePosition;

    cout << "Visible points at: " << worldSpacePosition.x << " " << worldSpacePosition.y << " " << worldSpacePosition.z << std::endl;

Thank you for your time and help.

Do you want to use the world-space position from within the fragment shader or from within application code?

If it’s from within the fragment shader, do you want the position of the fragment being rendered or the fragment that’s in the depth buffer?

You can’t read the current depth buffer (the one which is being modified by rendering) from the fragment shader. To read a previously-rendered depth buffer from within the fragment shader, you need to first render to a framebuffer object with a depth-format texture bound as the depth attachment, then you can detach it from the framebuffer and bind it as a texture which can be read from the fragment shader.
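For reference, a minimal sketch of that first step might look like the following (names like depthTex, fbo, width and height are illustrative; completeness checks and error handling are omitted, and a colour attachment would be added alongside if the rendered image itself is also needed):

    // Create a depth-format texture to receive the depth values.
    GLuint depthTex = 0, fbo = 0;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Attach it as the depth attachment of a framebuffer object.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE); // depth-only pass: no colour buffer is written
    glReadBuffer(GL_NONE);

    // ... render the scene into this FBO ...

    // Switch back to the default framebuffer and bind the depth texture
    // so a later fragment shader can sample the stored depth values.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, depthTex);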

If you want the depth of the fragment that’s currently being rendered, it’s available as gl_FragCoord.z.

If you want to use the position from within application code, you need to read the depth buffer e.g. with glReadPixels().
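A minimal sketch of that, assuming window-space pixel coordinates (x, y) with OpenGL’s bottom-left origin:

    // Read a single depth value from the depth buffer at pixel (x, y).
    // Note the 1x1 read size: reading w*h values into one float, as in the
    // snippet above, would write past the end of the variable.
    float depth = 0.0f;
    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    // depth is in window coordinates, i.e. within the glDepthRange() range
    // (by default [0, 1]), matching gl_FragCoord.z for that fragment.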

Thank you for your response. Yes, I want to get the texture coordinates corresponding to the depth-buffer values, so I’m guessing that means I need to use the fragment that’s in the depth buffer. How would I go about doing that starting from the above code, meaning what should I change in order to get the correct results? Thank you for your time and help.

[QUOTE=snkhan42;1291250]
How would I go about doing that starting from the above code, meaning what should I change in order to get the correct results? Thank you for your time and help.[/QUOTE]
Your code doesn’t make any sense. You’re using OpenGL and GLM functions, which can only be used in application code (which runs on the CPU), alongside gl_FragCoord, which can only be used in GLSL code (which runs on the GPU).

Firstly, I think you need to spend more time learning the fundamentals of OpenGL. Particularly about how shaders and GLSL are used. Once you’ve done that (and it’s going to take some time), if you still have difficulties, you’ll need to explain clearly what it is you’re trying to do.

Thank you for your response. I understand that those two are different; I included gl_FragCoord to give you an idea of what I wanted. So here’s what I’m trying to do:

The idea of the project is to extract the visible vertices seen by a virtual camera when it is zoomed in on a specific position of a surface (removing any hidden surfaces so as to give accurate results).

Through research, I found that the way to get this is by reconstructing the position using the depth buffer. Once I have the x, y (texture coordinates) and z (depth-buffer value) coordinates, I should be able to reverse-engineer my way back to world coordinates. The coordinates I receive will be normalized, but world coordinates would give me a better picture in 3D space, so the end result needs to be in world coordinates.

In OpenGL, my main issue is finding a way to get the texture coordinates and the depth-buffer values, and I am confused as to what I need to change in the above code to accomplish this task. I am sorry for making the question too broad.

Please let me know if you need further clarification.

Well, you don’t “get” the texture coordinates, you choose them. Given x and y coordinates in pixels, you can retrieve a depth value from the depth buffer (read with glReadPixels()). Together, (x, y, depth) are the values which gl_FragCoord.xyz would have had while rendering that fragment.

Those values are in window coordinates. The next step is to convert them to normalised device coordinates (NDC) using the viewport and depth range, i.e. the values passed to glViewport() and glDepthRange(). Convert them to clip coordinates by setting w=1 (you can’t determine the actual value of w used during rendering, as that is lost during perspective division, but that doesn’t matter). Then transform them by the inverse of the model-view-projection matrix to get (homogeneous) object coordinates. Divide by w to get Euclidean object coordinates.
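Putting those steps together in application code, a sketch might look like this. It assumes GLM, the default depth range of [0, 1], a viewport with its origin at (0, 0), and reuses the projectionMatrix name from your snippet; viewMatrix is a hypothetical camera matrix, so inverting projection × view yields world coordinates, whereas folding the model matrix in as well (as described above) would yield object coordinates instead:

    #include <glm/glm.hpp>
    // plus your usual OpenGL header/loader for glReadPixels etc.

    glm::vec3 windowToWorld(int x, int y, int width, int height,
                            const glm::mat4& projectionMatrix,
                            const glm::mat4& viewMatrix)
    {
        // 1. Read the depth value at the pixel (window coordinates, [0, 1]).
        float depth = 0.0f;
        glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        // 2. Window coordinates -> normalised device coordinates, [-1, 1].
        //    The +0.5 targets the centre of the pixel.
        glm::vec3 ndc((x + 0.5f) * 2.0f / width  - 1.0f,
                      (y + 0.5f) * 2.0f / height - 1.0f,
                      depth * 2.0f - 1.0f);

        // 3. NDC -> clip coordinates with w = 1 (the original w is lost in
        //    the perspective division, but it isn't needed).
        glm::vec4 clip(ndc, 1.0f);

        // 4. Transform by the inverse view-projection matrix, then divide
        //    by w to get Euclidean world coordinates.
        glm::vec4 world = glm::inverse(projectionMatrix * viewMatrix) * clip;
        return glm::vec3(world) / world.w;
    }

One thing to watch: window coordinates here have a bottom-left origin, so a mouse position with a top-left origin needs its y flipped (y = height - 1 - mouseY) before being passed in.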

Which part of that are you having trouble with?

Thank you for your continuous help; I appreciate it. I am having trouble choosing the x, y values and then retrieving the respective depth value. gl_FragCoord only exists in GLSL, and I don’t know how to use it with my OpenGL program in Visual Studio 2017. I understand that GLSL is an extension of OpenGL, but I don’t know what I need to do to use gl_FragCoord.xyz to convert back to world coordinates. Thank you for your time.