Extracting visible vertices before normalization

Hi,

My task is to write OpenGL code that finds visible vertices before they are normalized. Is there any way to do that? I have tried researching it, but everything I find covers normalized techniques (getting a value between 0 and 1). If I have a cube, how can I find the 3D points (x, y, z values) that are visible to the camera? Thank you for your help.

Are you referring to viewport coordinates? You can program in viewport coordinates, but then you must specify to OpenGL, when using drawing commands, that your vertices are not normalized and that OpenGL should therefore normalize the coordinates for you.

Jeff

I don’t do this, by the way; I’m probably a freak of nature for programming in normalized device coordinates. lol

Jeff

I can’t believe it’s not possible to take a mesh with overlapping vertices and resize the vertices so there are no overlapping vertices, while the boundary of the object stays the same.

Visibility of vertices is never determined by OpenGL. Visibility only matters with regard to fragments, and that’s only through the use of a depth buffer.

Hey Alfonse,

I do need to use the z-buffer/depth buffer, but I need to know how I can obtain visible vertices before they are normalized. Can you please point me toward anything you know about this?

[QUOTE=OceanJeff40;1290811]Are you referring to viewport coordinates? You can program in viewport coordinates, but then you must specify to OpenGL, when using drawing commands, that your vertices are not normalized and that OpenGL should therefore normalize the coordinates for you.

Jeff[/QUOTE]

Hi,

I need the visible x,y,z coordinates in a general 3D coordinate frame, how would I go about doing that?

By the time you’re dealing with the depth buffer, you’re in the rasterization and Fragment Shader stages of rendering. There aren’t vertices in that phase; simply a list of fragments.

There is no way to know what vertices will be visible. Not without walking your entire scene graph and doing per-triangle visibility tests. Rasterizers are not built to do what you’re talking about.

Yes. But first let’s clarify your question:

1. Define “normalized”. My guess is you mean before the perspective divide has been done. Which means you are doing “is visible” tests in some space before NDC (e.g. clip space, or eye space).

2. Define “is visible”. By this I assume you mean it lies within the view frustum (the view volume that is potentially rendered to the screen).

3. And also, please confirm that you only want to perform visibility checks on a per-vertex basis. It’s worth confirming because, in the normal case of rendering triangles, that’s not very useful. You could, for instance, have a triangle where none of the vertices are “visible”, yet part of the interior of the triangle is visible.

4. One more thing. Is this a homework problem?

Hello,

1. I am looking for the 3D coordinates (x, y, z values) that would be visible to the camera. I don’t want the program to give me x, y, and z values between 0 and 1; I need visible values from a general 3D coordinate system.

2. Yes. In the view frustum of the camera

3. If I understood your question correctly, yes, I would need it on a per-vertex basis. For example, if I have a cube, I want to know which 3D vertices can be seen by the camera. So given the position and orientation of the camera, which surface vertices are visible to the camera?

4. It’s a small part of a research project.

I need visible values from a general 3D coordinate system

OpenGL does not work “from a general 3D coordinate system”. OpenGL only works relative to specific coordinate systems. Your shader code can transform from arbitrary user-defined spaces into the spaces OpenGL uses, or from OpenGL spaces into the spaces you use. But OpenGL itself only works within a specific set of spaces.

If I understood your question correctly, yes, I would need it on a per-vertex basis. For example, if I have a cube, I want to know which 3D vertices can be seen by the camera. So given the position and orientation of the camera, which surface vertices are visible to the camera?

This does not make sense. “Surfaces” do not have “vertices”; vertices are at the corners of a (triangular) surface. If you have a cube, all of the 8 vertices of the cube can be covered by other objects, yet you’ll still see some part of the surface.

The number of visible positions for any surface that is at all visible is infinite. You can use early fragment tests so that the fragment shader only gets executed for fragments that are visible. And you can use `gl_FragCoord` and the depth range/various matrices to compute the world-space position of that fragment.

But that would not give you every “vertex” that is visible; it would only give you every visible rasterized fragment’s position.

So do you need this on a per-vertex basis (the corners of the surface) or across the surface?

Hello Alfonse,

Thank you for your reply. I apologize for the confusion. I would need it across the surface. Basically, we need to create a 3D model by extracting a 2D surface. For example, if a camera is overlooking a table and I want to focus it on a certain position, I want to know what the coordinates are at the position the camera sees, so that it only gives the specific coordinates of that area and nothing behind or around it. I hope this clears the question up.

Thanks again for your time and help.

Ok, then it’s pretty simple. You can do this “is visible” test in any space, but it’s perhaps simplest in CLIP-SPACE. Just take your OBJECT-SPACE input point, multiply it by MVP (Modeling × Viewing × Projection) to transform it to CLIP-SPACE, and then determine if:

```
-w <= x,y,z <= +w
```

(where (x,y,z,w) is the point in CLIP-SPACE).

You know this has to be true because, after the perspective divide, your NDC-SPACE point must satisfy:

```
-1 <= x,y,z <= +1
```

Alternatively, you can do the “is visible” test in EYE-SPACE by testing the point against the 6 plane equations of the frustum bounds (for example). This is pretty cheap as it only involves a few dot products, one per plane equation.
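As a sketch, the clip-space test above can be written in plain Python (the row-major matrix layout and the helper names here are illustrative, not from the thread; this assumes a standard OpenGL-style perspective projection):

```python
import math

def perspective(fov_y, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix (row-major).
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def is_visible(mvp, p):
    # Transform the OBJECT-SPACE point to CLIP-SPACE and check
    # -w <= x, y, z <= +w on every axis.
    x, y, z, w = mat_vec(mvp, [p[0], p[1], p[2], 1.0])
    return all(-w <= c <= w for c in (x, y, z))

# A point in front of the camera passes; one behind the camera fails.
proj = perspective(math.radians(90.0), 1.0, 0.1, 100.0)
print(is_visible(proj, (0.0, 0.0, -5.0)))  # inside the frustum
print(is_visible(proj, (0.0, 0.0, 5.0)))   # behind the camera
```

Here MVP is just the projection matrix, i.e. the modeling and viewing transforms are taken as identity, so "eye space" and "object space" coincide in this toy example.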

Well, if I didn’t get it wrong, the technique you are looking for is called “frustum culling”. You can google that and should get a lot of good tutorials. Dark Photon already provided a possible approach, but I am not sure if you know enough about graphics programming to understand his answer.

Thanks for your help. I recently talked to my professor, and he wants me to use z-buffer and hidden-surface-removal techniques to calculate the vertices and indices visible to the camera. Can you point me to any resources, or guide me on how I should approach this?

After projecting the points to NDC and performing frustum culling, transform them to window space (using the viewport and depth range), use the window-space X/Y coordinates to perform a lookup in the depth buffer and compare the window-space Z against the retrieved depth value.
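To make those steps concrete, here is a small sketch in plain Python (the viewport size, the default [0, 1] depth range, and a GL_LESS-style comparison are assumptions; `depth_buffer` stands in for values read back from the real depth buffer):

```python
def ndc_to_window(ndc, viewport, depth_range=(0.0, 1.0)):
    # The standard glViewport / glDepthRange transform from NDC to window space.
    vx, vy, vw, vh = viewport
    n, f = depth_range
    x, y, z = ndc
    return (vx + (x * 0.5 + 0.5) * vw,
            vy + (y * 0.5 + 0.5) * vh,
            n + (z * 0.5 + 0.5) * (f - n))

def point_visible(ndc, depth_buffer, viewport=(0, 0, 4, 4), eps=1e-5):
    # Frustum test in NDC first, then compare window-space z against the
    # stored depth at the point's window x/y (GL_LESS-style, with a bias).
    if not all(-1.0 <= c <= 1.0 for c in ndc):
        return False
    wx, wy, wz = ndc_to_window(ndc, viewport)
    return wz <= depth_buffer[int(wy)][int(wx)] + eps

# 4x4 depth buffer: everything far away except one close occluder pixel.
depth = [[1.0] * 4 for _ in range(4)]
depth[2][2] = 0.2  # an occluder in front of the test point
print(point_visible((0.0, 0.0, 0.0), depth))    # hits the occluder at (2, 2)
print(point_visible((-0.9, -0.9, 0.0), depth))  # unoccluded elsewhere
```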

https://www.gamasutra.com/view/feature/131801/occlusion_culling_algorithms.php

Without rereading the details, what you basically do is this: divide your screen into several subsections, sort your objects from near to far, and transform your data into an appropriate coordinate system. Then iterate over the objects and check which screen subsections each one touches. For each affected subsection, first check the current z-buffer: if the stored depth is closer to the camera than the object’s depth, the object is occluded in that subsection. If it is occluded in all subsections it touches, you can drop it for this render pass. Otherwise, if the object is closer to the camera than the stored z-buffer value in at least one subsection, you update that subsection’s z-buffer and draw the object during the render pass, since it is not fully occluded.
If you use several z-buffer instances at different resolutions, you can optimize the process. For example, use a quad tree where each element stores the farthest value of its 4 subelements; this way you might reduce the number of subsection checks needed.

However, all I said is just what I memorized, so there might be some incorrect/suboptimal statements. For details, have a look at the article.
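The hierarchical part of that scheme can be sketched like this (plain Python, assuming a square power-of-two depth buffer; the two-level test here is a simplification of a full quad-tree traversal):

```python
def build_depth_pyramid(depth):
    # Each coarser level stores the farthest (largest) depth of its 2x2
    # children, so a test against a coarse texel is always conservative.
    levels = [depth]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        half = len(prev) // 2
        levels.append([[max(prev[2 * r][2 * c], prev[2 * r][2 * c + 1],
                            prev[2 * r + 1][2 * c], prev[2 * r + 1][2 * c + 1])
                        for c in range(half)] for r in range(half)])
    return levels

def is_occluded(levels, rect, nearest_z):
    # Cheap test first: if even the farthest depth on screen is closer than
    # the object's nearest point, the object is fully hidden.
    if levels[-1][0][0] < nearest_z:
        return True
    # Otherwise fall back to an exact per-pixel test inside the object's
    # screen-space rectangle (x0, y0, x1, y1).
    x0, y0, x1, y1 = rect
    return all(levels[0][y][x] < nearest_z
               for y in range(y0, y1) for x in range(x0, x1))

# All stored depths are 0.3, i.e. close occluders cover the whole screen.
pyramid = build_depth_pyramid([[0.3] * 4 for _ in range(4)])
print(is_occluded(pyramid, (0, 0, 2, 2), 0.5))  # object farther than 0.3
print(is_occluded(pyramid, (0, 0, 2, 2), 0.1))  # object closer than 0.3
```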

That’s the theory. But it’s probably overkill for just doing visible vertices checks.

In OpenGL, you’ve got several options to do these Z-buffer-based occlusion tests. Either use Occlusion Queries, or bind your depth texture to a shader and do direct lookups into the depth texture to compare depth values in the shader. With the former, you basically get the pipeline to tell you whether there’s any occlusion when you render a specific primitive or set of primitives you pass down the pipe. And with the latter you have to determine that yourself in the shader.

Which one works best depends on where you need that “is occluded” result (CPU or GPU) and on the details of your application. If you’re just doing point-based occlusion tests for individual vertices, then either method is pretty simple, and if your app isn’t performance-critical, feel free to choose either. However, if your app is very performance-critical, queries have to be handled with care to avoid pipeline bubbles (i.e. wasted CPU/GPU time) due to the GPU-to-CPU readback. That said, it is possible to either hide the readback latency or to completely avoid the readback when needed.

For examples of doing the former, just websearch for “opengl occlusion query tutorial” or “GL_SAMPLES_PASSED tutorial”. One of a number of hits: Occlusion Query (Conner) This is a basic implementation that will generate a pipeline bubble. However, if your app isn’t performance-critical, you may not care.

Thank you so much for your continuous help. Based on the links you posted, I think I was able to implement the occlusion query and get the number of samples. I have a couple of questions, though, if you don’t mind.

1. I am testing this query on a cube and I am getting values in the thousands for the number of samples. Is that normal, given that I would assume there would be many primitives/triangles drawn for a cube? And from what I understand, do those sample values mean how many points are visible at a specific camera location?

2. How can I get x,y,z values from these samples? Would I use glReadPixels or is there another way to get the coordinates before normalization occurs?

A number in the thousands isn’t unusual, no. You’re counting pixels (or sub-pixel samples) here, so even small “test primitives” can potentially cover quite a few. For example, a 32x32 pixel square may cover over 1000 pixels even with no MSAA. And with 8x MSAA, a 12x12 pixel square could cover that many subsamples.

And from what I understand, do those sample values mean how many points are visible at a specific camera location?

Intuitively, yes. More precisely, the returned sample count indicates how many of the pixels (or samples) covered by the test primitives you rasterized actually passed the depth test.

For example, imagine you were to render (rasterize) a triangle or quad with depth test and depth writes off, and in that case it would cover N pixels (or samples, if MSAA is enabled). Further suppose that if you were to rasterize it with depth test ON, half of the N pixels (or samples) would be occluded by some closer pixels in the depth buffer. Then you should expect that the sample count returned from your SAMPLES_PASSED query should be about 0.5 * N pixels (or samples).

2. How can I get x,y,z values from these samples? Would I use glReadPixels or is there another way to get the coordinates before normalization occurs?

You don’t get coordinates back with an occlusion query, just the aggregate statistic.

If you want the 3D coordinates of your test primitive, just transform them by ModelViewProj and do the perspective divide. If on the other hand you want the 3D coordinates of actual samples in the depth buffer, then you’re going to want to use the other approach (read depth values from a depth texture in a shader) and reconstruct position-from-depth.
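As a sketch of that position-from-depth reconstruction (plain Python; the row-major matrix layout, the `invert4` helper, and the default viewport are assumptions, and inverting the full MVP instead of just the projection would recover world or object space rather than eye space):

```python
import math

def perspective(fov_y, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix (row-major).
    f = 1.0 / math.tan(fov_y / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    # Gauss-Jordan inverse of a 4x4 matrix (fine for well-conditioned
    # projection matrices like the one above).
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        d = a[col][col]
        a[col] = [v / d for v in a[col]]
        for r in range(4):
            if r != col:
                factor = a[r][col]
                a[r] = [rv - factor * cv for rv, cv in zip(a[r], a[col])]
    return [row[4:] for row in a]

def window_to_eye(win, proj, viewport=(0, 0, 640, 480), depth_range=(0.0, 1.0)):
    # Undo the viewport/depth-range transform to get NDC, un-project through
    # the inverse projection, then undo the perspective divide.
    vx, vy, vw, vh = viewport
    n, f = depth_range
    ndc = [(win[0] - vx) / vw * 2.0 - 1.0,
           (win[1] - vy) / vh * 2.0 - 1.0,
           (win[2] - n) / (f - n) * 2.0 - 1.0,
           1.0]
    x, y, z, w = mat_vec(invert4(proj), ndc)
    return [x / w, y / w, z / w]

# Round trip: project a known eye-space point to window space, reconstruct it.
proj = perspective(math.radians(90.0), 640.0 / 480.0, 0.1, 100.0)
clip = mat_vec(proj, [1.0, -2.0, -10.0, 1.0])
ndc = [c / clip[3] for c in clip[:3]]
win = [(ndc[0] * 0.5 + 0.5) * 640.0, (ndc[1] * 0.5 + 0.5) * 480.0,
       ndc[2] * 0.5 + 0.5]
print(window_to_eye(win, proj))  # approximately [1.0, -2.0, -10.0]
```

This is essentially what `gluUnProject` does, with the depth value supplied by `gl_FragCoord.z` or a depth-texture lookup instead of the round trip shown here.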