Number of pixels in a fragment - possible to calc?

I thought about that, but how would that work? I mean… the fragment shader is constantly rendering… well actually I guess that makes sense, it’ll be based on the display function… sorry for my slowness, the code for the CPU side is really complicated too :slight_smile: But I think I might have the idea to make this work now…

Well… now that I’ve played around with the equations a bit, I still can’t come up with a way to derive the area per pixel from that, since the fragment shader doesn’t know how many pixels the object actually covers; even if I have the total area, I don’t really have any way to divvy it up between the existing pixels. I can derive an estimate of how many pixels I think an object SHOULD take up at a given range using the equation:

est#ofpix = (total_srfcarea_of_obj) / ((0.004 * range)^2)

(example: 400m / (0.004 * 100m)^2 = 400m / 0.16 = 2500 px => 50x50)

But I can’t even see how to apply that… sometimes the number of actual pixels is greater than the estimate, sometimes it’s less. If it were always one or the other I could possibly just take the ceiling or the floor and work with that… but it’s not always off by just fractional amounts, it’s sometimes off by a full pixel, so I’m not even sure about that anymore.

Man. I’m beyond lost as to where to go at this point. Who would have thought pixels were so complicated?

Is there an explicit need to calculate the area per fragment? Can’t you do the calculation per object and store the total area of the object in a texture? This way you can render each object as one pixel, read the real surface area from a lookup texture (or from a uniform buffer) and do your calculation using that area.

Whether this is an option depends on what kind of calculation you need. If the calculation is uniform over the area of the objects this should be possible I guess. However, if the calculation depends on the location on the object it is another matter.

Otherwise, I’m a bit out of ideas because it really depends on what calculation you actually want. You might even look into OpenCL, which might suit your purposes better.
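
If the per-object route does fit, getting the precomputed area into the shader is the easy part; a minimal sketch, assuming a uniform named u_objectArea (the name is just an example) and a GL 2.0+ function loader already set up:

    // Sketch only: hand a precomputed per-object surface area to the shader as a uniform.
    // "u_objectArea" is an assumed uniform name, not something from the existing code.
    void setObjectArea(GLuint program, float surfaceAreaMeters2) {
        glUseProgram(program);
        GLint loc = glGetUniformLocation(program, "u_objectArea");
        if (loc >= 0)
            glUniform1f(loc, surfaceAreaMeters2);
    }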

Couldn’t you use occlusion queries for this? Start an occlusion query, render the object, get the number of samples (which, afaik, equals the number of fragments if multi/supersampling is off). Then re-render the object with the fancy area-needing shader, passing the result from the query as a uniform or something.

Unfortunately, yes, we need (as far as I can determine) the area per pixel. I never imagined getting it was going to be so difficult. haha. Though being able to pass in the surface area of the object as a uniform could be valuable. I just haven’t figured out how to do the math from there.

The reason for needing it per pixel has to do with needing to determine the amount of light reaching virtually every point on an object (or, as I’ll explain, as close a facsimile as we can represent). The light has a Gaussian distribution, so even for a completely flat object facing the viewer (like in the pictures I posted before), at the same depth value the amount of light reaching each pixel is going to vary slightly, depending on how far you move away from the center of the quad. (i.e. at 100m, moving left or right from the center of the square, the amount of light reaching the pixel varies.) Then on top of that, the amount of light varies OVER each individual pixel: if each pixel is 10m wide, and the pixel is to the right of the center of the quad, the amount of light reaching the right side of that pixel is LESS than at the left side of that same pixel. Because there isn’t anything smaller that we can break a pixel into, we simply integrate the values over the area of that pixel. So we have to know the area of that pixel. (This is all according to the physicist I’m paired with, who’s roughly 1.2 thousand times smarter than I am, so if that description didn’t make sense, it’s because I didn’t follow him right. Hahaha.)
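
For what it’s worth, if the distribution really is a separable Gaussian, the integral over a single pixel’s footprint has a closed form via the error function. A minimal sketch, where sigma, I0 and the pixel bounds are placeholders rather than anything from our actual model:

    #include <cmath>

    // Integral of exp(-x^2 / (2 sigma^2)) over [a, b], via the error function.
    double gaussianSlice(double a, double b, double sigma) {
        const double s = sigma * std::sqrt(2.0);
        return sigma * std::sqrt(std::acos(-1.0) / 2.0) * (std::erf(b / s) - std::erf(a / s));
    }

    // Light reaching one pixel whose footprint in object space is [x0,x1] x [y0,y1]
    // (meters, measured from the beam center), for a separable Gaussian with peak
    // irradiance I0 and standard deviation sigma.
    double lightOverPixel(double x0, double x1, double y0, double y1,
                          double sigma, double I0) {
        return I0 * gaussianSlice(x0, x1, sigma) * gaussianSlice(y0, y1, sigma);
    }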

So we can’t do it over the area of the entire object, we have to do it by pixels… obviously if the object is only one pixel, that pixel covers the entire area of the object. But without knowing the number of pixels composing the object itself, it’s looking pretty difficult to determine the area per pixel, because of that discontinuity in the trend as an object moves toward the edge of its visibility range and gets displayed at the same size over a significant distance. If I continue to calculate the area per pixel using a linear trend depending on distance from the camera, then once I reach the distance where the object stays stuck at 2px x 2px, it will greatly inflate the area per pixel over what it actually should represent. I don’t really know how to get around this and I’m with you, I’m rapidly running out of ideas.

You mentioned OpenCL… how might that help? I’ve heard of it before but I’ve never really looked into it, I’d heard it was parallel-programming oriented. I’ve looked at CUDA before, but I don’t see where that would help here (other than potentially speeding up the computation times, which is something we’ve actually talked about implementing in the long run once we start getting a more realistic model working).

deep breath

I… have a very limited grasp of what you just said, but it sounded promising. If you can’t tell, I’m pretty novice-level at this kind of stuff so I’m struggling a bit. This is the best I could gather from what you said and from looking around online, you might have to dumb it down for me a bit though… :stuck_out_tongue:

An occlusion query (something like NV_occlusion_query?) gets the number of pixels displayed for an object after it’s been drawn…? Then after it’s been rendered, render it again and pass in the surface area and the number of pixels rendered? Assuming I’m understanding this right, it does sound a bit promising, although rendering twice sounds like it would cause a bit of a performance hit, wouldn’t it? Would it be a noticeable one? I’ve never even heard of occlusion queries before, so I don’t know a lot about this.

I suggest you read the specification; it’s sections 4.1.7 and 6.1.12 in the OpenGL 2.1 specs.

Also check out the extension text, it contains some more useful info and sample code: http://www.opengl.org/registry/specs/ARB/occlusion_query.txt

So, what I was thinking was:

  • Start a query.
  • Render the current object in such a way that the depth test is always passed for visible fragments from that object. This might be slightly tricky. At this point you don’t need lighting or any fancy fragment shaders, as it’s the depth test that’ll count. So disable writing of color.
  • End the query, and retrieve the results. This should now be the number of visible pixels.
  • Bind the “fancy” shader, set the appropriate uniform to the visible pixel count you just got.
  • Render object again.

Now ideally some of these steps should be interleaved with the rendering of other objects, as it takes some time to get the results back from a query after it’s ended. Depending on how exactly you’re rendering stuff, you could perform the query for object N, do the “proper” render of object N-1, then get the result from the query for object N.

I’m not well versed in these things, but I think this should work.
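
In rough code, that flow might look something like the sketch below (the two callbacks and the u_visiblePixels uniform name are placeholders for your own code):

    // Sketch only: assumes an OpenGL 1.5+ context and function loader are already set up.
    // 'areaProgram' is the "fancy" shader; the callbacks stand in for however you draw
    // the object, and "u_visiblePixels" is just an example uniform name.
    #include <functional>

    void renderWithPixelCount(GLuint areaProgram,
                              const std::function<void()>& drawObjectDepthOnly,
                              const std::function<void()>& drawObjectWithAreaShader) {
        GLuint query;
        glGenQueries(1, &query);

        // Pass 1: count the samples that pass the depth test, without touching color.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawObjectDepthOnly();                      // no lighting or fancy shaders needed here
        glEndQuery(GL_SAMPLES_PASSED);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // Pass 2: hand the count to the shader and render the object for real.
        GLuint samplesPassed = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed); // stalls until the GPU
                                                                     // finishes, so ideally
                                                                     // interleave other work
        glUseProgram(areaProgram);
        glUniform1f(glGetUniformLocation(areaProgram, "u_visiblePixels"),
                    (GLfloat)samplesPassed);
        drawObjectWithAreaShader();

        glDeleteQueries(1, &query);
    }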

>> (example: 400m / (0.004 * 100m)^2 = 400m / 0.16 = 2500 px => 50x50)

Double check your dimensional analysis on this. I’m getting units of 1/meters, which isn’t a pixel count.

Well… it looks to me like in the end it’s just a matter of precision. When your objects reach pixel size, there is just no way of telling how much area a pixel covers (well… you can go a small step further by taking samples within a pixel, but that just postpones the limit).

The rasterizer just has limited precision. If you want more precision you have to do the calculation on a magnified viewport. Perhaps you can do the calculation per object on a viewport of, say, 100x100 pixels, no matter how far away or how large the object is. You then store the results of these calculations in a lookup texture, for example (downsampling the per-object results to the size the object would actually be on the final image).

Of course this will take a lot more calculation time. Perhaps you could use the above described method to only store the area per pixel for each pixel in the final image. That way you have access to these areas and can do the calculation on the final image with the appropriate areas.
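
For example, the magnified pass doesn’t have to touch the window at all; it can go to an offscreen framebuffer object at whatever resolution you like. A rough sketch, with sizes and formats purely illustrative:

    // Sketch only: create an offscreen render target at a magnified resolution, independent
    // of the window size. Assumes a context with framebuffer objects and float textures.
    GLuint createHighResTarget(int W, int H, GLuint* colorTexOut) {
        GLuint fbo, colorTex, depthRb;

        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, W, H, 0, GL_RGBA, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenRenderbuffers(1, &depthRb);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

        // Render the per-object pass with glViewport(0, 0, W, H), then read back or
        // downsample colorTex; bind framebuffer 0 to get back to the normal window.
        *colorTexOut = colorTex;
        return fbo;
    }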

I’m afraid I can’t help you any further with this problem. The rasterizer just checks which pixels (or samples) should be filled and which shouldn’t. Because pixels (and samples) have a fixed size, there is limited precision, and when objects become smaller the rasterizer sometimes has to round them to a single pixel, or to zero pixels.

As for my suggestion of OpenCL: I haven’t looked into OpenCL myself. It is just that I imagine OpenCL is more like a traditional way of calculating things, only in parallel. With OpenGL you are bound to the rasterizer, which gives you all kinds of problems. OpenGL is not really meant for doing calculations… that is what OpenCL is for.

I’m in agreement with previous posters that the most likely practical solution for this is to render the image at a higher resolution (perhaps many times higher; draw the image one section at a time if necessary), and then average blocks of pixels. Recent ATI hardware re-introduces supersampling, which does this automatically (up to a factor of 8 samples per pixel).

As long as (1) the desired results can be approximated with point samples, and (2) taking more samples brings the approximation closer to the ideal results, then this is probably the most sensible approach to take.
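
If the supersampled image is read back, averaging the blocks afterwards is simple; a minimal sketch, with the single-channel buffer layout being an assumption:

    #include <vector>

    // Average N x N blocks of a supersampled single-channel image (width W, height H,
    // both divisible by N) down to the final resolution.
    std::vector<float> downsample(const std::vector<float>& src, int W, int H, int N) {
        std::vector<float> dst((W / N) * (H / N), 0.0f);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                dst[(y / N) * (W / N) + (x / N)] += src[y * W + x] / float(N * N);
        return dst;
    }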

However, in the spirit of answering the question actually asked, here is a method that will probably do what you want, presuming you have recent graphics hardware.

Recent GPUs are so powerful and so flexible, that I believe you can implement a polygon clipping algorithm in the fragment shader, and thereby compute the geometric area in object coordinates that corresponds to the fragment on the screen. I expect it will be slow. It will probably be very, very slow.

But hey, you can get the area of every fragment computed to the limits of floating point accuracy!

The excruciating details follow:

Let’s start with drawing a single triangle.

Pass, to the fragment shader, as uniforms, everything that you would normally have provided to the vertex shader: The 3 triangle vertices, and whatever transformation matrices you would normally apply to them. You will also need to provide an inverse transformation matrix, and the screen/window size in pixels.

Draw a full screen quad. This will ensure that for every pixel on the screen, a corresponding fragment will be sent to the fragment shader (aka pixel shader).

Inside the fragment shader:

  1. Find the triangle vertices in screen space. Transform the triangle vertices from “object” space to screen (pixel) space by multiplying the vertices by the transformation matrix, performing perspective division, and scaling the result to screen pixels (this last scaling would normally be specified with glViewport; a sketch of this transform follows the list).
  2. Find the fragment corners in screen space. This is easy. gl_FragCoord is located at the pixel center, so (gl_FragCoord.x-0.5, gl_FragCoord.y-0.5) is the lower left corner of the fragment, and (gl_FragCoord.x+0.5, gl_FragCoord.y+0.5) is the upper right corner of the fragment.
  3. Compute area. Find the polygon that is the intersection of the triangle and the fragment rectangle. Project the polygon vertices back into object space using the inverse matrix, and compute the area of the resulting polygon (convex, with up to 7 sides, I think).
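
Here is a minimal CPU-side sketch of the transform in step 1, doing the same thing the GLSL version would (column-major matrix, as OpenGL conventionally stores them; all names are illustrative):

    struct Vec4 { float x, y, z, w; };

    // Multiply a column-major 4x4 matrix (OpenGL convention) by a column vector.
    static Vec4 mul(const float m[16], Vec4 v) {
        return { m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w,
                 m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w,
                 m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
                 m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w };
    }

    // Step 1: object space -> clip space -> NDC -> window (pixel) coordinates.
    static Vec4 objectToWindow(const float mvp[16], Vec4 objVertex, float viewW, float viewH) {
        Vec4 clip = mul(mvp, objVertex);
        Vec4 win;
        win.x = (clip.x / clip.w * 0.5f + 0.5f) * viewW;   // what glViewport normally does
        win.y = (clip.y / clip.w * 0.5f + 0.5f) * viewH;
        win.z = clip.z / clip.w;                           // depth, kept for completeness
        win.w = clip.w;                                    // keep w if you need to invert later
        return win;
    }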

Step 3 in more detail, showing some of the special cases:
(a) If the 3 triangle vertices are all to the left, all to the right, all above, or all below the boundaries of the fragment, then the triangle is entirely outside the fragment. The fragment has zero area.
Otherwise,
(b) If all 3 triangle vertices are entirely inside the fragment boundaries, then the triangle is entirely inside the fragment. The fragment area is the triangle area (computed using the original vertices in object space).
Otherwise,
(c) If all 4 fragment corners are entirely inside the triangle, then the triangle covers the fragment entirely. Project the fragment corners back into object space with the inverse matrix and compute area of the resulting polygon (a trapezoid, I think).
Otherwise,
(d) The triangle could be partially overlapping the fragment. For each side of the fragment, construct a line along the fragment edge, and cut off the part of the triangle that is “outside” of that line. Whatever part of the triangle is left afterwards (possibly nothing) is inside the fragment. Project the points of the resulting polygon back into object space and compute the area of the polygon.

[The same sort of polygon clipping algorithm is found in the vertex processing stage, where it is used to clip polygons to fit inside the view volume. Although now typically done by fixed-function graphics hardware, this used to be done in software, so you can find books that will explain clipping algorithms in detail. Traditionally, the initial triangle is broken into multiple triangles during the clipping process rather than being maintained as a true polygon, which is especially sensible when the next processing stage only understands how to draw triangles anyway. Also, in practice, the case (d) logic handles case (c) as well.]
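
To make case (d) concrete, here is a rough CPU-side sketch of the clip-then-measure step: Sutherland-Hodgman clipping against the pixel rectangle, then the shoelace formula for the area. In the method above this logic would be ported to GLSL, and the surviving vertices would be pushed back through the inverse transform before the area is taken; the sketch keeps everything in one 2D space to stay short.

    #include <vector>
    #include <cmath>

    struct P { double x, y; };

    // 2D cross product of (a - o) and (b - o); positive means b is left of the edge o->a.
    static double cross(const P& o, const P& a, const P& b) {
        return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
    }

    // Clip 'poly' against the half-plane to the left of the directed edge a->b.
    static std::vector<P> clipEdge(const std::vector<P>& poly, P a, P b) {
        std::vector<P> out;
        for (size_t i = 0; i < poly.size(); ++i) {
            P cur = poly[i];
            P nxt = poly[(i + 1) % poly.size()];
            bool curIn = cross(a, b, cur) >= 0.0;
            bool nxtIn = cross(a, b, nxt) >= 0.0;
            if (curIn) out.push_back(cur);
            if (curIn != nxtIn) {
                // The edge crosses the clip line: add the intersection point.
                double t = cross(a, b, cur) / (cross(a, b, cur) - cross(a, b, nxt));
                out.push_back({cur.x + t * (nxt.x - cur.x), cur.y + t * (nxt.y - cur.y)});
            }
        }
        return out;
    }

    // Shoelace formula for the area of a simple polygon.
    static double polyArea(const std::vector<P>& poly) {
        double a = 0.0;
        for (size_t i = 0; i < poly.size(); ++i) {
            P p = poly[i], q = poly[(i + 1) % poly.size()];
            a += p.x * q.y - p.y * q.x;
        }
        return std::fabs(a) * 0.5;
    }

    // Area of triangle (v0,v1,v2) falling inside the pixel whose lower-left window
    // coordinate is (px, py), i.e. the footprint [px, px+1] x [py, py+1].
    double triangleAreaInPixel(P v0, P v1, P v2, double px, double py) {
        std::vector<P> poly = {v0, v1, v2};
        P c0{px, py}, c1{px + 1, py}, c2{px + 1, py + 1}, c3{px, py + 1};
        poly = clipEdge(poly, c0, c1);                      // bottom edge
        if (!poly.empty()) poly = clipEdge(poly, c1, c2);   // right edge
        if (!poly.empty()) poly = clipEdge(poly, c2, c3);   // top edge
        if (!poly.empty()) poly = clipEdge(poly, c3, c0);   // left edge
        return poly.empty() ? 0.0 : polyArea(poly);
    }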

Repeat the process for each triangle you want to draw – update the fragment shader uniforms with the new triangle details, and draw another fullscreen quad. Of course, you don’t really need to draw a whole fullscreen quad – but you have to be sure to cover any pixels touched (even a little bit) by the triangle. Imagine a triangle bounding box with 1-pixel margins, or a triangle based on the original triangle with the edges offset outward by one pixel. The vertex shader could be leveraged to help with this part.

To be really robust, you would need to implement near (and far?) plane clipping before the “perspective division” in step one, or risk divide-by-zero or other anomalies if polygon vertices touch or cross the eye plane. This would potentially split the original triangle into several triangles that need to be processed in steps 2 and 3.

I’m sure the concept could be refined considerably, and explained better, but hopefully this is enough to convey the basic idea.

===============================

So far as I know, and assuming I understand your goal correctly, this is the only method of doing exactly what you want using OpenGL. On even slightly older graphics hardware I doubt this would be possible at all.

If you are undeterred by the sheer magnitude of this project, I salute you!

I didn’t read the whole discussion, so this is a wild guess.
Try this…

  1. Create an RGB32F texture and store all the vertices in it: xyz -> rgb.
  2. Create an int32 texture and store the triangle indices: abc -> rgb.
  3. Create a render target… for example float32.
  4. Write a shader that does the following:
  • fetch 3 indices (just one texel from the index texture)
  • fetch the 3 vertices (one texel each from the vertex texture). This will require some math to convert an index to a texture coordinate
  • apply the modelview transformation to those 3 vertices
  • apply the projection transformation
  • transform to screen space (using the viewport information)
    Now you have 3 screen coordinates. Use this math http://www.mathopenref.com/coordtrianglearea.html to calculate the area of the triangle (a helper is sketched after this list). The area should be the number of pixels covered by that triangle.
  • Write the result to the output.
  5. Using the above prerequisites, bind the render target to an FBO, bind the two textures, bind the shader, and render a screen-aligned quad. The result is a texture with the area of each triangle.
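
For reference, the formula behind that link is just the coordinate (shoelace) form of the triangle area; something like this, assuming the three vertices are already in window (pixel) coordinates:

    #include <cmath>

    // Area of a triangle from its three 2D vertices (shoelace formula). With vertices in
    // window coordinates the result is in pixel units, which is only an estimate of how
    // many pixels the rasterizer will actually fill.
    double triangleAreaPixels(double ax, double ay, double bx, double by, double cx, double cy) {
        return std::fabs(ax * (by - cy) + bx * (cy - ay) + cx * (ay - by)) * 0.5;
    }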

Blah for being a C++/Java guy trying to RAPIDLY learn / perform OpenGL / GLSL lol

As always, thanks again for everyone’s input and for putting up with my noobishness. I’m poring through all of this information. Even stuff I might not end up using is helpful, as I’m still fairly novice at this stuff and it helps me understand it better and get a feel for the reach / limitations of OpenGL / GLSL.

When you guys say to render the image at a higher resolution, I assume you mean to increase the image size, the logic being that now the object will be composed of more pixels at farther distances. Unfortunately, the image size is more or less fixed for this app (for the time being, at least), but if I did go with that (for the sake of understanding where we’re going with this) wouldn’t this just buy me more time until it reached a point at which it repeated this problem?

Or is there some way of rendering the object at increased resolution in memory without rendering it to the screen that way? I think I’ve heard of such things, but I don’t know much about the details… :stuck_out_tongue:

That was written in a bit of haste and is pretty rife with typos and skipped notation; the top should be 400 m sq, as it’s the surface area. After squaring out the bottom, you get (400 m sq) / (0.16 m sq) = 2,500. The 50x50 was just citing my previous size for the object at that distance and showing the results matched (though it’s not always that exact, but usually in the ballpark, and this really only works for a quad perfectly tangent to the LOS.)

Yeah… I haven’t read much on OpenCL. Heck, I don’t really know OpenGL especially well, to be honest. The only issue with converting to OpenCL is that there’s beaucoups of code already written here in OpenGL… I don’t think suggesting a conversion would make me popular. Hahahaha. But in the long run, I agree with you, the precision and the rasterization are inevitably going to be the killer. Maybe OpenCL will have something available… hm.
