# Number of pixels in a fragment - possible to calc?

So… I’m not an especially great graphics programmer, so please bear with me. I’ve got a series of what might seem like rambling and/or stupid questions, but I’ve just been dropped into this head first and am kind of at a loss as to how to accomplish this.

Is there any possible way - in the fragment shader - to determine the area (i.e. the number of individual pixels it contains) of the fragment being worked on?

There’s another issue we’d like to work out but I’m having trouble describing it… Imagine it this way…

Take a 10 by 10 quad. Define some arbitrary point p(x,y) existing somewhere within that 10x10 space. I need a way to know (again in the fragment shader):
r = distance from p to center of fragment

From the little bit that I know about shading languages (which is VERY little), I don’t know if either of these questions are possible… does anyone know a way to accomplish it? I’m sorry if these questions are a bit vague… I’ll do my best to explain better if anyone requests it. Any suggestions or guidance will be greatly appreciated.

Is there any possible way - in the fragment shader - to determine the area (i.e. the number of individual pixels it contains) of the fragment being worked on?

Presumably you mean the number of samples. Unless you’re doing multisampling, each fragment’s outputs will be written to a single, specific pixel.

There’s no current way to know this, as far as I’m aware. But it shouldn’t be something you should be concerned with anyway.

Take a 10 by 10 quad. Define some arbitrary point p(x,y) existing somewhere within that 10x10 space. I need a way to know (again in the fragment shader):
r = distance from p to center of fragment

This is easy. Fragment shaders have access to gl_FragCoord, which is the fragment’s window-relative coordinate. So long as the arbitrary point in question is in window-relative coordinates, you can just do a distance between the two points.
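In GLSL this really is a one-liner, `float r = distance(gl_FragCoord.xy, p);`, since `gl_FragCoord.xy` holds the fragment’s window-space center (the integer pixel coordinate plus 0.5 by default). Here is a Python sketch of the same arithmetic; the function names are illustrative:

```python
import math

def frag_center(px, py):
    """Window-space center of the pixel at integer grid coords (px, py);
    this is what gl_FragCoord.xy holds by default."""
    return (px + 0.5, py + 0.5)

def distance_to_point(px, py, p):
    """r = distance from window-space point p to the fragment's center."""
    cx, cy = frag_center(px, py)
    return math.hypot(cx - p[0], cy - p[1])
```

The only catch, as noted above, is that the point p must be in the same window-relative coordinate system.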

Thanks so much for your fast reply! I really appreciate the quick help!

Unfortunately we sort of are. Well… someone else is, and it’s trickled down to me. I can’t really get into specifics because of the project, but the idea is that we need to integrate the value of something over the area of the fragment. The value changes over that area (decreasing as you move away from that point p I described) - very slightly, but in what we’re doing even slight differences can have a moderate impact. It MIGHT not even change at all over the size of a fragment, but I’d like (if there’s a way) to figure out how to do it, so that if they decide they’re hell-bent on implementing it, I have it ready. If it’s just not possible, of course, then it’s just not possible.

This is easy. Fragment shaders have access to gl_FragCoord, which is the fragment’s window-relative coordinate. So long as the arbitrary point in question is in window-relative coordinates, you can just do a distance between the two points.

This is good… this might be exactly what I need. So I’m at least one for two! haha.

Well, actually you can enforce the number of samples taken for shading. Perhaps you can check out the new extension specs where you may find more info, for instance http://www.opengl.org/registry/specs/ARB/sample_shading.txt

Yes, I think you should look into that!

That doesn’t help. What he’s looking for is the area that a particular fragment takes up, not how many samples the renderer happens to use. Making the rasterizer create more samples will not cause the primitive to take up more area in a pixel.

True, but from the spec:

"
This also extends the shading language to allow control over the
sample being processed. This includes built-in fragment input
variables identifying the sample number and position being processed
when executing fragment shaders per sample.
"

I think they may need something like that.

I’m under the impression that you might have your terms somewhat mixed up. A fragment always covers exactly one pixel. Primitives (triangles or quads, for example) are split up into fragments by the rasterizer to determine which pixels on the screen are covered by the primitive.

I assume you want to calculate the area of a primitive in the fragment shader? If you actually mean fragment, I’m unsure what you want/mean.

Again many thanks to everyone for all of the fast replies, this is hopefully going to make my life a lot easier.

Really, a fragment is always exactly one pixel? I was under the impression that it was a SMALL collection of pixels… hm…

Okay, here’s a general idea of what we’re trying to accomplish. Shine a light source on an object. In this case let’s keep going with that 10x10 quad I mentioned. (Before you ask, the built in OpenGL lights are out for this.)

The size / intensity of the light expands from the source toward the object; what we more or less need to determine is what intensity the light is at by the time it reaches the object. The engineers / optics guys / physicists are working on the formulas (formulae?) for that.

What they came up with - and gave to me, the non graphics programmer, to implement (trickle down responsibility haha) - was that while they could come up with the functions, to get an accurate value per fragment they would need to know the area of the fragment so they could integrate the intensities of the light over that area.

I suppose if every fragment is only one pixel, that’s probably not such a big deal.

One thing I had a question about, though: Alfonse said above that gl_FragCoord is window-relative… does that mean it’s stored in 2D space rather than 3D? For this to work I think I’m going to need to know the fragment’s 3D location…

I think this is generally true, unless you have SSAA (a supersampling antialiasing rasterization mode) enabled, in which case it’s smaller than a pixel. Fragment = the shading rate.

And at least conceptually (if not officially) the fragment area is reduced at the edges of triangles to be only the portion of the pixel (or sample) within the triangle. That’s how MSAA (multisampling) works: each fragment has a coverage value with SAMPLES bits and has SAMPLES depth values associated with it. You can use that fact in shading by using centroid interpolation (see this). Come to think of it, if doing MSAA you could probably use centroid to approximate the percentage of the pixel you’re computing a shading result for this way.

Quoting chapter 3 of the spec (2.1):

A grid square along with its parameters of assigned colors, z (depth), fog coordinate, and texture coordinates is called a fragment; the parameters are collectively dubbed the fragment’s associated data. A fragment is located by its lower left corner, which lies on integer grid coordinates. Rasterization operations also refer to a fragment’s center, which is offset by (1/2, 1/2) from its lower left corner (and so lies on half-integer coordinates).

From this one can deduce that the area covered by a fragment will always be 1 in grid coordinates (i.e. pixels).

This does not change with MSAA as MSAA only changes what is stored per fragment:

Each pixel fragment thus consists of integer x and y grid coordinates, SAMPLES color and depth values, SAMPLES sets of texture coordinates, and a coverage value with a maximum of SAMPLES bits.

At least that’s what I get from reading the specs…

Not even sure if anyone is still following this thread anymore. I really hope so, because I’m really close to something but not sure how to overcome a small hurdle.

Once I knew that a fragment is always one pixel, the surface area per fragment was fairly simple. If the object is of size NxN (for simplicity), measure the number of pixels the object takes up in one direction at a given distance, and call it P. The width per pixel is then roughly N/P. For example, if the object is 10 meters x 10 meters and is drawn at 100 px x 100 px on screen, each pixel represents 0.1 meters horizontally and 0.1 meters vertically, or 0.01 sq meters. (Reversing the process: 0.01 sq meters * (100x100) = 100 sq meters, the original surface area.)
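That arithmetic can be sketched like so (Python, math only; the function name and parameters are illustrative):

```python
def area_per_pixel(object_size_m, object_size_px):
    """Surface area in square meters represented by one pixel, for an
    object object_size_m meters across that covers object_size_px pixels
    across on screen."""
    meters_per_pixel = object_size_m / object_size_px
    return meters_per_pixel ** 2
```

e.g. a 10 m object drawn 100 px wide gives 0.01 sq meters per pixel, matching the example above.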

A crudely simplistic example but enough that repeating it over a sample of several dozen distances I was able to develop a linear trend line (distance * constant) that accurately predicted the per pixel surface area within a few decimal places for several ranges. (We’d actually like to get it down to lower than this if possible, and hopefully what I get to next will help with that).

The problem comes when the object is placed at the extrema of its visibility. Say the object stops being visible at distance D. Prior to that, the object remains at some small size MxM (in this case, 2x2) over a significant range of distances, meaning the surface area per pixel remains the same (plateaus, more or less) over that span until the object disappears. (It actually starts to spike and plateau a little before this, but this is where it gets extreme.)

So the problem is that toward the end of visibility an object no longer follows this trend line. While this would be okay if we were only representing this NxN test object, obviously we’re representing tons of objects and have no way of knowing in the fragment shader which object we’re working on. So while an NxN object will depart from the trend line at some distance D-delta, an object of size 2Nx2N might not depart from the trend line until something like 2D-delta (or something close; I haven’t done the exact math on that part).

I’m PRETTY SURE the reason for the plateau is that when the object is placed at an extreme distance, it starts taking up fractions of pixels: something like 1.75px x 1.75px gets rounded up to 2x2; move it back and it becomes 1.66px x 1.66px, still rounded up to 2x2, until eventually it reaches 1x1 and finally disappears.
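If that guess is right, the plateau falls out of simple arithmetic. Below is only a sketch of the hypothesis in Python; the round-up coverage model is an assumption for illustration, not how a real rasterizer decides coverage:

```python
import math

def rasterized_side(footprint_px):
    """Crude model of the rounding: a fractional footprint of s x s pixels
    still lights up roughly ceil(s) x ceil(s) whole pixels."""
    return math.ceil(footprint_px)

def inferred_area_per_pixel(object_size_m, footprint_px):
    """Area per pixel inferred from the rounded pixel count; it stays
    constant while the true footprint keeps shrinking - the plateau."""
    n = rasterized_side(footprint_px)
    return object_size_m ** 2 / (n * n)
```

Under this model, both a 1.75 px and a 1.66 px footprint light up 2x2 pixels, so the inferred area per pixel is identical at both distances - which is exactly the plateau described above.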

The only solution to this problem I can think of is to check whether a pixel is only partially covered, if GLSL contains any kind of support for that, but I would imagine which pixels were on and which were off would already have been determined during rasterization (just a guess, I’m not especially great at this…)

I don’t really know what else could be done. Any guidance / advice would be GREAT at this point.

EDIT: as an aside, we’d like to try and avoid using antialiasing if at all possible for speed reasons. But if that’s the only way to achieve this… then maybe we’ll have to switch over to that.

Alfonse said above that gl_FragCoord is window relative… does that mean it’s stored in 2D space rather than 3D? For this to work I think I’m going to need to know the fragment’s 3D location…

With 2D window space + depth, you can reconstruct the full 3D position, as is often done in “deferred rendering”.
With depth you will know the exact distance, and can infer the surface area of the fragment in 3D space, without any plateau.
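A sketch of that reconstruction, assuming a symmetric gluPerspective-style projection (Python, math only; every function and parameter name here is illustrative, and `depth` is the [0, 1] value read from the depth buffer):

```python
import math

def view_space_position(frag_x, frag_y, depth, width, height,
                        fovy_deg, aspect, near, far):
    """Reconstruct a fragment's view-space position from gl_FragCoord.xy,
    the depth-buffer value, the viewport size, and the parameters of a
    symmetric perspective projection."""
    # Window coords -> normalized device coords in [-1, 1]
    ndc_x = 2.0 * frag_x / width - 1.0
    ndc_y = 2.0 * frag_y / height - 1.0
    ndc_z = 2.0 * depth - 1.0
    # Invert the projection's depth mapping (z_view is negative, in front
    # of the camera): ndc_z = (f+n)/(f-n) + 2fn / ((f-n) * z_view)
    z_view = 2.0 * far * near / ((far - near) * ndc_z - (far + near))
    # Invert the x/y terms of the projection matrix
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    x_view = -z_view * ndc_x * aspect / f
    y_view = -z_view * ndc_y / f
    return (x_view, y_view, z_view)
```

The distance to the camera is then just the length of the returned vector, and it varies smoothly with depth rather than in whole-pixel steps.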

Thanks for your reply. I’m afraid I don’t completely follow though. Can you elaborate on what you mean a little? I think I understand - that you’re saying I can get the 3D coordinates of the fragment from the 2D coordinates and the depth - but what do you mean about the plateau? The plateau issue I was referring to had to do with partial pixels being rounded up to full pixels (I THINK, don’t know that for sure)… could you help me follow what you meant a little better?

Honestly, you write too much; I am not even sure what you actually want to achieve, so it’s hard to help you.

Care to post some diagram/picture ?

If you want to avoid antialiasing, you can just render things at a higher resolution to gain more precision. For example, render your NxN patch at 2Nx2N and your problems will be postponed.

This can easily be done by making a framebuffer object that has a size twice as large. For example, if you were using a framebuffer at first that was 400x400 pixels, now use one that has 800x800 pixels. Your NxN object will now cover 2Nx2N pixels and thus you will gain precision when the object is so small that it almost disappears.

But the issue will still occur eventually, yes? Even if I quadrupled the resolution, eventually the object is going to get into the range where it takes up fractions of pixels, and I’m going to end up with the same size over a long span of distances?

Come to think of it, this is probably happening a lot at other ranges and I just wasn’t noticing it. The pixel count I’m sure isn’t changing if I move the range by a delta of something small like 1 meter. Hm. I really need some way to know what percentage of a pixel is being filled, or something similar. Some way to gauge this… but I don’t know if such a thing exists.

Heh. Okay, I’ll try. We need to know the surface area represented by each pixel of any object, at any distance from the camera. The problem is that the fragment shader (to my knowledge) has no concept of objects, just pixels.

Here’s a crudely drawn set of slides depicting the problem. The images are scaled down really small here on the forums, BUT you can right click and view full image to see them.

(And yes, the formula I mentioned in the slides works regardless of object size - it works for a 5x5, 10x10, 40x40, etc. - UNTIL the object reaches its vanishing point.)

Like I said, if there’s a way to determine if a pixel is partially filled or something like that, it would more or less solve the problem, but I’m not aware of anything that exists that can tell me such a thing.

What about sending the object size as a uniform, so that the shader will know the object size?

That would work if there was always only one object, but ultimately there will be multiple objects, dynamically created. I’m not sure how I could do that with multiple objects… though I’ll be the first to admit I’m not great at GLSL.

Update the uniform for each object.

Or have that uniform as part of a shared Uniform Buffer Object.