Barycentric coordinates and more

This suggestion is relatively straightforward: expose the barycentric coordinates of a fragment in a useful way. I propose the following details:
[ol]
[li]a built-in vec3 giving those coordinates (s0, s1, s2), defined in the fragment shader[/li][li]a mechanism to fetch the values of a fragment shader input for each vertex of a triangle. One possible method is to define a function U(x, I), where x is the name of an input to the fragment shader and I is 0, 1 or 2, signifying which vertex of the triangle to fetch from[/li][/ol]
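As a sketch, the proposed interface might look like the following in a fragment shader (the names gl_BaryCoord and U are entirely hypothetical; none of this is existing GLSL):

```glsl
// HYPOTHETICAL syntax -- neither gl_BaryCoord nor U() exists in GLSL.
// gl_BaryCoord would be a built-in vec3 (s0, s1, s2) with
// s0 + s1 + s2 == 1 over the interior of the triangle.
in vec4 color;            // an ordinary interpolated input
out vec4 frag_color;

void main(void)
{
    // (1) read the fragment's barycentric coordinates directly
    vec3 b = gl_BaryCoord;

    // (2) fetch the un-interpolated value of "color" at each vertex
    vec4 c0 = U(color, 0);
    vec4 c1 = U(color, 1);
    vec4 c2 = U(color, 2);

    // manual interpolation reproduces the usual value of "color"
    frag_color = b.x * c0 + b.y * c1 + b.z * c2;
}
```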

Before anyone jumps up and down and says one can get this via a geometry shader, I’d like to point out that such a system would be terribly inefficient: doing (2) above would induce a great many more in’s to the fragment shader, and doing (1) induces an extra in as well. On a related note, I also have this suggestion in mind for OpenGL ES [the GLES message boards are rather barren really, the last message was posted last July].

Naturally, this jazz above needs some additional tweaks to handle point and line rasterization (I suggest that s2 be made 0 for line rasterization and that U(x, 2) be an implementation-dependent undefined value).

The mentality of this suggestion is that those numbers are likely sitting around anyways (at the very least the values of (2) are around, though the barycentric coordinates of a primitive may or may not be explicitly calculated).

You’re not the first to think about this :slight_smile:

Why not a vec3 Barycentric(vec3 ref, vec3 v1, vec3 v2, vec3 v3) that returns, as a vec3, the three barycentric coordinates of ref relative to the triangle (v1, v2, v3)?
(This could also return just a vec2, since the z coordinate returned is always 1 - x - y.)

Or a simpler vec2 Barycentric(), without arguments, if we are in a fragment shader, since v1, v2, v3 and the ref coordinates are already implicitly known at that stage :slight_smile:
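For reference, such a Barycentric() helper can be written in plain GLSL today using signed areas (this is just the standard formula, not an existing built-in):

```glsl
// A user-space sketch of the suggested helper, using 2D signed areas.
// Only the xy coordinates matter for the weights; the z component of
// the result is always 1.0 - x - y, which is why a vec2 return suffices.
float signedArea(vec2 a, vec2 b, vec2 c)
{
    return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

vec3 Barycentric(vec3 ref, vec3 v1, vec3 v2, vec3 v3)
{
    float area = signedArea(v1.xy, v2.xy, v3.xy);
    float s0 = signedArea(ref.xy, v2.xy, v3.xy) / area;
    float s1 = signedArea(v1.xy, ref.xy, v3.xy) / area;
    return vec3(s0, s1, 1.0 - s0 - s1);
}
```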

The mentality of this suggestion is that those numbers are likely sitting around anyways (at the very least the values of (2) are around, though the barycentric coordinates of a primitive may or may not be explicitly calculated).

I’d be interested to see evidence that barycentric coordinates are “sitting around anyways” in fragment-shader-accessible memory, across multiple iterations of various hardware. Interpolation is handled by the rasterizer, not by the fragment shader. And I’m guessing that it doesn’t even really use barycentric coordinates to do interpolation.

I’d be interested to see evidence that barycentric coordinates are “sitting around anyways” in fragment-shader-accessible memory, across multiple iterations of various hardware. Interpolation is handled by the rasterizer, not by the fragment shader. And I’m guessing that it doesn’t even really use barycentric coordinates to do interpolation.

I did clearly state that the barycentric coordinates may or may not be hanging around. However, the original numbers, those from the vertices of a primitive, most certainly are. If the barycentric coordinates are not hanging around, then for an implementation to support exposing them, a hardware interpolator would likely get used for such a shader. In truth, I strongly suspect that most hardware implementations do NOT have the barycentric coordinates hanging around explicitly, and instead the rasterizer computes the coefficients for the interpolation of each varying directly. At the very least, supporting this from GL rather than via geometry shaders is worthwhile: the geometry shader route would force a geometry shader to be present, and in GLES land there is no geometry shader. Even in GL land, geometry shaders, even those that only emit one triangle, are nasty mean things for many implementations because they have SO much input data.

This is not exactly true. A lot of hardware today actually does so-called “pull-model interpolation”, which means that the interpolation actually happens in the fragment shader: the main inputs are the barycentric coordinates, which are then used to interpolate the per-vertex attributes. So what kRogue requested is actually quite possible, at least on some hardware.
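Conceptually, pull-model interpolation replaces the fixed-function interpolator with shader arithmetic along these lines (a schematic sketch, not any particular ISA):

```glsl
// Schematic pull-model interpolation for one attribute. The hardware
// hands the shader two barycentric weights (i, j) per fragment, and
// the per-vertex attribute values a0, a1, a2 sit in on-chip memory.
vec4 pullInterpolate(float i, float j, vec4 a0, vec4 a1, vec4 a2)
{
    // Equivalent to a0*(1-i-j) + a1*i + a2*j, written in the
    // "value plus deltas" form that interpolation instructions favor.
    return a0 + i * (a1 - a0) + j * (a2 - a0);
}
```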

Update: IIRC, Fermi and Evergreen, i.e. practically all DX11 GPUs, do support pull-model interpolation.

AMD does it that way on the Southern Islands chips, according to their open ISA documents:

Pixel shaders use LDS to read vertex parameter values; the pixel shader then interpolates them to find the per-pixel parameter values.

The GPU has some dedicated interpolation instructions for the fragment shader.

However the original numbers, those from the vertices of a primitive most certainly are.

How do you know that? Why would it have those original numbers, when all shading languages just use the interpolated values? The rasterizer may have them, but that doesn’t mean the fragment shader does. And it doesn’t mean the rasterizer can be modified to pass this data along without direct hardware support.

in GLES land, there is no geometry shader

All of the information currently available suggests that such information is only available in DX11-style hardware. So I rather doubt that any (non-NVIDIA) GLES hardware has this stuff available.

Update: IIRC, Fermi and Evergreen, i.e. practically all DX11 GPUs, do support pull-model interpolation.

Are we forgetting about Intel’s DX11 GPUs?

How do you know that? Why would it have those original numbers, when all shading languages just use the interpolated values? The rasterizer may have them, but that doesn’t mean the fragment shader does. And it doesn’t mean the rasterizer can be modified to pass this data along without direct hardware support.

Take a look at what mbentrup and aqneup have said: AMD and NVIDIA hardware support “pull-model interpolation”. Since they can use barycentric coordinates to generate interpolated values, the values of the interpolants at the vertices of a triangle must then be available to the fragment shader (or, at the very least, can be computed by passing (1,0,0), (0,1,0) and (0,0,1) as the barycentric coordinates to get each value).

Are we forgetting about Intel’s DX11 GPUs?

I know at times I wish I could forget about Intel GPUs (especially on Linux; Intel’s GL implementation on Windows is not only a completely different code base, it is also much better). However, if Intel HW cannot do this but the idea is pushed forward anyway, then the following:

[ol]
[li]For the GL4 generation this is an ARB extension[/li][li]It gets promoted to core for GL5 if the next generation of Intel hardware has the capability[/li][/ol]

The extension has neat, interesting possibilities with regard to controlling (via discarding) how a primitive is filled. Moreover, a logical extension of this idea is the ability to specify, within the pipeline, which range(s) of barycentric coordinates are rasterized. Though this is much hairier to specify intelligently and in a way that keeps the hardware happy.
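As an illustration of the fill-control point (assuming the hypothetical gl_BaryCoord built-in from the original suggestion), a single-pass wireframe could be had by discarding fragments away from the triangle’s edges:

```glsl
// HYPOTHETICAL: gl_BaryCoord is the proposed built-in, not real GLSL.
// A fragment lies near an edge when one of its barycentric coordinates
// is near zero; discarding everything else yields a one-pass wireframe.
out vec4 frag_color;

void main(void)
{
    float edge_distance = min(gl_BaryCoord.x,
                              min(gl_BaryCoord.y, gl_BaryCoord.z));
    if (edge_distance > 0.02)   // tune the threshold for line thickness
        discard;
    frag_color = vec4(0.0, 0.0, 0.0, 1.0);
}
```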