Vertex position in fragment shader

I know this is a dumb question, but I am kind of stuck after a whole day of solving unrelated problems. So…
In the application I create a vertex after glBegin(GL_POINTS).
In my vertex shader I set the point size to something, let’s say 20, and in the gragment shader I am trying to cut off the corners to convert the square to a circle. So here are two questions:

  1. How do I find the correct point size in the vertex shader, taking into account the current perspective projection?

  2. In the fragment shader, how do I map the view coordinates of the vertex to the fragment coordinates?

The problem with point sprites is that they are kind of a halfway house between real geometry and a 2D blit. To affect point size in shaders you need to ape the fixed-function GL_POINT_DISTANCE_ATTENUATION behaviour, which is covered on most shader tutorial sites, such as the Lighthouse3D one.
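For reference, that attenuation can be reproduced in a vertex shader along these lines. This is only a sketch: the coefficients `a`, `b`, `c` and `baseSize` are assumed uniforms standing in for whatever your application supplies, and glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) must be set on the application side for gl_PointSize to take effect.

```glsl
// Vertex shader sketch of GL_POINT_DISTANCE_ATTENUATION:
// the point size is scaled by 1/sqrt(a + b*d + c*d*d),
// where d is the eye-space distance to the vertex.
uniform float a, b, c;     // attenuation coefficients (assumed uniforms)
uniform float baseSize;    // unattenuated point size in pixels (assumed)

void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    float d = length(eyePos.xyz);
    gl_PointSize = baseSize * inversesqrt(a + b * d + c * d * d);
    gl_Position = gl_ProjectionMatrix * eyePos;
    gl_FrontColor = gl_Color;
}
```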

As for “perspective projection”, if I understand you correctly you would be in a world of hurt with quite heavy maths in the fragment shader, which would slow things down IMO. I am not sure how good the results would be, or whether the effort would be worth it.

So it is much better to use QUADS, as any perspective will be calculated correctly intrinsically. With triangle strips or indexed triangles you can reduce the vertex count, though vertex volume on modern hardware is not going to be a huge issue anyway. With geometry shaders you can even describe them with a single point.

If OTOH you simply want to create a smoothed point or circle, then you can use something like the following in your fragment shader…

// gl_TexCoord[0].st holds the sprite coordinate (0..1) when
// GL_COORD_REPLACE is enabled; remap it to [-1, 1] around the centre.
vec2 mod_coord = gl_TexCoord[0].st * 2.0 - 1.0;
gl_FragColor = gl_Color * (1.0 - length(mod_coord));

If you want to read up on all of this, I would recommend the http://www.lighthouse3d.com/ site I mentioned earlier.

In my vertex shader I set the point size to something, let’s say 20, and in the gragment shader I am trying to cut off the corners to convert the square to a circle.

You’ve misunderstood what a point sprite is. Geometry shaders don’t see the implicit quad behind point sprites. They, like everything else, get one vertex per point sprite. They see a list of points. They can change the size of the point sprite. They can change the position. They can even create more points, or cull some points. But they cannot change the shape of a point; that is hardcoded.

If your intention is to make square point sprites into circles, there are two ways to do this. One way is to use the fragment shader to discard fragments that are not in the “circle” of the sprite.

Another is to convert the points into a series of triangles in the geometry shader. I wouldn’t suggest this method, myself. Using geometry shaders to do this is almost certainly going to be more expensive than culling in the fragment shader or just sending a list of triangles directly.

  1. Agreed, but isn’t that the essence of the original post?
  2. Sure. The problem is that geometry shaders are not available everywhere yet, so I have no choice but to use only the vertex and fragment shaders.

I see what you’re saying. Somehow, I read “gragment” as “geometry”.

Anyway, it’s simple enough to use the gl_PointCoord to decide whether to discard a fragment or not. It doesn’t even take sine functions:


// gl_PointCoord runs 0..1 across the sprite; recentre on (0.5, 0.5).
vec2 location = gl_PointCoord - vec2(0.5, 0.5);
// Comparing the squared distance avoids a sqrt per fragment.
if (dot(location, location) > 0.25) discard;

0.25 is 0.5 squared, and 0.5 is the radius of a circle in point-space.

Thanks, Alfonse!
Somehow I managed to calculate the correct point size in the vertex shader, taking perspective into account, and combined with your suggestion about the fragment shader everything works like a charm. Now for the more involved task of doing the same with the other primitives I have; the most complex is a cylinder. I am tempted to render the cylinder as a line, set an appropriate line width, and then shape the ends and color it properly.
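For anyone following along, one common way to get a perspective-correct point size is to project a world-space radius through the projection matrix. This is a sketch, not necessarily what was done above; `worldRadius` and `viewportHeight` are assumed uniforms.

```glsl
// Vertex shader sketch: size the point sprite so it covers a sphere
// of worldRadius units at the vertex's depth.
uniform float worldRadius;     // assumed uniform: sphere radius (world units)
uniform float viewportHeight;  // assumed uniform: viewport height in pixels

void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * eyePos;
    // gl_ProjectionMatrix[1][1] is cot(fovy/2). The projected diameter
    // in NDC is 2*r*P[1][1]/w; multiplying by half the viewport height
    // converts that to pixels.
    gl_PointSize = viewportHeight * worldRadius
                 * gl_ProjectionMatrix[1][1] / gl_Position.w;
    gl_FrontColor = gl_Color;
}
```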

Couldn’t you just, you know, render a cylinder? With polygons and stuff?

That’s what the application currently does. But once the number of primitives like cylinders, cones, etc. exceeds about 50,000, most video cards just choke.
If there is another way to render the same cylinder really fast without resorting to shaders, I am all for it. But I do not know how to do it. There is a tremendous number of tricks to speed up the existing rendering pipeline with regular OpenGL calls, but I believe we have already approached the limit.

And besides, if two cylinders, implemented as a limited number of quads each, are connected in a T-like fashion, the connection looks really ugly when the picture is zoomed in.

But once the number of primitives like cylinders, cones, etc. exceeds about 50,000, most video cards just choke.

I’d suggest using instancing.
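With instancing, one draw call (e.g. glDrawArraysInstanced) renders N copies of the same cylinder mesh, and the shader picks up per-instance data via gl_InstanceID. A hedged sketch, assuming per-instance positions are passed as a uniform array (a texture buffer or instanced attribute would scale better):

```glsl
// Vertex shader sketch for instanced cylinders (GL_EXT_draw_instanced
// or GL 3.1). gl_InstanceID selects this copy's transform.
#version 120
#extension GL_EXT_draw_instanced : enable

uniform vec3 instanceOffsets[256];  // assumed per-instance positions

void main()
{
    vec4 pos = gl_Vertex + vec4(instanceOffsets[gl_InstanceID], 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * pos;
    gl_FrontColor = gl_Color;
}
```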

If there is another way to render the same cylinder really fast without resorting to shaders, I am all for it.

Wouldn’t the solution you propose be “resorting to shaders?”

And besides, if two cylinders, implemented as a limited number of quads each, are connected in a T-like fashion, the connection looks really ugly when the picture is zoomed in.

It certainly isn’t going to look better with your “not-quite-cylinders.” At least actual cylinders have proper collisions between them; the impostors you’re trying to create would not.

If you are going to go the impostor route, I would suggest image-based impostors. Have a texture with a series of cylinder images on it. Each image in the set would be drawn from a certain perspective, from 0 degrees (side-view) to 90 degrees (top-view). Then use shader logic to determine which image from the image set you should use for the given viewing angle.
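Selecting the image from the viewing angle can be quite cheap in the fragment shader. A sketch, assuming the N images are packed side by side in one texture and the viewing angle arrives as a uniform:

```glsl
// Fragment shader sketch: pick one of N pre-rendered cylinder images
// based on viewing angle (0 = side view, pi/2 = top view).
uniform sampler2D atlas;    // assumed: N images packed horizontally
uniform float numImages;    // assumed: number of images in the atlas
uniform float viewAngle;    // assumed: viewing angle in radians, 0..pi/2

void main()
{
    // Nearest image index for this angle.
    float idx = floor(viewAngle / (3.14159265 * 0.5)
                      * (numImages - 1.0) + 0.5);
    // Offset the sprite's 0..1 coordinate into that image's atlas slot.
    vec2 uv = vec2((idx + gl_TexCoord[0].s) / numImages, gl_TexCoord[0].t);
    gl_FragColor = texture2D(atlas, uv);
}
```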

Again: instancing support is still quite limited.

It certainly isn’t going to look better with your “not-quite-cylinders.” At least actual cylinders have proper collisions between them; the impostors you’re trying to create would not.

Let me respectfully disagree here.
Since a fragment shader has control over every pixel, it is certainly possible (and not difficult at all) to create an ideal intersection of two primitives. And if one knows what one is doing, the “impostor” approach is always going to look better.
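The per-fragment depth write that makes such intersections resolve correctly looks roughly like the following. This is a sketch illustrated with a sphere impostor (the cylinder case is analogous but needs a ray/cylinder intersection); `eyeZ` and `radius` are assumed inputs, and the default [0, 1] depth range is assumed.

```glsl
// Fragment shader sketch: write the analytic depth of the impostor
// surface so the depth test produces exact intersections.
varying float eyeZ;      // assumed: eye-space z of the sprite centre
uniform float radius;    // assumed: impostor sphere radius

void main()
{
    vec2 p = gl_PointCoord * 2.0 - 1.0;   // [-1, 1] across the sprite
    float r2 = dot(p, p);
    if (r2 > 1.0) discard;                 // outside the silhouette
    float nz = sqrt(1.0 - r2);             // surface height, 0..1
    float ze = eyeZ + radius * nz;         // eye-space depth of surface
    // Project eye-space depth to window depth in [0, 1]:
    float clipZ = gl_ProjectionMatrix[2][2] * ze
                + gl_ProjectionMatrix[3][2];
    gl_FragDepth = 0.5 * (clipZ / -ze) + 0.5;
    gl_FragColor = gl_Color * nz;          // cheap diffuse shading
}
```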

Again: instancing support is still quite limited.

There are two kinds of instancing: the shader kind and the array kind. Unless you’re targeting pre-DX9 hardware, one or the other will be supported.

And if you are targeting pre-DX9, then you’re going to find that fragment shader functionality on such hardware to be exceedingly limited.

Since a fragment shader has control over every pixel, it is certainly possible (and not difficult at all) to create an ideal intersection of two primitives.

But the fragment shader does not have complete control. It can output colors and even depth, but changing the number of samples (for example) is not possible. So there would be no MSAA among impostors.

And if one knows what one is doing, the “impostor” approach is always going to look better.

Outside of the above MSAA thing. And I thought that the reason for going the impostor route was performance, not visual quality?