Blowing up point sizes, ehem

I’m trying to render a set of dots (with neighbouring dots spaced at equal distances) with GL_POINTS, but without any gaps between neighbouring points. Changing the point size was the first thing I tried, but as you know that gives no distance attenuation. So I played around with glPointParameter and set up distance attenuation, only to find out that the point size has an upper limit, so nearby dots would still have gaps between them (among other glitches). All in all, this approach (simply working with glPointSize) is not very satisfying.
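
For reference, the distance-attenuation setup I mean looks roughly like this; the coefficients are just example values, not my exact ones:

// Fixed-function point attenuation: the requested size is scaled by
// 1/sqrt(a + b*d + c*d*d) per vertex, but the result is still clamped to the
// implementation's maximum point size, which is exactly the limit I ran into.
GLfloat atten[3] = { 0.0f, 0.0f, 0.01f };   // a, b, c
glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, atten);
glPointParameterf(GL_POINT_SIZE_MIN, 1.0f);
glPointParameterf(GL_POINT_SIZE_MAX, 64.0f);
glPointSize(8.0f);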

I wondered whether some vertex or fragment shader could solve that problem (not a geometry shader, since I want to support a wide range of hardware… although that would obviously work).
Now, this is just a hobby project and I don’t have too much spare time to learn GLSL. So I’m not asking for directions to shader code or anything, just whether the problem is (efficiently) solvable with vertex/fragment shaders or not.

Thank you very much.

P.S. I don’t want to use cubes (or similarly complex geometry) because it blows up memory usage tremendously without actually being necessary for my point sets.

I’m trying to render a set of dots (with neighbouring dots spaced at equal distances) with GL_POINTS, but without any gaps between neighbouring points.

Don’t. Unless your point size is exactly 1.0, and your point positions are in the exact correct place, this won’t work.

You should use a texture to store your point data and render a simple quad to display it.

If I understand your problem correctly, use GL_VERTEX_PROGRAM_POINT_SIZE. See the NVidia CUDA demo with all the bouncing particles for source and a demo. Those get up to the full size of the screen, and they appear to be culled as quads (what you’d want) not points.

I don’t think I really understand what you mean by that. Are you talking about 3D textures?

I might not have been too clear about what I am doing.

I have a large set (a few million) of 3D points (similar to volume data…) to render. At the moment I’m doing this with a DisplayList and cubes rendered with GL_QUADS.
It doesn’t really matter to me whether they are rendered as perfect cubes or not, just that a connected neighborhood of points doesn’t have any holes.
But as it is I am sending a lot more data to the GPU than I’d really need to render my scene … provided there is a way to let OpenGL adjust point sizes properly, i.e. without size limits.

Thank you very much. Although I can’t try out the demo (no CUDA card) I guess I can still analyze the shader code and usage of GL_VERTEX_PROGRAM_POINT_SIZE.

Thank you very much. Although I can’t try out the demo (no CUDA card) I guess I can still analyze the shader code and usage of GL_VERTEX_PROGRAM_POINT_SIZE.
Yeah, it’s pretty simple. Just glEnable( GL_VERTEX_PROGRAM_POINT_SIZE ) and then set gl_PointSize in the vertex shader.

The default GL_POINT_SIZE_MAX is 63, but I think that’s only used when you’re letting the FFP compute the point size. Vertex shader gets to do what it wants.

I think that’s only used when you’re letting the FFP compute the point size. Vertex shader gets to do what it wants.

No. The point size range (maximum, etc) is true for shaders too.
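
(If you want to check what a given implementation allows, you can query the limits; this is a generic sketch, not code from this thread:)

GLfloat range[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, range);   // limits for non-antialiased points
printf("aliased point size range: %.1f .. %.1f\n", range[0], range[1]);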

Well, it actually worked.
I wrote a quick heightmap loader and made some screenshots:

Points:

Points close up (they can fill the whole screen now)

Cubes:


Shader source:


uniform float pointSize;
uniform float pointScale;
uniform vec3 cameraPosition;
void main()	{
	// Distance from the camera uniform to the untransformed vertex position.
	float dist = length(gl_Vertex.xyz - cameraPosition);
	// Perspective-style scaling: on-screen point size falls off with 1/distance.
	gl_PointSize = pointSize * pointScale / dist;

	gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);

	gl_FrontColor = gl_Color;
}

And before rendering:


glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
		
glUseProgram(renderEnv.shaderProgram);

// This requires "-DGL_GLEXT_PROTOTYPES" to be added to CFLAGS.
// pointScale folds the viewport height and the field of view into one factor,
// so the shader only has to divide by the camera distance.
glUniform1f(
	glGetUniformLocation(renderEnv.shaderProgram, "pointScale"),
	renderEnv.viewHeight / tan(renderEnv.scene.camera.fieldOfView * 0.5 * M_PI / 180.0f)
);
glUniform3f(
	glGetUniformLocation(renderEnv.shaderProgram, "cameraPosition"),
	renderEnv.scene.camera.position.x,
	renderEnv.scene.camera.position.y,
	renderEnv.scene.camera.position.z
);
// Full detail: 1.0
glUniform1f(glGetUniformLocation(renderEnv.shaderProgram, "pointSize"), 1.0f);

// Point drawing ...
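
The point drawing itself is just an ordinary GL_POINTS draw; for illustration, a plain vertex-array version (the names here are placeholders, not my actual code):

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, pointPositions);   // xyz triples
glDrawArrays(GL_POINTS, 0, pointCount);
glDisableClientState(GL_VERTEX_ARRAY);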

Now, two issues I still have to deal with:

- The larger the point size, the slower the FPS (I guess because of overdraw?)
- Standard lighting doesn’t seem to work any more.
It is actually slower than drawing cubes, which frankly surprises me a little bit. Do you have any idea how I might solve this problem?

I’m new to shaders so I assume the lighting problem occurs precisely because I use shaders and I’d have to implement the lighting myself. If that’s the case, how could I change my shader to implement standard OpenGL lighting?

Thank you very much in advance. I reckon these are very simple questions, so if you think I should rather post in the beginners’ forum, please tell me.

About lighting in GLSL: http://www.lighthouse3d.com/opengl/glsl/index.php?dirlightpix
You need to compute a normal for each point sprite, otherwise lighting cannot work. For example, average and renormalize the normals of the neighbouring triangles (building temporary triangles from the height grid should not be too hard).
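
On a regular height grid a quick shortcut (roughly what averaging the adjacent triangle normals gives you) is a central difference. Just a sketch; heightAt() and gridStep stand in for your own accessors:

float hl = heightAt(x - 1, z);
float hr = heightAt(x + 1, z);
float hd = heightAt(x, z - 1);
float hu = heightAt(x, z + 1);
// Up vector tilted by the height differences in x and z; 2*gridStep matches the grid spacing.
float nx = hl - hr;
float ny = 2.0f * gridStep;
float nz = hd - hu;
float len = sqrtf(nx * nx + ny * ny + nz * nz);
glNormal3f(nx / len, ny / len, nz / len);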

By the way, the screenshots have a nice “Comanche: Maximum Overkill” look. Is that intended?

That’s a pity.

Incidentally, for the GL 3.2 folks and for what it’s worth: VERTEX_PROGRAM_POINT_SIZE is replaced by PROGRAM_POINT_SIZE (as geometry shaders can set gl_PointSize too), and you no longer need to enable POINT_SPRITE to get auto-generated texture coordinates…
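
In code that boils down to something like this (sketch for a core-profile context):

glEnable(GL_PROGRAM_POINT_SIZE);   // GL 3.2 core name for the same enable

// In a GLSL 1.50 fragment shader the per-point coordinate is simply there,
// with no GL_POINT_SPRITE / GL_COORD_REPLACE setup:
//     vec2 uv = gl_PointCoord;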

You need to compute a normal for each point sprite, otherwise lighting cannot work. For example, average and renormalize the normals of the neighbouring triangles (building temporary triangles from the height grid should not be too hard).

By the way, the screenshots have a nice “Comanche: Maximum Overkill” look. Is that intended?

Well, I started it as a Voxel engine, so it is actually intended.
I wasn’t sure if everyone was familiar with it, and I’m not too sure whether this implementation could be used in a game, considering the memory requirements.

I’m approximating the normals with the following code:


// Sum the directions towards all empty (missing) neighbour cells; the total
// points away from the solid part of the volume.
int i;
int j;
int k;
Vector normal = {0.0f, 0.0f, 0.0f};
for (i = -1; i <= 1; i++) {
	for (j = -1; j <= 1; j++) {
		for (k = -1; k <= 1; k++) {
			if (voxelGetNeighbour(voxel, NULL, vectorLiteral((float) i, (float) j, (float) k)) == NULL) {
				normal.x += (float) i;
				normal.y += (float) j;
				normal.z += (float) k;
			}
		}
	}
}
normal = vectorNormalize(normal);
			
glNormal3f(normal.x, normal.y, normal.z);

…which works quite well, as long as you have real volumes (the inside of an object is solid) or the surface is somewhat thick.

(Of course I mean it works quite well with the cubes implementation.)

Thanks for the link!
I checked the tutorial, and although it’s a good start, it covers per-pixel lighting, which I don’t even need; simple per-vertex lighting will do. Additionally, the shaders described there only work for one specific type of light source. I assume that, to emulate standard per-vertex lighting, I’d have to handle EVERY light source according to its type (point, directional, spot), right?
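
For the simplest case I have in mind (a single directional light, per vertex), I imagine the shader would end up something like this untested sketch, reusing the built-in gl_LightSource state that glLight* still fills in:

uniform float pointSize;
uniform float pointScale;
uniform vec3 cameraPosition;
void main()	{
	float dist = length(gl_Vertex.xyz - cameraPosition);
	gl_PointSize = pointSize * pointScale / dist;
	gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);

	// Per-vertex diffuse for light 0, assumed to be directional (position.w == 0).
	vec3 n = normalize(gl_NormalMatrix * gl_Normal);
	vec3 l = normalize(gl_LightSource[0].position.xyz);
	float ndotl = max(dot(n, l), 0.0);
	gl_FrontColor = gl_Color * (gl_LightModel.ambient
	                          + gl_LightSource[0].ambient
	                          + gl_LightSource[0].diffuse * ndotl);
}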

That’s a pity.
Feature or bug, don’t know, but it works. The 3.2 spec only says gl_PointSize is clamped to implementation-specific limits. On recent NVidia GPUs, it appears there aren’t any.

My card is not even a recent one… it’s a GeForce 6200 (about 5 years old I guess)

Feature or bug, don’t know, but it works.

That’s not a feature. Working against the spec is never a feature, no matter how nice the result happens to be. If they wanted to do this, they should have made an extension or something.

They may not actually have the hardware to expose point size to a program, so they just transform the points into quads manually, which is also why the culling behaves differently.

NVIDIA needs to fix that.

I assume you mean culling as quads instead of points, not unbounded point size. Yeah, that’s likely an easy candidate to push up to spec level.

I assume you mean culling as quads instead of points, not unbounded point size.

That’s not a feature either. There should be no functional difference between having the shader compute the point size and using static point sizes. So if the point size is bounded when using static point sizes, it must be bounded in the same way when using shader point sizes. The spec specifically says this.

Does anyone know the reasoning behind this limit? Why allow different point sizes, but only up to an implementation-dependent limit?

Anyway, I just tried out my shader implementation on a different PC (with a GeForce 6600), and there it’s blazing fast. Not as fast as the cubes implementation, but close.

Does anyone know the reasoning behind this limit? Why allow different point sizes, but only up to an implementation-dependent limit?

Because, when point sizes were first being implemented, that made sense. Hardware could only make them so big.

<rant>

…which made the whole extension useless in general.

They should just set the maximum point-size to “real high” and be done with it.

Culling them on a point basis is also useless. I don’t want them to suddenly pop away just because their center has moved off the screen.

I always wondered why they can’t just transform a point-sprite into a billboarded quad and then handle it just like a manually generated quad. Today we can do that through geometry shaders, but why implement the point-sprites in such a useless way in the first place? And then why put that extension basically UNCHANGED into core???

</rant>
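
For what it’s worth, the geometry-shader route is short these days; a rough, untested sketch (it ignores aspect-ratio correction and assumes the size is given in NDC units):

#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float halfSizeNDC;   // half the quad size in NDC units (assumed uniform)
out vec2 texCoord;

void main()
{
	vec4 center = gl_in[0].gl_Position;   // clip-space position from the vertex shader
	vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
	                         vec2(-1.0,  1.0), vec2(1.0,  1.0));
	for (int i = 0; i < 4; ++i) {
		texCoord = corners[i] * 0.5 + 0.5;
		// Offsetting in clip space by corner * size * w yields a screen-aligned quad
		// with a constant size in NDC after the perspective divide.
		gl_Position = center + vec4(corners[i] * halfSizeNDC * center.w, 0.0, 0.0);
		EmitVertex();
	}
	EndPrimitive();
}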

Jan.

Because that would triple the memory consumption per sprite on the GPU for internal calculations. It would then be quite pointless to build something like a point-based rendering system, since the advantage would be lost.

I imagine the reason we still have point sprites has more to do with backward compatibility with the fixed function pipe - can’t cut them loose quite yet. You can roll your own, so I consider the case closed (though we retain the right to complain bitterly about point sprites as we see fit). Point sprites: there when you need 'em, there when you don’t.