Creating a 2D zoomable, scrollable universe

hi,

I am very new to OpenGL. I am working on a space RTS game: basically a generated universe where you can travel from planet to planet, develop, and so on.
Because everything is written in Python and rendered in pygame, rendering performance is very poor, so I decided to render everything with moderngl instead.

I managed to render about a million textured GL_POINTS, with animated shaders, at an acceptable framerate.

But now I am struggling with the concept of how to manage all the rendering and interaction; basically, which path to take.

I have made several systems:

  • one that renders quads (about 100,000 objects at an acceptable framerate)
  • one that renders GL_POINTS at an amazing framerate (1,000,000 points at 250 fps)
  • one using the pygame_render library, which makes things easier but is not performant
  • one using a quadtree that renders the objects by sending the calculated positions through a VBO and a VAO for every object (pygame_render library); that is slow

So I came up with the idea of rendering everything in one moderngl context, using a type attribute to switch between the different shaders. That way I can draw textured objects and animated shaders in one render loop; I guess this is the most performant way.

Questions:

  1. Updating the VAO: according to my limited knowledge of GLSL, it is not possible to store state in the shader itself, so we need to send the data (a VAO and its buffers) to the shader, and if anything changes, like positions, we have to rewrite those buffers every frame, which can be quite costly. How would you manage this updating? Should I set up several systems: one for static objects, like stars in the background, and others for dynamic objects?

  2. Switching between rendering GL_POINTS and quads (triangles): is it possible, and a good idea, to switch rendering methods inside the shader? Meaning: as long as the size of an object is within GL_POINT_SIZE_RANGE, render it as a GL_POINT; if the size exceeds GL_POINT_SIZE_RANGE, switch to triangles?

  3. Geometry shader: many AIs told me this is the way to do it. Would that work for switching between the rendering methods?

  4. Basic concept 1: send a VAO with all the millions of points to the shader; the shader then decides what to render and how it will be rendered. If any objects need to be updated, update only those by rewriting the buffer like this: instance_buffer.write(np.hstack((points, types.reshape(-1, 1))).astype("f4"))?

  5. Basic concept 2: create several contexts: one for static objects that never change their position, one for moving objects like asteroids, and one for interactable objects like suns, planets, and moons; basically everything that can be selected?

  6. Interaction: I cannot see any other way than handling the interaction in the Python code and then sending the new buffer data to the shader. It would be possible to select an object in the shader by checking the mouse position against gl_Position or the vertex coordinates and changing its appearance, but how would I get that data back into the Python program?
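As a side note on the buffer-packing line in (4): `points`, `types`, and `instance_buffer` are the names from the question; wrapping the packing in a small helper makes the intended per-instance layout explicit. This is just a sketch of the data side, not a judgment on the overall concept:

```python
import numpy as np

def pack_instances(points, types):
    # Interleave per-instance position (x, y) and a type id into one
    # contiguous float32 array, i.e. a "2f 1f" per-instance layout.
    return np.hstack((points, types.reshape(-1, 1))).astype("f4")

points = np.array([[0.0, 0.0], [0.5, -0.5]])
types = np.array([0, 2])
data = pack_instances(points, types)  # shape (2, 3), dtype float32
# upload with: instance_buffer.write(data.tobytes())
```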

I hope someone can give me some hints on which way to go and, above all, where I might be wrong with the concept. Thanks for reading!

You normally want to separate static and dynamic data. This includes separating static and dynamic properties of primitives. E.g. if you frequently change the position of a sprite but never change the texture coordinates, those attributes should be in different buffers, or at least in separate contiguous regions of a buffer, rather than interleaved.
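A minimal moderngl-style sketch of that split (all names here are illustrative, and `ctx`/`prog` are an existing moderngl context and program): the static per-instance attribute lives in a buffer written once, while positions live in a separate dynamic buffer rewritten each frame:

```python
import numpy as np

def build_sprite_vao(ctx, prog, n):
    # ctx is a moderngl.Context, prog a compiled moderngl.Program.
    # Static per-instance attribute (e.g. texture coordinates): written once.
    static_buf = ctx.buffer(np.zeros((n, 2), dtype="f4").tobytes())
    # Dynamic per-instance positions: allocated once, rewritten every frame.
    dynamic_buf = ctx.buffer(reserve=n * 2 * 4, dynamic=True)
    vao = ctx.vertex_array(prog, [
        (static_buf,  "2f/i", "in_uv"),
        (dynamic_buf, "2f/i", "in_pos"),
    ])
    return vao, dynamic_buf

# Per frame, only the dynamic buffer is touched:
#   dynamic_buf.write(new_positions.astype("f4").tobytes())
#   vao.render(moderngl.POINTS, instances=n)
```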

You can’t really do this in the shader. Any given draw call is restricted to a single primitive type. Also, geometry shaders (which can convert one primitive type to another) tend to have a significant performance cost on discrete GPUs (Nvidia/AMD).

Rendering sprites by instancing quads (triangle pairs) used to be inefficient due to implementations not coalescing instances into a workgroup. Apparently that’s no longer the case with modern hardware and/or drivers, so that gives rise to a number of potential strategies.
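One concrete shape for the instanced-quad strategy (a sketch with illustrative names, again assuming an existing `ctx` and `prog`): one shared unit quad, plus a per-instance buffer of centers and sizes:

```python
import numpy as np

# One shared unit quad as a triangle strip, instanced once per sprite.
QUAD = np.array([[-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5], [0.5, 0.5]], dtype="f4")

def build_instanced_quads(ctx, prog, centers, sizes):
    quad_buf = ctx.buffer(QUAD.tobytes())
    # Per-instance data: center (x, y) and size, one row per sprite.
    inst = np.hstack((centers, sizes.reshape(-1, 1))).astype("f4")
    inst_buf = ctx.buffer(inst.tobytes())
    return ctx.vertex_array(prog, [
        (quad_buf, "2f",      "in_vert"),               # per-vertex
        (inst_buf, "2f 1f/i", "in_center", "in_size"),  # per-instance
    ])

# draw: vao.render(moderngl.TRIANGLE_STRIP, instances=len(centers))
```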

Geometry shaders allow you to restructure the data. So you can perform a draw call using GL_POINTS with one vertex per point, and have a geometry shader convert each point to a pair of triangles. But the consensus seems to be that the performance hit of geometry shaders is too great, and it’s rare for them to be used in contexts where performance matters.
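For reference, the point-to-quad expansion such a geometry shader performs looks roughly like this. This is a GLSL sketch shown for completeness only, given the performance caveats above; `half_size` is an assumed uniform, and the string would be passed as the `geometry_shader` argument of moderngl's `ctx.program(...)`:

```python
# GLSL geometry shader source (sketch): expands each input point into a
# screen-aligned quad emitted as a 4-vertex triangle strip.
GEOMETRY_SHADER = """
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform vec2 half_size;  // sprite half-extent in clip space (assumed uniform)
void main() {
    vec4 c = gl_in[0].gl_Position;
    gl_Position = c + vec4(-half_size.x, -half_size.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( half_size.x, -half_size.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4(-half_size.x,  half_size.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( half_size.x,  half_size.y, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
"""
```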

Ultimately you need to profile your code on the systems which you care about. Which means having access to those systems. One of the biggest issues for independent developers is being able to test the code on a wide range of hardware.

Note that a shader program can’t choose the output primitive type for each primitive independently. If you have a geometry shader with an input type of points and an output type of triangle_strip, every input will be a point and every output will be a triangle strip. If you want a mixture, you’ll need a separate draw call for each output primitive type.
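So a points-plus-quads mix ends up as a CPU-side partition followed by two draw calls. A sketch (the partition itself is runnable; the commented draw calls assume prebuilt VAOs with illustrative names):

```python
import numpy as np

def partition_by_size(positions, sizes, max_point_size):
    # One draw call per primitive type: objects within the point-size limit
    # go out as GL_POINTS, anything larger as instanced quads.
    small = sizes <= max_point_size
    return (positions[small], sizes[small]), (positions[~small], sizes[~small])

# Per frame (sketch, assuming prebuilt point_vao / quad_vao):
#   (p_pos, p_size), (q_pos, q_size) = partition_by_size(pos, size, max_pt)
#   point_vao.render(moderngl.POINTS, vertices=len(p_pos))
#   quad_vao.render(moderngl.TRIANGLE_STRIP, instances=len(q_pos))
```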


Thanks a lot for the detailed answer! It definitely clears up my main questions!