My game’s world is very simple: quads for the walls, floors, and roofs. Think Wolfenstein or Doom. All quads would use one single texture that contains all the art (like a spritesheet).
I’d like to use the same quad VBO for all map pieces (walls, floors, and roofs), but since they all tend to have different dimensions, I’m not sure whether to:
1) Create one quad VBO shared by every map piece, and scale each game object according to the map’s needs.
2) Make a separate quad VBO for each different map piece.
3) Try to put every (static) map piece into one VBO.
Sorry if I got any terminology wrong, I can further explain my question if needed.
[QUOTE=GClements;1283913]If you could render everything with a single draw call, then you should probably be using a single VBO (or a single VBO per attribute).
If you’re mixing static and dynamic data, values which may be modified should occupy a separate region of the VBO to those which are constant, to minimise the amount of data transferred.[/QUOTE]
Thanks for the quick response. Just to make sure I understand, when you mention occupying a separate region of the same VBO: in a way it kind of works like a sprite sheet, but for verts? From a to v I’ll find the data needed to render object a, and from x to z I’ll find the data needed to render object b?
i prefer using 2 buffers, 1 with static data (for model geometry) and 1 for the dynamic (instanced) data, like model matrices, ID (to identify objects) etc.
so my approach would be: putting the “wall” vertices (quad) in the static buffer, and for each wall i want to draw, i would put a model matrix into the dynamic buffer (for instanced rendering)
the model matrix can control:
– where to draw the wall (translation)
– what rotation the wall has (rotation)
– how big the wall is (scale)
… since it is built that way:
// needs <glm/gtx/transform.hpp> (translate/scale from a vec3) and <glm/gtx/quaternion.hpp> (toMat4)
glm::mat4 modelmatrix = glm::translate(wall.position) * glm::toMat4(wall.rotation) * glm::scale(wall.size);
if you want a wall of size 10 x 3 x 1, call:
glm::scale(glm::vec3(10, 3, 1))
Thanks guys, much appreciated. I was originally thinking of taking John Connor’s approach, so I’ll take that route. Just need to make sure I repeat the UVs correctly so textures don’t look off. Thanks again!
Begin by ruling out option 2; that would not only require a separate draw call for each quad, but also a state change (buffer bind) for each. That’s going to be horribly inefficient; you would actually run faster with glBegin/glEnd code.
Option 1 can seem attractive: you’re only storing vertex data for a single quad, so you get to save a ton of memory. That’s got to be a good thing, right? Very probably wrong; this can go wrong in one of two ways.

The first is sending a new transformation matrix to the GPU for each quad you draw. Again, that’s a separate draw call for each quad, and this time it’s also a data upload for each, which will prevent the driver from drawing them efficiently - you’re back to “glBegin/glEnd would probably be faster”.

The second is adding a per-instance matrix for each quad and handling them all in a single draw call. But assume a vertex is 4 floats in this setup (xy|st): a full quad is therefore 16 floats, and the matrix you’re adding for each quad is also going to be 16 floats! In other words, you save absolutely no memory whatsoever.
Approaching this with the desire to save memory is going about it the wrong way. Memory is a cheap and plentiful resource that’s there to be used, especially in the context of the kind of data set you’re talking about. So use it. Preprocess your geometry, move it to its final position as an offline step, then load it all into a single big static buffer and be done with it. This is what games far more advanced than the level you’re aiming at do.
Whether the data is static or dynamic is distinct from what that data represents.
In a typical game map, most of the geometry is static, but some of it (doors, platforms) can move. Similarly, most of the geometry will have static texture coordinates, but some of it may be dynamic (either switching between discrete frames or scrolling/rotating the texture).
Ideally, you would group dynamic elements together so that you can update the values by replacing a relatively small portion of the buffer. It’s safe to assume that there will be both a fixed overhead for each call (glBufferSubData() or glMapBuffer() etc) and additional overhead proportional to the amount of data transferred by the call. Also, there are cases where modifying a small amount of data may result in an entire page being copied, so sparse modifications scattered randomly throughout a mapped region may end up transferring the entire region.
If there is a lot of dynamic geometry, it may be worthwhile using separate shaders for static and dynamic geometry, with dynamic geometry having an associated transformation. But if the data is mostly static, you may be better off simply modifying the dynamic sections in the client (although double-buffering such portions may be wise). Unless the dynamic portion requires additional attributes, there’s no particular need for it to use separate buffers to the static data, although doing so allows you to set different usage hints.
Another issue is that interleaved attributes tend to be more efficient (for the GPU) due to better cache locality. But if some of those attributes are static while others are dynamic, interleaving increases the amount of data which needs to be transferred if the data is modified by the client.