How to draw multiple different objects with the same geometry but different textures

I am trying to teach myself a lot of graphics fundamentals, both applied and theoretical, by coding a minimalistic game engine. In other words, I am making a Minecraft clone.

For now I am just trying to render “chunks” (16x16x16 blocks) as efficiently as possible, and I only have one chunk so far (I will add more chunks later and then deal with chunk edges).

So my current issue is that I have a data structure for a cube that looks as follows:

class Cube
{
    private:
        static Geometry *mesh;      // shared by every cube of this type
        static Texture *texture;

        static string Default_Mesh;
        static string Default_Texture;

    public:
        static void make_template(string wavefront_file, string t);
        static void cleanup();

        vec3 position;              // per-instance data: only the position differs

        Cube(vec3 p);
        Cube();
        ~Cube();

        void render_cube();
};

Geometry is an OpenGL wrapper for the mesh that contains the VBO, indices and related OpenGL info needed to render the mesh.

Since I assumed that all blocks of the same type (all grass blocks, for example) would share the same texture and geometry, I made them static pointers to save memory.

The way I currently render is: for each cube, I bind its geometry/texture and then call glDrawElements(), which is quite trivially the worst possible way to do it.

A better way would be to draw all cubes in a single draw call (no idea how to do this).

Better still would be to draw not whole cubes but only the faces of the cubes that are exposed (no idea how to do this either; I don’t know how to tell OpenGL to use a different texture and transformation matrix for each group of x vertices).
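The face-culling part, at least, is independent of OpenGL: a face needs to be emitted only when the neighboring cell is air. A minimal sketch, assuming a hypothetical `Chunk` type that stores solidity as a flat 16x16x16 boolean array (names and layout are my invention, not the poster's code):

```cpp
#include <array>

constexpr int N = 16; // chunk dimension, per the question

struct Chunk {
    std::array<bool, N * N * N> solid{}; // true = block present

    // Cells outside the chunk count as air (chunk edges come later,
    // as the question says).
    bool is_solid(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
            return false;
        return solid[(x * N + y) * N + z];
    }
};

// Count the faces that would actually go into the mesh: one face per
// solid cell per direction whose neighbor is air.
int count_exposed_faces(const Chunk& c) {
    static const int dirs[6][3] = {
        {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
    int faces = 0;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z) {
                if (!c.is_solid(x, y, z)) continue;
                for (auto& d : dirs)
                    if (!c.is_solid(x + d[0], y + d[1], z + d[2]))
                        ++faces;
            }
    return faces;
}
```

For a completely full chunk this yields 6 × 16 × 16 = 1536 faces instead of the 6 × 4096 faces of drawing every cube, which is where most of the win comes from.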

An even better way would be to create one mesh per chunk that minimizes the number of vertices needed to describe the geometry (36 vertices for a perfectly cubic chunk), and then somehow specify that different subsections of this mesh are to be rendered with a given texture. As you can imagine, if I don’t know how to do the above, for this one I am completely clueless: I know how to generate the mesh, but not how to render it.
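One common way to avoid per-subsection texture switches in a chunk mesh is to make the texture choice part of the vertex itself, so the whole chunk is one VBO and one draw call. A sketch of that idea, with a hypothetical vertex layout (the `layer` field would index an array texture or an atlas tile; the struct and helper names are mine, not from the question):

```cpp
#include <vector>

// One vertex of the chunk mesh. Instead of binding a different texture
// per face, each vertex carries the index of the texture it should use.
struct ChunkVertex {
    float x, y, z;   // position in chunk-local space
    float u, v;      // texture coordinates within the chosen texture
    float layer;     // which texture (block type) this face samples
};

// Append the four corner vertices of the +Z face of the block at
// (bx, by, bz). Index generation (two triangles per face) is omitted.
void emit_pos_z_face(std::vector<ChunkVertex>& out,
                     int bx, int by, int bz, float layer) {
    float x = float(bx), y = float(by), z = float(bz) + 1.0f;
    out.push_back({x,        y,        z, 0.0f, 0.0f, layer});
    out.push_back({x + 1.0f, y,        z, 1.0f, 0.0f, layer});
    out.push_back({x + 1.0f, y + 1.0f, z, 1.0f, 1.0f, layer});
    out.push_back({x,        y + 1.0f, z, 0.0f, 1.0f, layer});
}
```

The `layer` attribute then flows through the vertex shader to the fragment shader, where it selects the texture, so no state changes are needed between faces.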

I appreciate any help I can get

Instancing. This lets you generate MxN vertices using only M+N data.
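A minimal sketch of the shader side of instancing, assuming the per-cube position is uploaded as a per-instance vertex attribute (divisor set to 1 with glVertexAttribDivisor on the C++ side) and the chunk is drawn with a single glDrawElementsInstanced call; attribute locations and the uniform name are illustrative:

```glsl
#version 330 core
layout(location = 0) in vec3 a_pos;     // cube-local vertex position (shared mesh)
layout(location = 1) in vec2 a_uv;      // texture coordinates
layout(location = 2) in vec3 a_offset;  // per-instance: this cube's position
                                        // (attribute divisor = 1)

uniform mat4 u_view_proj;               // hypothetical view-projection matrix

out vec2 v_uv;

void main() {
    // Every instance reuses the same cube geometry, shifted by its own offset.
    gl_Position = u_view_proj * vec4(a_pos + a_offset, 1.0);
    v_uv = a_uv;
}
```

This is what "M×N vertices from M+N data" means here: one cube mesh (M vertices) plus one offset per cube (N instances).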

As for using multiple textures within a single draw call, you have a choice between an array of samplers or an array texture. Each has different advantages and disadvantages.

An array of samplers requires one texture unit per texture, which may not be feasible if you have many textures. Also, arrays of samplers can only be indexed using dynamically-uniform expressions. For a vertex shader, this means that the expression must be constant for the entire draw call (using only uniform variables and constants). For a fragment shader, the expression must be constant over the primitive (any fragment shader inputs must have the `flat` qualifier).
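A fragment-shader sketch of the sampler-array option under the restrictions above (GLSL 4.00+, since earlier versions only allow constant indices; variable names are illustrative):

```glsl
#version 400 core
in vec2 v_uv;
flat in int v_tex_index;   // must be 'flat': the index has to be constant
                           // over the primitive, as described above

uniform sampler2D u_textures[8];  // one texture unit per texture

out vec4 frag_color;

void main() {
    frag_color = texture(u_textures[v_tex_index], v_uv);
}
```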

An array texture only uses a single texture unit. All layers have the same size and format, and sampling parameters (e.g. minification/magnification filters, wrap modes) are set for the entire texture. However, there are no restrictions on the expression used to select the layer.
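The array-texture equivalent, for contrast; here the layer selector can be any expression, and only one texture unit is used (names are illustrative):

```glsl
#version 330 core
in vec2 v_uv;
in float v_layer;          // no 'flat' needed: any expression may select the layer

uniform sampler2DArray u_blocks;   // one unit, one layer per block type

out vec4 frag_color;

void main() {
    // The third coordinate picks the layer; it is rounded to the nearest integer.
    frag_color = texture(u_blocks, vec3(v_uv, v_layer));
}
```

For a block game, where every block texture is the same size anyway, the array texture's same-size/same-format restriction usually costs nothing.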

Yes, I agree Instancing is what you need! I learned quite a lot following the tutorial here:

http://www.learnopengl.com

I also made a video on the first method described in the tutorial, and posted it a few days ago on my YouTube channel:

OpenGL Instancing with gl_InstanceID Vertex Shader Visual Studio 2017

The technique could easily be modified to send different texture coordinates per instance, and to blend a different color per instance as well… maybe a future video I will make? Not sure…

Later,
Jeff