If I were to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of that texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don’t want it to appear at the edges of the surface, where the vertices are, but rather in a small region in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime; otherwise I could combine them offline into a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My idea is to create multiple separate 2D quads (with a depth of 0), each with its own texture applied (they could of course draw from a shared texture atlas, using different texture coordinates).
Then I transform these quads so that they appear to lie on the surface of a 3D object, such as a cube. Of course I’d have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
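The matrix-hierarchy idea can be sketched in a few lines: keep each quad's transform relative to the cube, and recompute the quad's world matrix from the cube's whenever the cube moves. This is a minimal CPU-side sketch with hand-rolled row-major 4x4 matrices; the numbers and names are illustrative, not from any particular engine.

```python
def mat_mul(a, b):
    # 4x4 row-major matrix multiply
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Decal quad sits 0.5 units out on the cube's +Z face, in the cube's LOCAL space.
quad_local = translation(0.0, 0.0, 0.5)

# When the cube moves, the quad's world matrix is recomposed from the hierarchy,
# so the quad follows the cube automatically.
cube_world = translation(10.0, 0.0, 0.0)
quad_world = mat_mul(cube_world, quad_local)
print(quad_world[0][3], quad_world[2][3])  # prints: 10.0 0.5
```

The key point is that the quad never stores a world-space position of its own; it only stores its offset relative to the cube, so no per-quad bookkeeping is needed when the cube is transformed.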
While this isn’t necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
While researching this, I found that what I’m asking is not much different from rendering glyphs from a bitmap font onto a surface, where each glyph is a different image taken from the same texture. One popular implementation I found was in fact doing essentially what I’ve suggested above (except with glyph images in a texture rather than other kinds of images).
There are three ways to render multiple textures onto a surface.
One is to do multiple render passes, one for each texture. This has the problem of Z-fighting, and so only works well for 2D rendering where you can disable depth tests.
The second option is to use multiple sets of texture coordinates. This is easiest with your own shader, but there is a practical limit to the number of textures you can bind simultaneously (usually 8 or 16, though it is hardware-specific).
The third is to render the multiple images to different buffers and alpha-blend the buffers. There is no limit on the number of textures with this method, but it is slow and requires at least two view-sized buffers.
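To make the third approach concrete, here is a rough CPU analogue of compositing two layer buffers with the standard "over" operator. In real code this blending happens on the GPU; the buffers here are just lists of (r, g, b, a) tuples, and the colors are made-up example values.

```python
def over(src, dst):
    """Alpha-composite a premultiplication-free 'src over dst' (straight alpha)."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda s, d: (s * sa + d * da * (1.0 - sa)) / out_a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), out_a)

base_layer  = [(1.0, 0.0, 0.0, 1.0)]   # buffer 1: opaque red
decal_layer = [(0.0, 0.0, 1.0, 0.5)]   # buffer 2: half-transparent blue

# Composite the decal buffer over the base buffer, pixel by pixel.
composited = [over(s, d) for s, d in zip(decal_layer, base_layer)]
```

The cost noted above comes from exactly this pattern: every layer needs its own full-resolution buffer, and every pixel is touched once per layer during the final blend.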
Or suppose it is just one image, but I don’t want it to appear at the edges of the surface
This is a different problem and can be handled in the fragment shader by discarding fragments whose u/v values fall outside a particular range.
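A CPU sketch of that fragment-shader trick: keep only fragments whose interpolated UVs fall inside the sub-rectangle where the image should appear, and "discard" the rest. The rectangle bounds and the texture-sampling stand-in are invented example values.

```python
# Sub-rectangle of the surface where the image should be visible (illustrative).
U_MIN, U_MAX = 0.4, 0.6
V_MIN, V_MAX = 0.4, 0.6

def sample_texture(u, v):
    return (u, v, 0.0, 1.0)  # dummy color; stands in for a real texture fetch

def shade_fragment(u, v):
    """Return None to 'discard' the fragment, otherwise a sampled color."""
    if not (U_MIN <= u <= U_MAX and V_MIN <= v <= V_MAX):
        return None                 # the CPU equivalent of GLSL's `discard`
    return sample_texture(u, v)

print(shade_fragment(0.5, 0.5) is not None)  # centre of the surface: drawn
print(shade_fragment(0.1, 0.9) is None)      # near the edge: discarded
```

In an actual GLSL fragment shader the same test is two comparisons followed by the `discard` keyword; you would typically also remap the UVs so the image fills its sub-rectangle rather than the whole face.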
Which of these methods is typically used when someone is “pasting” text from a bitmap font onto a 3D surface? Each glyph/image from the texture atlas will not overlap another, but simply be placed side by side to create a word.
Text is done one of two ways: if the text changes often, then each letter is rendered separately; otherwise, a bitmap is created of the combined letters and rendered as a single texture. The second option is the easier to do.
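The second option amounts to copying the glyph bitmaps side by side into one strip, once, and then treating that strip as an ordinary texture. A minimal sketch, using an invented two-pixel-tall single-channel "font" (real glyph data would come from your font atlas):

```python
GLYPHS = {            # tiny 2x2 single-channel glyphs, purely illustrative
    "H": [[1, 1], [1, 1]],
    "i": [[0, 1], [0, 1]],
}

def compose_text(text, height=2):
    """Copy each glyph's pixel rows side by side into one combined bitmap."""
    rows = [[] for _ in range(height)]
    for ch in text:
        glyph = GLYPHS[ch]
        for y in range(height):
            rows[y].extend(glyph[y])
    return rows

strip = compose_text("Hi")
# strip is now a 2x4 bitmap: the two glyphs placed side by side
```

This is exactly the "combine offline" step: it runs only when the text changes, after which rendering the whole word is a single textured quad.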
But what is the typical way to render text onto the surface of a 3D object? Of the three methods you mentioned, which would work well in such a case?
You combine the individual character bitmaps into a single bitmap; then you only have one texture, so you only need one draw call, with a single UV for each vertex adjusted to where it lies on the combined bitmap.
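The UV adjustment mentioned here is simple bookkeeping: once glyphs are packed side by side, each quad's horizontal UV span is the glyph's pixel range divided by the strip width (V spans the full 0..1). The glyph widths below are made-up example values.

```python
def glyph_uvs(widths):
    """Return (u_min, u_max) for each glyph packed left-to-right in one strip."""
    total = sum(widths)
    uvs, x = [], 0
    for w in widths:
        uvs.append((x / total, (x + w) / total))
        x += w
    return uvs

print(glyph_uvs([8, 4, 8]))  # prints: [(0.0, 0.4), (0.4, 0.6), (0.6, 1.0)]
```

Each quad's vertices then get these U values (and V of 0 and 1), so the whole word still renders with one texture bind.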
If the object has a texture of its own, then you use two sets of UVs and bind both textures, which is the second option.
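A rough CPU analogue of that two-UV-set approach: each fragment carries uv0 for the object's own texture and uv1 for the text strip, and the shader blends the two samples. The nearest-neighbour sampler and the lerp-toward-white blend are illustrative choices, not a fixed recipe.

```python
def sample(tex, u, v):
    """Nearest-neighbour sample of a row-major bitmap at normalized (u, v)."""
    w, h = len(tex[0]), len(tex)
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y][x]

def shade(base_tex, text_tex, uv0, uv1):
    base = sample(base_tex, *uv0)   # the object's own texture
    text = sample(text_tex, *uv1)   # single-channel glyph coverage, 0..1
    # Lerp toward white (1.0) wherever the glyph covers this fragment.
    return base * (1.0 - text) + 1.0 * text

# Fully covered fragment shows the text color; uncovered shows the base texture.
print(shade([[0.2]], [[1.0]], (0.0, 0.0), (0.0, 0.0)))  # prints: 1.0
print(shade([[0.2]], [[0.0]], (0.0, 0.0), (0.0, 0.0)))  # prints: 0.2
```

In GLSL this corresponds to two `sampler2D` uniforms bound to two texture units and a `mix()` in the fragment shader; the two UV sets come in as separate vertex attributes.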
What functions are used to combine bitmaps into one texture?