Please read the Linking Vertex Attributes section of this page. I’ve read the term “a vertex attribute” many times on this page, but I’m not sure what it really means or what the author meant by it.
If possible, please tell me the exact meaning the author intended by “vertex attributes.”
In addition to what GCLements said, I might point out that it’s easy not to understand what a vertex is in the first place. When I first came across the term, it was presented as a point in 3D space, and I couldn’t understand why they didn’t just call it a point. The reason is that every vertex has a position (a point), but not every point is a vertex.
The easiest way to think of a vertex, I think, is as a corner of a triangle, and you store all kinds of data in it related to that corner. It can be the color of that corner, for example. That probably doesn’t make sense until you understand that a whole lot of what is done in 3D graphics is interpolating (averaging) values across the face of a triangle. One of the first programs they have you write is a triangle where each corner has a different color. That’s a near useless thing by itself (it may be the last time for the next several years that you give a triangle three completely unrelated vertex colors), but it introduces the idea that these values get interpolated across the face of the triangle, producing a blended value at any given pixel on that triangle that is an average of the three corners. So color data is one example of something you can store in a vertex, and that makes it an attribute of that vertex. (It may seem ridiculous to blend color values across a triangle like some ’70s tie-dye t-shirt, but this concept of interpolating across a triangle face is at the core of how UV mapping works, and all modern graphics depends on UV mapping. So this is actually one of the most important concepts you need to grasp, surprisingly.)
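To make the idea concrete, here is a small Python sketch (not real OpenGL; the function names are my own) of the kind of weighted averaging a rasterizer does when it blends three corner colors across a triangle, using barycentric weights:

```python
# Sketch of per-vertex color interpolation: blend three corner values
# by barycentric weights that sum to 1. This is the math a rasterizer
# performs for every pixel inside the triangle.

def interpolate(v0, v1, v2, w0, w1, w2):
    """Blend three per-vertex values by barycentric weights (w0 + w1 + w2 == 1)."""
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(v0, v1, v2))

# Three corner colors: red, green, blue (RGB).
red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

# A pixel dead center in the triangle weighs all three corners equally,
# giving an even mix of red, green, and blue.
center = interpolate(red, green, blue, 1/3, 1/3, 1/3)

# A pixel halfway along the red-green edge gets no contribution from blue.
edge = interpolate(red, green, blue, 0.5, 0.5, 0.0)
```

The same blend works for any per-vertex value, which is why the “three-colored triangle” exercise generalizes to everything else discussed below.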
Likewise, you can store the direction that corner faces. This is called a vertex normal. (A normal is a vector with a fixed length of one. A vector is an arrow that tracks a direction and an amount; since the amount in a normal is always 1, a normal is effectively nothing but a direction.) It may seem useless or meaningless to know what direction the corner of a triangle faces, but it turns out this is at the core of a lot of what makes 3D graphics work. The basic lighting calculation just compares the surface’s direction against a normal (3D arrow) that describes what direction the light shines in. When the triangle faces the light it receives 100% of the light’s color. As it turns away, up to 90 degrees, it gets less and less of that color. Anything beyond 90 degrees faces away from the light, gets none of the color, and is thus black. So to calculate that you need the direction the face of the triangle faces, to know whether it faces into the light. But this per-face lighting makes the faces flat and you get a faceted 1980s 3D look. So a trick you can do is assign every triangle corner (vertex) in the model its own direction (normal), averaged from the directions of all the triangle faces that share that vertex. Then you can interpolate the lighting across the face of every triangle between the corners and get the illusion of smooth shading. So, these normals are another attribute stored in a vertex.
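The lighting rule described above boils down to a dot product between two unit vectors. Here is a minimal Python sketch of it (function names are mine, not from any graphics API), assuming both the surface normal and the direction toward the light are unit length:

```python
import math

# Sketch of the basic lighting calculation: brightness is the dot product
# of the surface normal and the direction toward the light, clamped to
# zero for surfaces turned more than 90 degrees away from the light.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light):
    # Both vectors are assumed to be unit length (normals).
    return max(0.0, dot(normal, to_light))

to_light = (0.0, 1.0, 0.0)  # the light is straight overhead

facing = lambert((0.0, 1.0, 0.0), to_light)   # faces the light: 100% brightness
side   = lambert((1.0, 0.0, 0.0), to_light)   # 90 degrees off: no light
away   = lambert((0.0, -1.0, 0.0), to_light)  # faces away: clamped to no light

# A surface tilted 45 degrees from the light gets roughly 70% brightness.
n45 = (math.sin(math.radians(45)), math.cos(math.radians(45)), 0.0)
partial = lambert(n45, to_light)
```

Smooth shading comes from feeding this same calculation an interpolated per-pixel normal instead of one flat per-face normal.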
Keep in mind that everything is optional except the position of the vertex. For example, it’s rare to see vertex colors used nowadays, since we have texturing to determine color. And vertex normals may not even be needed if you have a normal map that stores the direction of every pixel on the model.
So, these attributes can change from project to project and you may even invent your own!
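One way to picture a per-project vertex layout is as a little record where position is the only required field. This Python sketch is purely illustrative (the class and field names are made up, not any API’s layout):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical vertex layout: position is the only mandatory attribute;
# everything else is whatever this particular project decides to store
# per corner of the triangle.

@dataclass
class Vertex:
    position: Tuple[float, float, float]               # always present
    normal: Optional[Tuple[float, float, float]] = None  # direction the corner "faces"
    color: Optional[Tuple[float, float, float]] = None   # per-vertex RGB, rarely used now
    uv: Optional[Tuple[float, float]] = None             # coordinates into a flat image

# A vertex with a position, a normal, and UVs, but no vertex color.
v = Vertex(position=(0.0, 1.0, 0.0),
           normal=(0.0, 0.0, 1.0),
           uv=(0.5, 1.0))
```

In a real OpenGL program you would describe this same layout to the GPU when linking vertex attributes, telling it where each field sits inside the vertex data.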
Another one that is really big nowadays is UV coordinates. With UV coordinates you are mapping the corners of your triangle to a flat photograph. The UVs are the positions on the photograph where each corner of the triangle would land. These positions are then interpolated (averaged) across the face of the triangle, mapping pixels of the photograph onto the face of the triangle. Almost EVERYTHING is done this way now, from color to normals to EVERYTHING. But UV coordinates are something else you can shove into a vertex, and they would be attributes of that vertex.
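Putting those two steps together, here is a Python sketch of UV mapping: interpolate the corner UVs for a pixel, then use the result to look up a pixel in the image. The tiny 2x2 “photograph” and the helper names are made up for illustration (real texture sampling also does filtering, which is skipped here):

```python
# Step 1: interpolate the three corner UVs by barycentric weights,
# exactly like color interpolation, just on 2D coordinates.
def interp_uv(uv0, uv1, uv2, w0, w1, w2):
    return (w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0],
            w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1])

# Step 2: map the interpolated UV (in 0..1) to a pixel in the image
# using the simplest possible scheme: nearest neighbor.
def sample(image, uv):
    h, w = len(image), len(image[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return image[y][x]

# A 2x2 "photograph": top row red/green, bottom row blue/white.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]

# Corner UVs map the triangle onto the image; a pixel weighted mostly
# toward the second corner lands in the green region.
uv = interp_uv((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), 0.2, 0.6, 0.2)
texel = sample(image, uv)
```

Swap the color image for an image of directions and this exact same lookup becomes normal mapping, which is the point made below about all the other map types.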
So, the one thing all of them have in common is that they represent some piece of data about that corner of the triangle. And most of the time those values get interpolated (averaged) across the face of the triangle (or rather, the rasterizer does that before things reach the pixel shader, unless you turn it off).
So to sum it all up: vertices are just information about the corners of the triangles you use to draw the 3D image. That can include almost anything imaginable, but it pretty much always includes a 3D position. Colors and normals are also things you will commonly see, especially until you get into more advanced graphics. UV coordinates are really the gateway to advanced graphics. The first step there is textures, or color maps. Normal maps, ambient occlusion maps, emission maps, roughness maps, metallic maps, specular maps, and so forth all utilize UV coordinates the same way the texture does, stretching their data across the face of the triangle by interpolating the UV coordinates across the face of the triangle, and you guessed it - those UV coordinates are stored in the vertex as attributes of the vertex. (And generally the same UV coordinates you get from your UV unwrapping are used for all that stuff.)
Also, if you have trouble with Vectors or matrices, I have videos on my YouTube channel for that. And I have a video tutorial series on HLSL where I go into great detail on much of what I discussed here and the specifics of how it is coded. I go as far as texture maps and that’s it. Normal maps would have been next, but all the maps use the same UV coordinates and thus they are almost the exact same thing as a texture map. A texture map uses the pixel at that point in the image to describe the color of the pixel on the face of the triangle. The other maps use the pixel at that point in the image to describe DATA of the pixel on the face of the triangle, such as the direction it faces, which is what a normal map is. Either way, you are still just mapping a pixel of data in the image to a pixel being drawn on the face of the triangle. It’s just an epiphany when you realize you can map other data to that pixel besides just color. Anyway, OGL uses GLSL instead of HLSL, but the basic concepts are the same. So, you could skip the first HLSL video and go straight to the one where I start talking about triangles. It wouldn’t hurt you to learn HLSL, but I have a bare bones OGL engine on my website as a Visual Studio project you can download that includes the GLSL version of pretty much the exact same shader. But even if you don’t download that, the videos would probably teach you a lot about the basic concepts of how 3D graphics get drawn to the screen. Everything more advanced makes use of that stuff.