How do I get the vertex information?

To work with low-level OpenGL shaders (ARB_vertex_program and ARB_fragment_program), I need the vertex attributes such as each vertex’s position, normal vector and index. An example on the net specifies these attributes like this:
const float vertices[3][numVertices*3] = {
-0.0195502f, 0.25222f, 0.0f,
/* ... */
};

const float normals[3][numVertices*3] = {
0.557015f, 0.785304f, 0.270244f,
/* ... */
};

const unsigned int numIndices = 564*3;
const unsigned short indices[numIndices] = {
0, 1, 2,
/* ... */
};


The problem is that I can’t write such information for all the vertices manually (it’s possible for a simple cube, but very difficult for a complex model). So is there a simple way to get such information?

  1. google for free 3d models or make your own (you could try this one:
  2. google for 3d model loading library or make your own (you could try this one:

There, I’ve found myself a new job - googling on demand :slight_smile:

Both of the links are not available for me (unknown resource).

Both of the links are not available for me
Then you’ll have to search for some models/libraries yourself. I just gave these two as examples. Perhaps you’ll be able to find something better. Good luck.

Remove the trailing ) from the links :wink:

:eek: :smiley:

:-)) Yes, I need to remove the ) from the links :wink:

How do you render your turtle and the world? All the data must come from somewhere. If I remember well, didn’t you use a 3DS loader or another format? If that’s the case, then you already have that information.

I use cal3D and ms3d format to get the information of the objects. I use vertex arrays to draw them.

just to prevent confusion: your question is how to get access to a fragment’s position from within a shader (not where to find free 3d models), right?

in that case, you can use the vertex program to assign all the data you need to a varying variable. the simplest case would be that the vertex program assigns the vertex’s position, normal etc. to a multitexcoord that you do not need. the multitexcoord (and thus the fragment’s position, normal etc.) will be interpolated and passed to the fragment program.

I guess his question is how to pass the vertex data into both shaders.

I simply know nothing about low level shaders, so I can’t really be of help.

But if that’s anything like GLSL, then your vertex program has default variables (like gl_Vertex in GLSL) that already hold the vertex information for each vertex to be rendered, whether you use vertex arrays or glBegin()/glEnd().
And as RigidBody said, use a varying variable (same as for GLSL) to pass the vertex from the vertex program to the fragment program.

If I didn’t understand your question correctly, do as if I wrote nothing :slight_smile:

The dolphin example in the book “More OpenGL Game Programming” uses ARB_vertex_program to interpolate 3 vertices with the equation (1-t)*(1-t)*v1 + 2*t*(1-t)*v2 + t*t*v3.
It uses a vertex array with the following declaration:

const unsigned int numVertices = 285;
const float vertices[3][numVertices*3];

As you see, there are three groups of 285 vertices. So the vertex program replaces v1, v2 and v3 with vertices[0], vertices[1] and vertices[2] respectively.
I’m wondering how this program has computed the vertex position of each group.
As I said before, I can get the vertex information from 3DS Max and save it to an array (i.e. vertices[num_vertices][3]). But as you see, it doesn’t consist of the 3 groups of vertices I need for the equation.
Any Idea?

Ehsan, I don’t understand a word of what you say… Vertex programs operate on one vertex at a time, not on three, so a vertex program can’t “interpolate vertices” on its own.

When you send a vertex to OpenGL (using glVertex, glDrawElements or however), it gets processed by the vertex program. The output data is used to rasterize primitives.

Please read the OpenGL specification; it is all explained very clearly there, you just have to read it several times :slight_smile: And please explain your problem in more detail, if possible with examples. What are you going to interpolate, and why?

P.S. Somehow I think that we need a basic introduction for new OpenGL users that would explain how OpenGL works with vertices and data transformation (including shaders); most questions arise exactly in this area…

P.P.S Well, we still get all that “where to get OpenGL” topics even with the wiki :slight_smile:

Yeah Ehsan, there’s nothing new with shaders but the shaders themselves. They do the work on their own.

As soon as you render with glBegin or vertex arrays, the shaders get the information. You don’t need anything new, really. This is all done on the graphics card; that is what has been new about programmable GPUs since the GeForce FX. The GPU takes the incoming vertex and computes whatever you told it to compute through vertex/fragment programs or vertex/fragment shaders. Whether you use 3DS Max, Blender, Maya or anything else to make your models, as soon as you can render them, the shaders get the vertex information.

So by reading the specs, you’ll know how shaders can access the vertex, normal or texcoord…

If I’m still wrong, sorry, but I haven’t got the book you mentioned. But if you want, I can send you a simple example using GL and GLSL (no vertex/fragment programs). You might then understand better.

PS: As often in programming, when you’re facing a problem, then simply say it’s not a problem and you’ll soon discover that you’ve seen it from the wrong point of view. Then the problem will fade on its own. :slight_smile:

hope that helps.

Originally posted by jide:

PS: As often in programming, when you’re facing a problem, then simply say it’s not a problem and you’ll soon discover that you’ve seen it from the wrong point of view.


I read it once somewhere: “There are no problems, only opportunities” :smiley:

I have no problem with low-level shaders; maybe I didn’t phrase my question correctly. The two artists at our company asked me how to generate the vertex data for our shader engine. So it’s not a shader problem, it’s a problem with importing the vertex data into the program.
Zengar wrote:
Vertex programs operate on one vertex at a time.

That’s correct when we use vertex.position. However, we can pass other data to the shader (so other vertex positions) with attributes (even with texture arrays) and access them with vertex.attrib[n], vertex.texcoord[n], etc.

This dolphin example passes three vertices at a time to the shader (vertex 1 of group 1, vertex 1 of group 2 and vertex 1 of group 3 in the first pass; vertex 2 of group 1, vertex 2 of group 2 and vertex 2 of group 3 in the second pass, and so on), interpolates each triple and generates one vertex position (so for every three incoming vertex positions we get one output vertex position).

Ok, I think I see what you mean now.

What you need each time within the shader is all three vertices of a triangle, and with vertex arrays, not immediate mode. I guess vertex attributes, as you said, are the key.

Unfortunately I don’t see any really good way other than specifying a vertex attribute of 3*3 matrices (or 3 attributes of vertices) which will contain the 3 vertices of each triangle, repeated three times.


default array: v0,v1,v2,v3,v4,v5…
3*3 matrix attr: v0,v1,v2,v0,v1,v2,v0,v1,v2,v3,v4,v5,v3,v4,v5,v3,v4,v5…

I know this is really ugly. There might be a better solution, but I don’t know it.

I found the answer (someone responded to me). This example uses three fixed models (so three groups of vertices) and creates a smooth animation between them using the appropriate equation. So this example uses three vertices as input and generates a single vertex as output.