Per-vertex parameters for vertex shading

I’m currently busy writing a vertex shader in Cg for OpenGL under Linux.

I was wondering how one can send per-vertex data for the shader to work with. Currently I’m piggybacking on the alpha color value, but I would like to use some other means.

I tried using vertex attributes, but that is just a mess; it simply does not work.
Furthermore, I tried using a TEXCOORD. Forget it: OpenGL simply crashes. Any ideas what I might be doing wrong?

I have determined that glDisableClientState( GL_VERTEX_ATTRIB_ARRAY1_NV ) causes the crash. If I take it out, the data simply does not get through, but at least there is no crash.

code: ------------------------------

glDisableClientState( GL_VERTEX_ATTRIB_ARRAY1_NV );
glEnableClientState( GL_VERTEX_ATTRIB_ARRAY1_NV );

// Render stuff

glDisableClientState( GL_VERTEX_ATTRIB_ARRAY1_NV ); // <-- causes a crash after X amount of frames

I am using the correct memory allocated by glX and my parameters are correct. Has anybody tried something like this before?

Using vertex attributes should work fine. How are you specifying the pointer to your vertex array for vertex attribute 1?

Are you using the Cg runtime or just compiling your shaders offline using cgc and uploading them using glLoadProgramNV? If you are using the Cg runtime, you should be using cgGLSetParameterPointer along with cgGLEnableClientState and cgGLDisableClientState when using vertex arrays.

Cool, I haven't seen this approach yet: using Cg! I use OpenGL to send along extra vertex information in the form of vertex attributes, but reading quickly over the Cg method in the manual, I can see that it would work perfectly. (Now let's just hope it works "physically".)

I still haven't fixed my problem. In more detail, this is what I tried:
glEnableClientState( GL_VERTEX_ATTRIB_ARRAY1_NV );

for_each( r_cLinks->begin(), r_cLinks->end(), mem_fun( &glLink::RenderPackets ));


glDisableClientState( GL_VERTEX_ATTRIB_ARRAY1_NV );

// And then :
void glLink::RenderPackets()

// other stuff…

glColorPointerEXT( 4, GL_FLOAT, 0, r_pPacketsVB->u_iNoElements, r_pPacketsVB->u_pColorBuf );
glVertexPointerEXT( 3, GL_FLOAT, 16, r_pPacketsVB->u_iNoElements, r_pPacketsVB->u_pVertexBuf );
glVertexAttribPointerNV( 1, 1, GL_FLOAT, 0, r_aPacketVtxAttrib );

glDrawRangeElementsEXT( GL_TRIANGLES, 0, r_pPacketsVB->u_iNoElements, c * 6, GL_UNSIGNED_INT, r_pPacketsVB->u_pIndexBuf );



r_aPacketVtxAttrib consists of memory allocated by my memory manager from a pool allocated with glXAllocateMemoryNV, because I'm using GL_VERTEX_ARRAY_RANGE_NV. What I'm trying to say is that the memory pointed to by r_aPacketVtxAttrib does exist and is the right size.
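For what it's worth, the sub-allocation bookkeeping can be sketched like this. This is a minimal sketch, not the actual memory manager: the layout (a 4-float color array, a 3-float position array padded to a 16-byte stride as in the glVertexPointerEXT call, and a 1-float attribute array) and all the names are assumptions for illustration.

```cpp
#include <cassert>
#include <cstddef>

// Sketch: carving per-array regions out of one VAR pool. In the real
// code the pool comes from glXAllocateMemoryNV; here we only model the
// byte offsets to check that every array fits inside the pool.
struct PoolLayout {
    std::size_t colorOffset;   // 4 floats per vertex, tightly packed
    std::size_t vertexOffset;  // 3 floats per vertex, padded to a 16-byte stride
    std::size_t attribOffset;  // 1 float per vertex (the extra attribute)
    std::size_t totalBytes;    // minimum pool size for this vertex count
};

PoolLayout layoutFor(std::size_t vertexCount) {
    PoolLayout l{};
    l.colorOffset  = 0;
    l.vertexOffset = l.colorOffset  + vertexCount * 4 * sizeof(float);
    l.attribOffset = l.vertexOffset + vertexCount * 16; // 16-byte stride per vertex
    l.totalBytes   = l.attribOffset + vertexCount * sizeof(float);
    return l;
}
```

If the attribute pointer handed to glVertexAttribPointerNV lies past `totalBytes`, reads from it fall outside the vertex array range, which is exactly the kind of thing that crashes only after some frames.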

I use Cg at runtime. The glEnable( GL_VERTEX_PROGRAM_NV ); line is apparently necessary if you are using aliases and has nothing to do with Cg or my vertex shader.

I was looking to grab the info using a Cg program that looked like this:

struct vertin
{
    float4 pos   : POSITION;
    float4 col   : COLOR;
    float  attr1 : ATTR1;
    //float texCoord : TEXCOORD2;
};

Usually my per-vertex value was sent over in the col.a field. Then I attempted to use ATTRx/TEXCOORDx, but that would simply crash my program.

I'll try the Cg method now.

Don't mix generic attribs and standard named attribs: no POSITION, COLOR, or TEXCOORD at the same time as ATTR1…15 (or however many there are). They are aliased onto each other somehow, and behaviour is undefined if you mix them…

as far as I remember. I could be wrong, though…

Vertex attribute 3 occupies the same location as the COLOR attribute (attribute 1 aliases the vertex weights), so you won't be able to send two vertex arrays through a single aliased attribute. Try using vertex attribute 8 (TEXCOORD0), or use glTexCoordPointer so that you are using only standard vertex attributes.
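For reference, the NV_vertex_program specification defines the aliasing of generic attributes onto conventional per-vertex parameters. A small lookup sketch (the returned names are descriptive strings, not API identifiers):

```cpp
#include <string>

// Aliasing of generic vertex attributes onto conventional per-vertex
// parameters, per the NV_vertex_program specification.
std::string conventionalAttribute(int genericIndex) {
    switch (genericIndex) {
        case 0:  return "position";
        case 1:  return "vertex weights";
        case 2:  return "normal";
        case 3:  return "primary color";
        case 4:  return "secondary color";
        case 5:  return "fog coordinate";
        default:
            if (genericIndex >= 8 && genericIndex <= 15)
                return "texcoord " + std::to_string(genericIndex - 8);
            return "unassigned";  // 6 and 7 have no conventional alias
    }
}
```

So an array bound to generic attribute 1 collides with vertex weights, not with an array bound via glColorPointer; attributes 8 through 15 are the ones shadowing the texture coordinate sets.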

What Cg profile are you using? If you are using the arbvp1 profile, then you should be using the ARB versions of these functions. You also aren’t guaranteed that specific attributes alias to the corresponding conventional OpenGL attribute. You should either use all generic vertex attributes or all conventional OpenGL attributes as davepermen suggested.

If you use the Cg runtime for specifying vertex pointers, it will do the Right Thing and select the correct attribute depending on the input semantic specified for the parameter.

Thanks for the heads-up, but I already know that. That is why I use attrib1 (instead of attrib0, for instance, which would be the position), which is the vertex weights.

I am currently implementing the Cg method, but am unsure of the performance. I use OpenGL's vertex arrays, as they are apparently DMA'ed out of memory to video memory, which is apparently faster. (They might even exist in video memory from the get-go?) Whether or not most of these things actually work in Linux is still questionable.


The Cg implementation does exactly the same as the OpenGL one: it renders one second's worth of frames and then crashes. Now I'm beginning to lose it; is it me or the drivers?

r_cgVertexAttrib1 = cgGetNamedParameter( s_cgProgram, "attr1" );
cgGLEnableClientState( r_cgVertexAttrib1 );

for_each( r_cLinks->begin(), r_cLinks->end(), mem_fun( &glLink::RenderPackets ));

cgGLDisableClientState( r_cgVertexAttrib1 );


cgGLSetParameterPointer( s_pglNet->r_cgVertexAttrib1, 1, GL_FLOAT, 0, r_aPacketVtxAttrib );

glDrawRangeElementsEXT( GL_TRIANGLES, 0, r_pPacketsVB->u_iNoElements, c * 6, GL_UNSIGNED_INT, r_pPacketsVB->u_pIndexBuf );

vertout main( vertin vertIN,
              uniform float4 linkDir,
              uniform float4x4 transMat,
              in float attr1 : ATTR1 )
{
    vertout vertOUT;
    //float4 offset = linkDir * ( vertIN.col.a );
    float4 offset = linkDir * attr1;
    // ...
}



What is going on?


[This message has been edited by sl4ker (edited 09-15-2003).]


It's possible that you've come across a driver bug. Try changing the input semantic from ATTR1 to TEXCOORD0 (no changes needed to the Cg runtime code) and see if that works.

If not, please send me a copy of the demo and I’ll see if I can reproduce the problem.

Nope, that does not work.

I checked the Cg manual's example, and there they do exactly what I'm doing, but mine does not work. No debugging is possible; libGL dumps core.

Everything runs, but as soon as I enable those client states things mess up.

Sleepy time now, I will continue tomorrow.

OK, I've figured out the crash:

First, the basics: you cannot set pointers within an EnableClientState / DisableClientState block, in either the Cg or the OpenGL implementation.

Second: before setting the client pointer, a flush must be called. Otherwise you could be overwriting memory that is still being used to render a polygon that has not yet finished.
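Putting the two fixes together, the per-frame ordering looks roughly like this (a pseudocode sketch based on the calls earlier in the thread; it assumes a live GL context and the Cg runtime, so it won't compile as-is):

```
glFlush();  // let pending geometry finish before touching VAR memory
cgGLSetParameterPointer( r_cgVertexAttrib1, 1, GL_FLOAT, 0, r_aPacketVtxAttrib );

cgGLEnableClientState( r_cgVertexAttrib1 );
// render only in here: glDrawRangeElementsEXT( ... ); no pointer changes
cgGLDisableClientState( r_cgVertexAttrib1 );
```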

Problem: although the app does not crash anymore, I still have no data going through… I get weird stuff at the other end that I cannot explain yet.

[This message has been edited by sl4ker (edited 09-16-2003).]