VBO troubles :-)

Well, I hate to bring bad news concerning the VBO extension, but… :wink:

the spec says:

2.8A.1 Vertex Arrays in Buffer Objects
--------------------------------------

Blocks of vertex array data may be stored in buffer objects with the
same format and layout options supported for client-side vertex
arrays.  However, it is expected that GL implementations will (at
minimum) be optimized for data with all components represented as
floats, as well as for color data with components represented as
either floats or unsigned bytes.

Blah blah which sounds like VBO is a VAO sequel! ;-))
Anyhow, let’s imagine I really don’t care about using floats and/or bytes where they are intended, and use the following vertex format ->

typedef struct SVR_VERTEX
{
    VR_FLOAT    x;                  // position                             $00
    VR_FLOAT    y;                  //                                      $04
    VR_FLOAT    z;                  //                                      $08

    VR_BYTE     r;                  // Fantastic colors..                   $0c
    VR_BYTE     g;                  //                                      $0d
    VR_BYTE     b;                  //                                      $0e
    VR_BYTE     a;                  //                                      $0f

    VR_CHAR     nx;                 // non transformed normal (-128,127)    $10
    VR_CHAR     ny;                 //                                      $11
    VR_CHAR     nz;                 //                                      $12
    VR_CHAR     rien;               // unused (padding)                     $13

    union {
        VR_UV       texCoord[4];    //                                      $14
        struct {
            VR_FLOAT    u0, v0;     //                                      $14,$18
            VR_FLOAT    u1, v1;     //                                      $1c,$20
            VR_FLOAT    u2, v2;     //                                      $24,$28
            VR_FLOAT    u3, v3;     //                                      $2c,$30
        };
    };
} VR_VERTEX;

and then, with the 44.03 drivers on Win2k + a GF2 GTS, could that possibly be the reason why it generates an abnormal program termination?? :slight_smile:

tchOo!

What compiler and options are you using?

Your structure might be larger than you think, because the compiler might be aligning all the members on 2 or 4 byte boundaries.
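
As a quick sanity check, something like this prints the actual size and offsets (just a sketch; the include path is a placeholder for wherever the VR_VERTEX definition from the first post lives):

#include <stdio.h>
#include <stddef.h>
#include "vr_vertex.h"   /* placeholder: wherever VR_VERTEX is declared */

int main(void)
{
    /* The offset comments in the struct imply 0x34 (52) bytes total,
       with the normal components starting at 0x10. */
    printf("sizeof(VR_VERTEX) = %u (expected 0x34 = 52)\n",
           (unsigned)sizeof(VR_VERTEX));
    printf("offsetof(nx)      = 0x%02x (expected 0x10)\n",
           (unsigned)offsetof(VR_VERTEX, nx));
    return 0;
}

If the numbers come out larger, a packing pragma or member reordering would be the usual workaround.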

Whatever you do wrong, you shouldn’t get an abnormal program termination. Try to upgrade your drivers.

How is the layout of the struct supposed to show where the problem in your VBO usage is?
Post some code and explain the crash condition.

Well, I really think it is an implementation error. I’ve updated to the 44.22 beta and it still occurs… As I said, it reminds me of a bug from VAO :wink:

more details here -> http://www.orkysquad.org/main.php?id=lire&templ=templCode&nom=ozzy&cat=Code&month=July&year=2003#623

thx for the replies anyway. :slight_smile:

glNormalPointer(GL_UNSIGNED_BYTE,sizePrim,BUFFER_OFFSET(0x0c));

GL_UNSIGNED_BYTE isn’t a valid type for glNormalPointer - check your OpenGL documentation.
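
Presumably the fix is just to pass GL_BYTE, since the struct stores the normals as signed chars. Note also that in the struct as posted the normals start at offset 0x10 (0x0c is the colors), so the offset may deserve a second look as well. A sketch, with sizePrim assumed to be sizeof(VR_VERTEX):

/* normals are signed VR_CHARs at offset 0x10 in the struct above */
glNormalPointer(GL_BYTE, sizePrim, BUFFER_OFFSET(0x10));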

Well done dude! :wink:
That was it! it doesn’t crash anymore. :slight_smile:
thx!!

Ok, I’m now facing rendering problems with the normals when using GL_BYTE, GL_SHORT and GL_INT while lighting is enabled. :frowning:
Anyone with the same problem?

Moreover, using the same data structures with CVA or other mechanisms instead of VBO, there is no problem… argh… :slight_smile:

the shots (same url) http://www.orkysquad.org/main.php?id=lire&templ=templCode&nom=ozzy&cat=Code&month=July&year=2003#623

thx

Moreover, using the same data structures with CVA or other mechanisms instead of VBO, there is no problem… argh…

Of course there’s no problem; they have to do a copy to video-card accessible memory anyway.

The hardware cannot recognize most formats for most components. It can recognize floats for components, as well as unsigned bytes for colors. That’s it. If you use anything else, it will impose a significant speed hit under VBO. So, don’t do it.

Under CVA, since there’s a copy anyway, they have the ability to convert any data you store into an appropriate format for the video card.
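
To illustrate, a minimal VBO setup that stays on that expected fast path (floats everywhere, unsigned bytes for the colors) might look roughly like this. It is only a sketch; FastVertex, setupFastVBO and the variable names are made up, and the ARB entry points are assumed to have been fetched via glext.h / wglGetProcAddress as usual:

#include <GL/gl.h>

#define BUFFER_OFFSET(i) ((char *)NULL + (i))

typedef struct {
    GLfloat x, y, z;       /* position, offset 0x00 */
    GLfloat nx, ny, nz;    /* normal,   offset 0x0c */
    GLubyte r, g, b, a;    /* color,    offset 0x18 */
    GLfloat u, v;          /* texcoord, offset 0x1c */
} FastVertex;              /* 0x24 bytes per vertex */

static GLuint setupFastVBO(const FastVertex *verts, GLsizei numVerts)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, numVerts * sizeof(FastVertex),
                    verts, GL_STATIC_DRAW_ARB);

    /* every pointer uses one of the "expected to be optimized" formats */
    glVertexPointer(3, GL_FLOAT, sizeof(FastVertex), BUFFER_OFFSET(0x00));
    glNormalPointer(GL_FLOAT, sizeof(FastVertex), BUFFER_OFFSET(0x0c));
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(FastVertex), BUFFER_OFFSET(0x18));
    glTexCoordPointer(2, GL_FLOAT, sizeof(FastVertex), BUFFER_OFFSET(0x1c));
    return vbo;
}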

Originally posted by Korval:
The hardware cannot recognize most formats for most components. It can recognize floats for components, as well as unsigned bytes for colors. That’s it. If you use anything else, it will impose a significant speed hit under VBO. So, don’t do it.

Well, I agree that there should be some performance penalty when using data components that don’t match the hardware.
As far as I know, early GF boards can support GL_SHORT for normals (using VAR), so I expect them to be treated as-is internally.

Moreover, I only have a problem with rendering, which is certainly related to how the normal data format is interpreted in the case of GL_BYTE, GL_SHORT and GL_INT :wink:
In other words, if it works with CVA (even with an internal conversion by the driver),
it should work with VBO too, even with a speed penalty :))

Finally, regarding FLOAT versus other component types, I was amazed to see (using VAR again) that the GPU was faster processing short structures with GL_SHORT for vertex coordinates, GL_UNSIGNED_BYTE for colors and GL_SHORT for normals than
an all-GL_FLOAT structure :wink:

Korval, you’re pointing out what is written in the specs ->

2.8A.1 Vertex Arrays in Buffer Objects
--------------------------------------


However, it is expected that GL implementations will (at
minimum) be optimized for data with all components represented as
floats, as well as for color data with components represented as
either floats or unsigned bytes.

ok then, speed aside, other component types should work as well?

All formats should still function, regardless. If they don’t, then it’s an implementation bug.

There was a problem in some drivers where mixing numbered attributes for some arrays, and legacy names/bindings for others, would make it not work right. I forget the details, but if you’re using both VertexAttribPointer and NormalPointer, this might be the issue.
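
In other words, the usual workaround is to stick to one binding style for every array in the batch; roughly (a sketch, assuming ARB_vertex_program-style generic attributes, with stride and the attribute indices as placeholders that must match your vertex program):

/* Either use only the legacy named arrays... */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, BUFFER_OFFSET(0x00));
glNormalPointer(GL_FLOAT, stride, BUFFER_OFFSET(0x0c));

/* ...or only generic numbered attributes, but don't mix the two. */
glEnableVertexAttribArrayARB(0);
glEnableVertexAttribArrayARB(2);
glVertexAttribPointerARB(0, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(0x00));
glVertexAttribPointerARB(2, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(0x0c));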

alright, waiting for the implementation fix (if there is a bug there… NV guys, any comment?)

Now, for testing purposes, I would like to switch from VAR to VBO or any other GL mechanism.
Unfortunately it seems to cause problems to have both VAR and VBO initialised. Of course, they are certainly using the same memory allocation mechanism (when you need to store data onboard, i.e. static data), so at first I tried to only disable GL_VERTEX_ARRAY_RANGE_NV, but then using VBO I get no display anymore… :frowning:
Finally I tried to free the VAR memory using:

wglFreeMemoryNV(pFastMem);

but it seems that VBO then always falls back to plain vertex arrays, as if there were no more VRAM available = switch to VA.

any idea? :slight_smile:
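
For what it’s worth, a full VAR teardown before switching over might look roughly like this; just a sketch, assuming pFastMem is the block originally returned by wglAllocateMemoryNV:

/* Tear VAR down completely before going back to VBO. */
glFinish();                                      /* make sure the GPU is done with the range */
glDisableClientState(GL_VERTEX_ARRAY_RANGE_NV);  /* stop using the range                     */
glVertexArrayRangeNV(0, NULL);                   /* detach the range                         */
wglFreeMemoryNV(pFastMem);                       /* release the AGP/video memory             */
pFastMem = NULL;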

I’ve got a Radeon 9800 Pro. If I use GL_SHORT as a vertex attrib format, the program becomes incredibly slow. With GL_FLOAT there is no problem. The GF4 had no problems when using GL_SHORT. Is this a hardware limitation of the Radeon 9800?

Well, it was already the case using VAO…
(which was much more buggy btw)
The ATI implementation seems rather limited in terms of functionality regarding the available formats. So, as suggested by Korval and the specs :wink:, you should use float everywhere to get the best results from one board to another. :((

  • as it is written : this is a minimum ;(

Anyhow… what about the speed on your radeon using GL_SHORT? :wink: same as CVA i guess? ;-))

For those interested, NV VBO vs ATI VBO using
floats everywhere -> http://www.orkysquad.org/main.php?id=lire&templ=templCode&nom=ozzy&cat=Code&month=July&year=2003#623

conclusion : not really flexible…

jwatte ?

Could you explain in more detail which drivers had problems with mixing VertexAttribPointer and NormalPointer?

I have a VBO app that runs many times slower than with normal vertex array usage. Could this be explained by your driver anomaly?

What kind of struct do you use?
Driver version?
With 44.90 the NV implementation just does what the Radeon implementation does: with custom formats it falls back to the vertex array mechanism… (well, it seems to do something similar, btw)