Vertex Array Error

didn't 'Allo 'Allo descend into Happy Days (bottom of the barrel) there at the end, where the actor/actress would come onscreen + the tv audience would applaud until the first ad break.
not knocking the Fonz though, he's cool.
if i ever get had up on a charge before the court + the magistrate asks me how do i plead, i intend to
stick both my thumbs up and say a big long aye!!!
saddam should use this tried and tested technique.

magistrate - "what do you say about gassing 1000s of your countrymen in basra?"
saddam - (thumbs up) ayyyyyye

how can he lose!!
hopefully the mods won't close this insightful topic

Originally posted by knackered:
I will happily do the same to you def if you persist in attacking me.
I couldn’t care less…

To Rodrix: If you are still reading this, please excuse everybody going off topic, and let us know what caused the errors. To me it seems the error lies somewhere other than the OpenGL code. Good luck!

Thanks Def.
It's a shame that there are people like Knackered in this forum. I feel sorry for him. Let us all hope he gets some professional psychological help.

Anyways, the error was not caused by the C++ code. Jide was right (thanks! and thanks Relic too!), the problem was with GL_MAX_ELEMENTS_VERTICES. There is actually a limit of the number of vertices that glDrawElements can handle.
The recommended value for my GPU is GL_MAX_ELEMENTS_VERTICES = 4096. Using more than that number reduces performance, and using much more than that causes the 0xC0000005: Access Violation error.
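For reference, a minimal sketch of querying these advisory limits (assuming a current GL context and <stdio.h>):

GLint maxVerts = 0, maxIndices = 0;
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVerts);   /* recommended vertices per glDrawRangeElements call */
glGetIntegerv(GL_MAX_ELEMENTS_INDICES, &maxIndices);  /* recommended indices per glDrawRangeElements call */
printf("GL_MAX_ELEMENTS_VERTICES = %d, GL_MAX_ELEMENTS_INDICES = %d\n", maxVerts, maxIndices);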

Originally posted by Rodrix:
I feel sorry for him.
Could have fooled me. Anyway, I don't want pity just because I'm confined to a wheelchair.

Originally posted by Rodrix:
The recommended value for my GPU is GL_MAX_ELEMENTS_VERTICES = 4096. Using more than that number reduces performance, and using much more than that causes the 0xC0000005: Access Violation error.
No, it should never cause an access violation. If that's what you've discovered, then you should report it as a driver bug… but only after you've thoroughly checked through your own code for other problems, such as the ones Relic picked up.

zed, I honestly think there’s a sitcom in the whole saddam imprisonment thing. It’s just a matter of time.

Originally posted by zed:
i feel sorry for poor old relic.
Me? That’s all relative. :stuck_out_tongue:
(It’s not my problem, I was only the first answering.)

Rodrix, knackered is right about the GL_MAX_ELEMENTS_VERTICES. That was introduced for glDrawRangeElements and must not crash if exceeded.
The values you got are way off what current hardware can do; it should be more like 64k or 1M indices. I wouldn't use 4k batches if I could have bigger ones.
You might want to report that to NVIDIA if you’re really sure it’s not your code.
Strip your code down to the absolute minimum number of OpenGL calls required to reproduce the problem; sometimes that clears things up.
Also try whether the same thing runs using a pixelformat from Microsoft's GDI Generic OpenGL implementation. If that crashes as well, it's probably not a driver issue.
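As a rough sketch of how a draw can be split into batches that stay within the advisory limit (indices, indexCount and numVertices are placeholder application data, not anything from the original code):

GLint maxIndices = 0;
glGetIntegerv(GL_MAX_ELEMENTS_INDICES, &maxIndices);
maxIndices -= maxIndices % 3;   /* keep whole triangles in each batch */

for (GLsizei offset = 0; offset < indexCount; offset += maxIndices)
{
    GLsizei count = indexCount - offset;
    if (count > maxIndices)
        count = maxIndices;
    /* start/end give the range of vertex indices referenced; passing the
       whole array's range is conservative but valid */
    glDrawRangeElements(GL_TRIANGLES, 0, numVertices - 1,
                        count, GL_UNSIGNED_INT, indices + offset);
}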

Though I don't usually personally indulge in quite such ruthless sarcasm concerning people I don't know, I find Knackered absolutely hilarious and I recount his posts to my friends (and my mum), who also appreciate them immensely. There's really no reason to take offence, or to become offensive (particularly in such a tasteless manner). Based on the evidence provided here, I would vouch for Knackered's mental health; I would describe it as vigorous. He can certainly make valuable contributions to this forum (beyond the much appreciated humour), and I think you ought to be grateful that he does.

Guys… I FOUND THE BUG! :slight_smile:
This was the hardest bug I ever had to find.

I fixed the problem by calling

glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);

before glDrawArrays.

For some unknown reason, the normal and texture coordinate arrays were enabled, so the program tried to read them and raised the Access Violation error when it couldn't read any further. Apparently both arrays pointed at some address in memory that could be read for up to 16623 items (using more than 16623 particles, it crashed).

However, the sample code I am using did not contain a single glEnableClientState call; so how is it possible that I had to disable some client states in order to make the program work?

The program did contain a glDrawArrays with a glInterleavedArrays call using GL_T2F_V3F and an array of size 16. Yet there was no explicit glEnableClientState call for the texture and vertex arrays. So I commented out that code, and still the Access Violation error occurred. :confused:

Therefore the only solution was to add glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
before my glDrawArrays call.
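So the working draw path now looks roughly like this (just a sketch; the pointer setup, primitive type and names are placeholders, not my exact code):

glEnableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);          /* the stale states that caused the crash */
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, particleVertices);   /* placeholder vertex data */
glDrawArrays(GL_POINTS, 0, particleCount);           /* placeholder primitive and count */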

I now know how to fix the bug; however, I don't understand WHY it happened, given that there was no explicit glEnableClientState for textures and normals. Are these states on by default? (Or are they implicitly enabled by some other standard call?)

Thanks so much everyone.

P.S: Knackered, I don’t mean to have any personal problem with you. However, please don’t use that type of ‘humour’ with me as it deeply offends me. Thanks.

The call to glInterleavedArrays enables those states (see the red book).
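Roughly speaking (per the red book), glInterleavedArrays(GL_T2F_V3F, 0, data) does the equivalent of the following behind the scenes, and these enables stay set after the call:

GLsizei stride = 5 * sizeof(GLfloat);   /* 2 texcoord floats + 3 position floats */

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, stride, data);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, (const GLfloat *)data + 2);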

…yeah, I supposed that…
But how come, when I comment out that line, they are still enabled?
Do the states persist even after you reinitialize OpenGL?
Thanks!

Directly after you initialize OpenGL, check what glIsEnabled returns for GL_NORMAL_ARRAY and GL_TEXTURE_COORD_ARRAY. They should be disabled (as should all arrays) right after OpenGL is initialized.
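Something like this, as a minimal sketch (assuming a current context and <stdio.h>):

printf("vertex array:   %d\n", glIsEnabled(GL_VERTEX_ARRAY));
printf("normal array:   %d\n", glIsEnabled(GL_NORMAL_ARRAY));
printf("texcoord array: %d\n", glIsEnabled(GL_TEXTURE_COORD_ARRAY));
printf("color array:    %d\n", glIsEnabled(GL_COLOR_ARRAY));
/* all four should print 0 (GL_FALSE) on a freshly created context */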

Just spread a thin layer of such checks over your code and bake at working temperature until a crack is heard from a small exploding bug. Remove it with a small surgical instrument (a +45 battleaxe is known to work), and enjoy the pretty meshes.

…should I add sugar? :slight_smile:

Oh noooo guys!!!

All this time spent for nothing!
There is no significant increase in performance in my program using Vertex Arrays!!!
I get almost the same FPS count (or ms) as in immediate mode :eek: (…

Should I go for VBO?
My program is meant for people who are not computer enthusiasts and probably use whatever default GPU came with their computer. Is the VBO extension common on those types of GPUs?

Any comments, advice, or recommendations are welcome!
Thanks!
Rod

If you did not see any improvement going from immediate mode to vertex arrays you’re unlikely to see an improvement going to VBOs. Vertex processing is clearly not your bottleneck. I’d guess you’re either fillrate limited or CPU limited. Both are pretty common with particle systems.

Well, that is, in a particular way, good news, since I really didn't want to go with VBOs; Vertex Arrays were already enough of a pain!

Could you explain what fillrate limited means?
Thanks in advance!

P.S: Isn't VBO an extension that loads the vertex data into the card's buffer memory, so that it is much faster than Vertex Arrays? NeHe says it multiplies FPS x3. How do you know in advance that it won't improve performance? (I want to learn :slight_smile:

Using an interleaved vertex format is good, but you should also use a format that the hardware likes. Most (if not all) want ubyte colors, not floats.
Use
glColorPointer(4, GL_UNSIGNED_BYTE, …, …);
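For example, a hedged sketch of an interleaved layout with ubyte colors (the struct, array size and names are just illustrative):

typedef struct
{
    GLfloat x, y, z;      /* position: 12 bytes */
    GLubyte r, g, b, a;   /* color:     4 bytes -> 16-byte vertex */
} Vertex;

Vertex verts[1024];       /* placeholder; fill with your particle data */

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &verts[0].x);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), &verts[0].r);
glDrawArrays(GL_POINTS, 0, 1024);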

Even if you don't get an improvement, it's best to do so. VBOs are for putting the data in AGP memory or VRAM, and in the worst case, system RAM. If you have PCI Express, system RAM or VRAM. The driver decides. Read the wiki.

VBO has been core since 1.5.
It's ridiculous to continue using glVertex.
There is a desire to kick plain vertex arrays out of drivers.
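A minimal VBO sketch (verts and vertexCount are placeholder application data, error checking omitted):

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat), verts, GL_STATIC_DRAW);

/* while a buffer is bound, the pointer argument is a byte offset into it */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
glDrawArrays(GL_POINTS, 0, vertexCount);

glBindBuffer(GL_ARRAY_BUFFER, 0);   /* unbind when done */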

http://www.gamedev.net/columns/events/gdc2006/article.asp?id=233

What features to consider layering:

    * Immediate mode
    * Current vertex state
    * Non-VBO vertex arrays
    * Vertex array enables (with shaders, this should be automatic)
    * glArrayElement()
    * glInterleavedArrays()
    * glRect()
    * Display lists 

Originally posted by Rodrix:
Could you explain what fillrate limited means?
Basically you're limited by the number of pixels rendered, rather than the number of vertices.

Originally posted by Rodrix:
P.S: Isn't VBO an extension that loads the vertex data into the card's buffer memory, so that it is much faster than Vertex Arrays? NeHe says it multiplies FPS x3. How do you know in advance that it won't improve performance? (I want to learn :slight_smile:
Because it's not your bottleneck, as already proven by not seeing a performance increase when going from immediate mode to vertex arrays. To illustrate, assume you have to build a hundred houses and eat one cookie. If you can eat the cookie twice as fast, it won't make you complete the total task noticeably faster, because building those houses is what really bogs you down, not the cookie eating. :slight_smile:
So while VBOs may improve the speed at which the GPU can process vertices, they won't speed you up when the biggest chunk of work for the GPU is filling all those pixels.

Originally posted by V-man:
Using an interleaved vertex format is good, but you should also use a format that the hardware likes. Most (if not all) want ubyte colors, not floats.
Use
glColorPointer(4, GL_UNSIGNED_BYTE, …, …);

Well, floats for colors are fine, but ubytes may be slightly faster because they are smaller.

NeHe says it multiplies FPS x3. How do you know in advance that it won't improve performance? (I want to learn :slight_smile:
perhaps 3x in an optimal situation, e.g. a benchmark (but even then that sounds optimistic)
FWIW in my app (which has lots of geometry, higher than doom3/UT2004 etc)
going from immediate -> VA = ~300% increase
going from VA -> VBO = ~5% increase

Originally posted by Humus

Well, floats for colors are fine, but ubytes may be slightly faster because they are smaller.

ubytes are much, much slower on NV3x, NV4x, and R3xx hardware. I don't know whether that improved on R4xx, but they certainly crippled performance on older hardware. I stress tested it with a few hundred million vertices/s, and I don't know for sure whether a smaller number would give better performance. It might, but that won't be very scalable.