VBO Indexarray crashes on nVidia

wow… this app just kills Windows. It renders a few frames, then the screen goes black. All I can do is a hard reset.

Sorry about that!

NP :slight_smile:
This app crashes my P4 Dual core + 7600GT (PCIE) + 155.19 drivers, but it works on my laptop (with GF5600-Go and 96.89 drivers).

2 mio vertices
Mio?

The limit stayed at 260K quads; it did not decrease. So I assume it is not an issue of memory consumption.
Sounds like some internal counter is being overloaded. Possibly around 2^18 bits in size?

I’m curious: what happens if you make it two draw calls?
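I mean something like this (just a sketch; numIndices and GL_QUADS are guesses at what the test app actually uses):

/* split the single glDrawElements over the bound element array buffer into two calls */
GLsizei half = (numIndices / 8) * 4;   /* roughly half, rounded down to a quad boundary */

glDrawElements(GL_QUADS, half, GL_UNSIGNED_INT, (const GLvoid*)0);
glDrawElements(GL_QUADS, numIndices - half, GL_UNSIGNED_INT,
               (const GLvoid*)(half * sizeof(GLuint)));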

This app crashes my P4 Dual core + 7600GT (PCIE) + 155.19 drivers, but it works on my laptop (with GF5600-Go and 96.89 drivers).
I wasn’t aware that nVidia’s driver versions had gotten up into the 150’s.

Originally posted by Korval:
I wasn’t aware that nVidia’s driver versions had gotten up into the 150’s.
The latest beta compatible with the GF7 and XP is 160.03. The bad thing is that the old classic control panel is gone and only the new monstrosity remains; at least it was reworked for the latest version, so it is more intuitive.

On my system (dual-core Athlon 64 X2, Win XP, GF7800GTX) with 91.something drivers, the test application worked with Test_100K (at least some long box was visible, if that is what it should display) and caused a blue screen with Test_300K.

After upgrading to 160.03 the application shows nothing, and the message box about the start of rendering that was displayed with the old drivers is no longer shown. Other OGL applications I tested still work.

mio = million
Is that abbreviation not used in the English language?

I haven’t explicitly tested using more draw calls in this app, but my real application, which had the same problem, used a few hundred draw calls and only rendered a few hundred to a thousand triangles per call. I will test it with more draw calls soon, but I haven’t done it yet.

Komat: yes you see a “long box”. It’s actually thousands of quads one after another.

Interesting that it doesn’t show anything with the newer drivers, not even the MessageBox.

Hm, so you are suggesting that it is possible nVidia has internal counters with a lower range? If that’s indeed the problem, I can work around it. An official statement from nVidia on whether this is true would be cool. I am just a bit confused, because it works with indices in system memory.

Thanks for being so brave to test it.
Jan.

In the DX capability viewer, the Nvidia drivers on a GF7800 report that they support a maximal vertex index of 1048575 (0xFFFFF in hex), so if you use a bigger index in OGL, the driver must do some magic with it, and from the results it seems that it fails to do that correctly.
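On the OpenGL side there is no hard cap exposed, only the recommended ranges for glDrawRangeElements. For comparison you can query them like this (just a fragment, assuming a current GL context):

GLint maxVerts = 0, maxIndices = 0;

/* these are only recommendations for glDrawRangeElements, not hard limits -
   exceeding them is supposed to work, just possibly more slowly */
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVerts);
glGetIntegerv(GL_MAX_ELEMENTS_INDICES,  &maxIndices);

printf("GL_MAX_ELEMENTS_VERTICES = %d\n", maxVerts);
printf("GL_MAX_ELEMENTS_INDICES  = %d\n", maxIndices);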

Well, 260K quads = 4*260K vertices = 1040000 vertices. A bit more and it crashes.
That absolutely matches my observations.

I still don’t get why it should work with indices in system memory, but well, what would the world be without mysteries?

Thank you Komat!
Jan.

still, never mind eh?

Jan,

we’ll take a look. Thanks for the test app, that helps a lot getting to the bottom of this!

Barthold
NVIDIA

Ah, a general question - is it a good idea to store indices in graphics card memory? I’m quite sure someone on the forums advised me not to do it.

Yes, it is a good idea to store indices in graphics card memory. If the graphics card does not directly support this (GeForce 3/4?), then the GL driver can stick them in system memory for you, so it shouldn’t require any effort on your part.
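For reference, a minimal sketch of what that looks like (the variable names are made up; GL 1.5 entry points, use the ARB-suffixed ones if you only have ARB_vertex_buffer_object):

GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             numIndices * sizeof(GLuint), indices, GL_STATIC_DRAW);

/* with an element array buffer bound, the last argument of glDrawElements
   is a byte offset into that buffer, not a pointer to client memory */
glDrawElements(GL_QUADS, numIndices, GL_UNSIGNED_INT, (const GLvoid*)0);

Where the driver actually places the index data (video or system memory) is then up to it.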

@Barthold: Great, thank you. If you need the full source for the test app, I can trim it down a bit and reduce it to a stand-alone GLUT application. Just let me know whether that would help you.

@knackered: I don’t get it, what do you mean?

Jan.

sorry, I was just bored, forget I said anything.

Was there ever any resolution to this one? I seem to be bumping up against a similar issue – crashes GeForce, but survives Radeon. Here are the known data points in my case:
GeForce 7300gs, Intel Express 965, Mesa3D linux: crash
Radeon x1600, Quadro, FireGL: works
I’m putting both the vertices into a GL_ARRAY_BUFFER_ARB and the indices into a GL_ELEMENT_ARRAY_BUFFER.

glExpert reports nothing useful.
gDebugger comes up clean.
The crash leaves no useful stack.

I’m concentrating on GeForce for the moment, since that’s what is currently on my desktop. Driver is dated 12/5/2007. Driver version is reported as 6.14.11.6921 (is that right? I don’t remember NV driver versions being formatted like that).

Hi there

I don’t have the whole thread in mind anymore, but I assume I mentioned that I now partition my vertex arrays into chunks, each containing < 2^16 vertices. That solved the problem on all the nVidia hardware I tested it on (GeForce 7 and 8, though the 8 never had the problem in the first place).

Whether more recent drivers solved the issue, I don’t know. I never got any additional feedback from nVidia.

Since using vertex arrays with < 2^16 vertices gives much better performance anyway, I didn’t mind the bug anymore.
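For what it’s worth, the idea is roughly this (the Chunk struct and names are invented for the sketch, the real app is organised differently, but each chunk stays below 2^16 vertices so 16-bit indices suffice):

#define MAX_CHUNK_VERTICES 65535   /* < 2^16 */

typedef struct {
    GLuint  vbo;          /* GL_ARRAY_BUFFER with <= MAX_CHUNK_VERTICES vertices */
    GLuint  ibo;          /* GL_ELEMENT_ARRAY_BUFFER with 16-bit indices */
    GLsizei indexCount;
} Chunk;

/* assumes GL_VERTEX_ARRAY is enabled and the vertex format is plain XYZ floats */
void drawChunks(const Chunk* chunks, int count)
{
    for (int i = 0; i < count; ++i) {
        glBindBuffer(GL_ARRAY_BUFFER, chunks[i].vbo);
        glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, chunks[i].ibo);
        glDrawRangeElements(GL_QUADS, 0, MAX_CHUNK_VERTICES - 1,
                            chunks[i].indexCount, GL_UNSIGNED_SHORT,
                            (const GLvoid*)0);
    }
}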

Jan.

You are not using the nVidia drivers, but the Mesa3D software implementation. Make sure your graphics card drivers are installed correctly.

Using the Mesa3D software implementation was on purpose. We wanted an implementation we could debug into, to try to work out what was going on.

It’s possible that this is a different bug from the one that Jan originally posted about. In my case it doesn’t need to be big data.

Now it’s starting to look like multiple different bugs for the separate platforms. On each of the 3 crashing platforms, slightly different actions trigger the crashes. I believe I’ve identified what is causing the Mesa3D crash – it’s a buffer overrun (our fault). In the case of the Intel and GeForce crashes, though, I am triggering it with code that I know should not be causing buffer overruns.