Is this a driver issue?

Environment: Win2000, Service Pack 2; 1.33 GHz Athlon with a gig of RAM; GeForce 3 with 28.32 drivers; Visual C++ 6.0 with whatever their latest service pack was…

I just noticed this problem with my simulation engine: a release build runs fine; a debug build run inside the debugger runs fine; but a debug build run outside the debugger crashes inside the OGL driver.

What is strange is that I rendered this same scene completely fine just a “moment” before. Now I’ve performed some work that I want a progress message for, so I render the same scene with the progress message added to the “console” text I draw over the OGL scene. But this render crashes inside the OGL driver, trying to access memory that looks like one of my geometry addresses shifted right 8 bits! And I’m not even up to the portion of the logic that renders the console text yet.

So strange… at this point I’ve only loaded 2 geometries and I’m in the process of loading my 3rd. That’s what the progress message is for: just telling me where I am in the load logic while parsing geometry 3. Geometry 3 isn’t affecting the render pipeline yet; I don’t add it to the renderable list until it is completely loaded. I’ve already rendered the scene with the first two geometries 3 times for other status messages. Just this time around, I get a “memory could not be read” crash at address 0x09f5f000. The bizarro part that has me questioning the driver is that when I examine what memory I’m getting for my geometry data, I have address ranges like:

0x9f412b8 to 0x9f4a2b8, <- first geometry
0x4f095f0 to 0x4f0c5f0,
0x9f412b8 to 0x9f4a2b8, <- 2nd geometry
0x4f095f0 to 0x4f0c5f0,
0x9f50420 to 0x9f56420,

And it is after submitting the 1st geometry’s vertex & color array pointers that I get my crash inside glDrawArrays() trying to access memory at 0x09f5f000…

That sure looks like an address that would be inside my vert array if the address were shifted 8 bits… which of course makes no sense…

But the application runs fine in release and debug (inside the debugger)…

Does this sound like a driver issue? I’m not seeing how I can track this any further…

Originally posted by bsenftner:
[b]
…when I examine what memory I’m getting for my geometry data, I have address ranges like:

0x9f412b8 to 0x9f4a2b8, <- first geometry
0x4f095f0 to 0x4f0c5f0,
0x9f412b8 to 0x9f4a2b8, <- 2nd geometry
0x4f095f0 to 0x4f0c5f0,
0x9f50420 to 0x9f56420,

And it is after submitting the 1st geometry’s vertex & color array pointers that I get my crash inside glDrawArrays() trying to access memory at 0x09f5f000…

That sure looks like an address that would be inside my vert array if the address were shifted 8 bits… which of course makes no sense…
[/b]

Your last pointer of the second batch spans from 0x9f50420 to 0x9f56420. Address 0x09f5f000 is the start of a page not far past that last buffer (in general, a memory page is 4 KB and begins on a 4 KB-aligned address). The error message tells you that the driver is accessing an invalid page.
I think your parameters to the gl*Pointer call that uses that buffer (stride, size, etc.) are wrong, making the GL driver read into uninitialised memory areas (unmapped pages). Any of your gl*Pointer calls could be the one going wrong, so try reviewing the parameters of each one you make. It could even be a glDrawElements call.
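For illustration (a made-up sketch, not your code), this is the kind of mismatch I mean: the draw call consumes more vertices than the array holds, so the driver walks off the end of the buffer onto pages that may not be mapped:

[code]
/* Hypothetical sketch: the count handed to glDrawArrays exceeds what
   the array actually holds, so the driver reads past the buffer. */
GLfloat verts[300 * 3];                 /* room for 300 xyz vertices */
/* ... fill verts ... */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glDrawArrays(GL_TRIANGLES, 0, 400);     /* BUG: 400 > 300; the extra
                                           reads can land on an
                                           unmapped page and crash */
[/code]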

The reason it works within the debugger is that the debugger may allocate some memory in that page, making the address valid; then you won’t notice the problem except as wrong rendering (and even that may not be obvious).

Other than that, do you use compiled vertex arrays? Have you tried calling glFinish after each glDrawElements call?
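The point of the glFinish is that it forces the driver to consume the client arrays right away, so a bad pointer faults at the draw call that submitted it rather than at some later flush. A minimal sketch (indexCount and indices are placeholders):

[code]
/* Debugging aid: make each draw complete immediately, so a bad array
   pointer faults at the call that submitted it. */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);
glFinish();  /* blocks until the driver has finished reading the arrays */
[/code]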

I finally fixed it, after quite some time tracking down what the hell could be going on. I’m posting this here in the hope that it helps someone else find a similar problem in their own code.

The problem was that GL_TEXTURE_COORD_ARRAY was enabled during the rendering of an object that had no textures and no texture coords, so the GL driver was attempting to fetch UVs where there were none.

What’s got me scratching my head is that at the point in the application where the crash occurs, my own code has never enabled GL_TEXTURE_COORD_ARRAY… so something, such as my 3rd-party font system, must be leaving it dangling.
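The defensive pattern that guards against this looks roughly like the following sketch (verts, colors and vertCount are placeholder names): before each draw, explicitly disable every client array you don’t supply a pointer for, instead of trusting whatever state a library left behind.

[code]
/* Defensive sketch: pin down client state before drawing untextured
   geometry, instead of trusting whatever a library left enabled. */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY); /* no UVs on this object */
glDisableClientState(GL_NORMAL_ARRAY);        /* no normals either */

glVertexPointer(3, GL_FLOAT, 0, verts);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);
glDrawArrays(GL_TRIANGLES, 0, vertCount);
[/code]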

Anyway, now it works.

I have code that would have tracked this down quite quickly: it logs the current OpenGL state. It’s only valid up to OpenGL 1.1 plus a few extensions; mail me if you want it: sexybastardREMOVE@xtra.co.nz
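The gist is simple enough to sketch (this isn’t the actual code, just the idea): query each client-array flag with glIsEnabled and print it, so a dangling enable like the one above shows up immediately.

[code]
/* Minimal sketch of an OpenGL 1.1 client-state logger: dump which
   vertex arrays are currently enabled. Requires a current GL context;
   on Windows, include <windows.h> before <GL/gl.h>. */
#include <stdio.h>
#include <GL/gl.h>

static void logClientState(const char *where)
{
    printf("[%s] VERTEX=%d COLOR=%d NORMAL=%d TEXCOORD=%d\n", where,
           glIsEnabled(GL_VERTEX_ARRAY),
           glIsEnabled(GL_COLOR_ARRAY),
           glIsEnabled(GL_NORMAL_ARRAY),
           glIsEnabled(GL_TEXTURE_COORD_ARRAY));
}

/* Usage, bracketing the suspect library call:
       logClientState("before font render");
       renderFontConsole();                    // hypothetical call
       logClientState("after font render");  */
[/code]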

Yes, a smart version of glTrace would be interesting. Something that could analyse your OpenGL commands and tell you what stupid thing you did (at run time, or perhaps as a kind of compiler for your source code).

V-man