Previously working code is now giving me issues after I tried to simplify the API I’m using. I’m not sure what I did wrong; could someone see if they can spot what’s causing these messages to pop up:
Edit 2: Spotted an oversight; now it crashes when attempting to compile a shader, presumably the 1st one, since nothing about the shader code has changed since I temporarily stopped learning OpenGL to learn zlib decompression and PNG de-filtering (the latter currently fails on non-8-bit colour depths).
GL_INVALID_VALUE is generated if index is greater than or equal to GL_MAX_VERTEX_ATTRIBS. GL_INVALID_VALUE is generated if size is not 1, 2, 3, 4 or (for glVertexAttribPointer), GL_BGRA. GL_INVALID_VALUE is generated if stride is negative.
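The three conditions quoted above can be sanity-checked in plain C before the call ever reaches the driver. A minimal sketch, with no GL dependency; `attrib_pointer_args_ok` and `FAKE_GL_BGRA` are names I made up for illustration (in real code compare against the actual `GL_BGRA` constant and a value queried via `glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, ...)`):

```c
#include <stdbool.h>

/* Stand-in for GL_BGRA so this compiles without GL headers. */
#define FAKE_GL_BGRA 0x80E1

/* Mirrors the glVertexAttribPointer error conditions quoted above:
   index must be < GL_MAX_VERTEX_ATTRIBS, size must be 1..4 (or GL_BGRA),
   and stride must not be negative. */
bool attrib_pointer_args_ok(unsigned index, unsigned max_vertex_attribs,
                            int size, long stride)
{
    if (index >= max_vertex_attribs)
        return false;
    if (!((size >= 1 && size <= 4) || size == FAKE_GL_BGRA))
        return false;
    if (stride < 0)
        return false;
    return true;
}
```

Dropping an assert like this in front of each attribute-setup call would tell you which of the three conditions you’re tripping, rather than just that one of them fired.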
Didn’t occur to me since the same code had previously worked, so I figure something I did in the creation path buggered things up. I had made a major change to the memory management system so that I could guarantee all memory is released before the app exits, so I’m assuming I mishandled the pairing of that with the existing code. I just figured maybe someone here could spot the 1st GL call where things were actually screwing up, as opposed to these messages, which occurred later like a ripple after the fact.
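One common way to find the 1st call that actually goes wrong, rather than the later ripple, is to wrap every GL call in a macro that drains the error queue immediately and reports the file and line. A sketch of the pattern; `gl_get_error` and `fake_gl_call` here are stand-ins I wrote so the snippet compiles without a GL context (in real code use `glGetError()` directly):

```c
#include <stdio.h>

/* Stand-in for glGetError() so this sketch is self-contained;
   real code would call glGetError() and use the GL error constants. */
static unsigned next_error = 0;
static unsigned gl_get_error(void) {
    unsigned e = next_error;
    next_error = 0;
    return e;
}

/* Count of errors caught, so the effect of the macro is observable. */
static int errors_reported = 0;

/* Wrap every GL call: drain the error queue right after it, so the
   report points at the offending call instead of a later innocent one. */
#define GL_CHECK(call)                                                  \
    do {                                                                \
        (call);                                                         \
        unsigned err;                                                   \
        while ((err = gl_get_error()) != 0) {                           \
            fprintf(stderr, "GL error 0x%X after '%s' at %s:%d\n",      \
                    err, #call, __FILE__, __LINE__);                    \
            ++errors_reported;                                          \
        }                                                               \
    } while (0)

/* Fake GL call that raises an error, purely for demonstration. */
static void fake_gl_call(void) { next_error = 0x0501; /* GL_INVALID_VALUE */ }
```

If your context is GL 4.3+ (or has KHR_debug), registering a callback with `glDebugMessageCallback` achieves the same thing without wrapping every call, and usually with more descriptive messages.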
Where did you get Rust from? I’ve been using C only from the start. Anyway, I’ll give that a try later; 1st I’ll fix a crash that is occurring now that I fixed an oversight in the new memory management system.
Narrowed down the source of the crash after some debugging: it turns out a couple of variables that were supposed to be updated were not, resulting in the same object being assigned all over the place. I have to change my sleep cycle for a new job, so I’ll be going to bed now. In case someone decides to look into it while I’m sleeping, I’ll upload the latest changes and edit the link into this comment; either way this comment will help me remember where to look tomorrow.
Edit: Here’s the link. I fixed another oversight but it didn’t resolve the crash, likely the same problem. What I did was inform Series->Using->Block->used of how many bytes are used, so that block expansion doesn’t wipe the states out to 0.
Edit 2: Turned out a variable I reused was used later expecting its original value, and I had forgotten about that. Adding a new variable called ‘add’ and using that instead resolved the problem; the variable was ‘num’, the function ‘ObtainBlock’.
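For anyone following along, that bug class looks roughly like this. The real ObtainBlock is in the linked code; this is a made-up reduction of mine, not the actual function:

```c
#include <stddef.h>

/* Reduced illustration of the bug: `num` is clobbered by intermediate
   arithmetic, but later code still expects its *original* value. */
size_t obtain_block_buggy(size_t num, size_t used) {
    num += used;          /* `num` overwritten here...                 */
    size_t total = num;
    return total + num;   /* ...but this use expected the original    */
}

/* The fix described in the edit above: do the arithmetic in a new
   variable (`add`) so `num` keeps its original value throughout. */
size_t obtain_block_fixed(size_t num, size_t used) {
    size_t add = num + used;  /* new variable, `num` stays intact */
    size_t total = add;
    return total + num;       /* later use of `num` now sees the original */
}
```

The two versions diverge silently, which is what makes this kind of reuse so hard to spot after the fact.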
Having re-read my comment after sleeping, I realise I forgot to actually mention the members that weren’t updated correctly: they were BLOCKS->Block.used & BLOCK->Using.Used.abs.
Edit 3: Now that everything CPU-side works as expected, I’m having an empty-viewbox issue, and since the GFX card is not telling me anything I don’t know where to look right now. The current code is in the link I added above this edit; I welcome any help on the matter.
Edit 4: Figured out why I was getting an empty viewbox: I somehow managed to delete the code that actually adds the generated triangle data to the different “Models”. The models are given strips (easier to say than meshes and, frankly, more intuitive in regards to where it gets used); each triangle occupies the 1st and only strip of its model. I’m doing it this way to make it easier to wrap my head around how to connect the rendering to the loaded models and scenes.

I’m also choosing to allow each scene to have sub-scenes, which can have further sub-scenes. This way I can split huge maps into little scene files to quickly load & unload whenever the viewpoint changes, with only local scenes staying in memory for the sake of processing foes, NPCs and collisions.
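The layout described above — models holding strips, scenes recursively owning sub-scenes — could be sketched roughly like this. Every name and field here is my guess at a shape that fits the description, not the actual structs from the linked code:

```c
#include <stddef.h>

typedef struct Strip {
    float  *verts;       /* interleaved triangle vertex data */
    size_t  vert_count;
} Strip;

typedef struct Model {
    Strip  *strips;      /* each triangle lives in strip 0 for now */
    size_t  strip_count;
} Model;

/* A scene owns models and can recursively own sub-scenes, so a huge map
   splits into small scene files that load/unload as the viewpoint moves,
   with only the local scenes kept resident for foes/NPCs/collisions. */
typedef struct Scene {
    Model         *models;
    size_t         model_count;
    struct Scene **subs;       /* sub-scenes, which may have their own subs */
    size_t         sub_count;
    int            resident;   /* nonzero while kept in memory (local scene) */
} Scene;

/* Walk the whole tree and count scenes currently resident in memory. */
size_t count_resident(const Scene *s) {
    size_t n = s->resident ? 1 : 0;
    for (size_t i = 0; i < s->sub_count; ++i)
        n += count_resident(s->subs[i]);
    return n;
}
```

A recursive walk like `count_resident` is also the natural place to hang the load/unload decision when the viewpoint crosses into a different sub-scene.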