Hi. (Sorry this is so long.)
I’m new to OpenGL, and I’m seeking advice on choosing the best methods for doing everything. First of all, I plan on using C# and GL 3.0+ (even if I have to P/Invoke the bindings myself before Tao/OpenTK does it…). I figure that once I’m finished with my project, it will be reasonable to require the people using it to have a video card / driver that supports GL3. So if anyone could tell me: what design choices would be best to base my architecture on, so that it would be easy to migrate to GL3 and take advantage of it over the current “build a wrapper around the state machine” approach? I’ve looked at many tutorials that teach you how to code something in GL 1.1, and some in 2.1, but unfortunately most tutorials focus on making the code easy to understand rather than showing you the most efficient way of doing something, and I couldn’t find much in the way of tutorials for OpenGL 3…
I am probably going to be drawing a lot of triangle strips in real time with dynamically generated vertices: terrain systems, particle systems, probably procedurally generated animations, ragdoll physics, rigid-body physics, etc. I’m probably going to be updating some of this information from a server in real time. My point being that I’m going to be dealing with a lot of dynamic influences; I’m not going to be rendering much in the way of static meshes…
Are these wise choices?
VBO vs. Indexed Array: I heard GL3 is going to be based more on VBOs than plain old indexed vertex arrays, but I also heard that VBOs are slower to write to than arrays, so are they the best choice for very dynamic meshes? I’ll explain more about the format I’m going to be rendering later… I heard immediate mode is as slow as hell, and will probably be more of a legacy thing than anything else in GL3…
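For what it's worth, VBOs with a streaming usage hint (GL_STREAM_DRAW) and a fresh glBufferData call each frame are the usual answer for fully dynamic meshes. The GL calls themselves are omitted here; this is just a minimal sketch (in Python, for brevity) of the CPU-side step of packing interleaved vertex data into one contiguous buffer before uploading. `pack_vertices` and the (x, y, z, u, v) layout are made-up assumptions, not any library's API:

```python
from array import array

def pack_vertices(verts):
    """Flatten a list of (x, y, z, u, v) tuples into a contiguous
    float32 byte buffer.  Each frame you would hand a buffer like
    this to glBufferData with GL_STREAM_DRAW; re-specifying the
    store each frame lets the driver avoid stalling on draws that
    are still using the previous frame's data."""
    buf = array('f')            # 'f' = 32-bit floats, matching GL_FLOAT
    for v in verts:
        buf.extend(v)
    return buf.tobytes()

frame = [(0.0, 0.0, 0.0, 0.0, 0.0),
         (1.0, 0.0, 0.0, 1.0, 0.0),
         (1.0, 1.0, 0.0, 1.0, 1.0)]
data = pack_vertices(frame)
# 3 vertices * 5 floats * 4 bytes = 60 bytes
```

The point of interleaving is that one upload per frame replaces many small writes, which is where the "VBOs are slow to write to" complaint usually comes from.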
Shaders: I’m also going to be taking a very shader-oriented approach, but using as few “if” statements as possible and no loops. If I need those, I can write my program to dynamically generate source code for new, more specialized static shaders on the fly, hopefully increasing the performance of the pipeline.
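That dynamic-generation idea is essentially shader specialization via string templating: resolve the branch at generation time and compile a variant per combination of features. A minimal sketch (the GLSL body and the `use_fog` feature flag are placeholders I made up, not your actual shader):

```python
def make_fragment_shader(use_fog):
    """Emit a specialized GLSL fragment shader: the fog branch is
    decided here, at source-generation time, so the compiled shader
    contains no `if` at all."""
    fog_code = "color = mix(fogColor, color, fogFactor);" if use_fog else ""
    return f"""#version 130
uniform vec3 fogColor;
uniform float fogFactor;
void main() {{
    vec3 color = vec3(1.0);
    {fog_code}
    gl_FragColor = vec4(color, 1.0);
}}"""

print(make_fragment_shader(True))
```

You would cache the compiled program per flag combination, since compiling shaders mid-frame is expensive.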
W-Buffering vs. Z-Buffering: I’m somewhat confused about this, but from what I can make of it, W-buffering gives better depth precision than Z-buffering, yet almost everyone uses Z-buffering because it’s more widely supported…
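The precision complaint about Z-buffering is easy to see numerically: post-projection depth is hyperbolic in eye-space distance, so most of the depth range is spent very close to the near plane. A quick check, using the standard OpenGL perspective depth formula (the near/far values 0.1 and 1000 are just example numbers):

```python
def ndc_depth(d, n, f):
    """OpenGL NDC depth (-1..1) for a point at positive eye-space
    distance d, given near plane n and far plane f, from the
    standard perspective projection.  Hyperbolic in d: precision
    piles up near the near plane, which is the usual argument for
    W-buffering (linear in eye-space distance)."""
    return (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)

n, f = 0.1, 1000.0
# More than half the depth range is spent on the first metre,
# while the back half of the scene gets almost none of it:
print(ndc_depth(1.0, n, f))    # ~0.8002
print(ndc_depth(500.0, n, f))  # ~0.9998
```

In practice the common mitigation with a Z-buffer is simply pushing the near plane as far out as the scene allows.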
Quaternion vs. Matrix: From what I’ve read, quaternions are the best fit for what I’m going to be doing: skeletal animation as well as physics simulation, or anywhere you’re going to have multiple rotation influences on the same vertex. What I’m not sure of is whether I can use only quaternions. I think I need a projection matrix in order to make distant objects appear distant, but do I need a matrix for anything else? Can I use a quaternion instead of the projection matrix? Most of the geometry on screen is probably going to be under the influence of quaternion rotation, so would that be faster, or can you even use a quaternion for the projection at all? What would be the most efficient way of rendering this stuff?
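To the "quaternions only?" part: a quaternion encodes exactly a rotation, nothing else, so it cannot replace the projection matrix (perspective needs the homogeneous divide that a 4x4 matrix sets up) nor translation or scale. The usual split is: compose and blend rotations as quaternions, then convert to a matrix (or rotate vectors directly) at the end. A naive sketch of the core operations, with no normalization or error handling:

```python
import math

def quat_axis_angle(axis, angle):
    # Unit quaternion (w, x, y, z) for a rotation of `angle` radians
    # about `axis` (need not be pre-normalized).
    ax, ay, az = axis
    s = math.sin(angle / 2.0) / math.sqrt(ax*ax + ay*ay + az*az)
    return (math.cos(angle / 2.0), ax*s, ay*s, az*s)

def quat_mul(a, b):
    # Hamilton product: composing two rotations (b applied first).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q:  q * (0, v) * conj(q)
    w, x, y, z = q
    p = quat_mul(quat_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
    return p[1:]

q = quat_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
print(quat_rotate(q, (1.0, 0.0, 0.0)))  # ~(0, 1, 0)
```

Converting the accumulated quaternion to a rotation matrix once per object per frame is cheap; the projection matrix stays a matrix regardless.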
I think I’m going to build just about everything out of geometry made from a grid. I’m going to treat meshes a lot like textures, in that each mesh has (2 to the power of X) times (2 to the power of Y) vertices in a 2D grid formation. I’m going to be able to use some sort of N-patch algorithm to dynamically generate new detail in the geometry, while also being able to create what you might call a geometry mipmap: a separate index array that I can feed to the physics engine, or use for LOD, without actually having to store a separate version of the mesh in memory.
Here is what the mesh would look like in an array:
[0,1,2,3] +[4,5,6,7,8,9,10,11,12,13,14,15] +[16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
This is what it would look like in geo-mipmaps based on the above array:
Quasi Indexed Array A: [0, 01, 02, 03]
Quasi Indexed Array B: [0,04,01,07, 05,06,08,09, 02,10,03,13, 11,12,14,15]
Quasi Indexed Array C: [0,16,04,19,01,22,07,25, 17,18,20,21,23,24,26,27, 05,28,06,31,08,34,09,37, 29,30,32,33,35,36,38,39, 02,40,10,43,03,46,13,49, 41,42,44,45,47,48,50,51, 11,52,12,55,14,58,15,61, 53,54,56,57,59,60,62,63]
This is more of a pseudo-representation of the mesh; it’s going to be drawn with this sort of grid pattern, rendered the way a terrain is usually rendered:
|/|/|/|/|
|/|/|/|/|
|/|/|/|/|
|/|/|/|/|
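The "one vertex array, several index arrays" idea above is standard geomipmapping. A sketch of generating one LOD's index array from a shared row-major vertex grid; note this uses plain row-major ordering rather than the hierarchical ordering of the quasi-indexed arrays, but the memory-saving principle (coarser levels just stride over the same vertices) is the same:

```python
def lod_indices(size, step):
    """Triangle indices for one LOD of a (size x size)-vertex grid
    stored row-major in a single shared vertex array.  step = 1 is
    full detail; step = 2, 4, ... skip vertices, reusing the same
    vertex data for coarser levels, so no separate copy of the mesh
    is ever stored."""
    idx = []
    for y in range(0, size - step, step):
        for x in range(0, size - step, step):
            a = y * size + x          # top-left of this quad
            b = a + step              # top-right
            c = (y + step) * size + x # bottom-left
            d = c + step              # bottom-right
            idx += [a, b, c,  b, d, c]   # two triangles per quad
    return idx

# 8x8 grid: step=1 gives 7*7*2 = 98 triangles, step=2 gives 3*3*2 = 18,
# all drawn from the same vertex buffer.
```

A coarse level like `lod_indices(8, 4)` is also exactly what you could hand to a physics engine as a cheaper collision mesh.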
(I’m sorry if this next part is confusing; if you don’t get what I’m talking about, say so and I’ll draw a diagram to make things clear. It’s late, so I can’t explain well…) I’m familiar with 3D modeling, so I have a general understanding of how texture UV mapping works. What I don’t know: let’s say I want to create a cylinder with my example “Quasi Indexed Array B”. I start by drawing a triangle using vertices 0, 4, 5, then 4, 6, 5, and so on until I come to the end of the row. Then I want to draw a triangle from vertices 7, 0, 9, but if I do that, it will use the texture coordinates from vertex 0, so that triangle will have most of the top right of the texture on it, and it will also be inverted. Would I have to have separate texture coordinates for vertex 0 if I want the cylinder to wrap around it? What I would want is for vertex 0 in triangle 0, 4, 5 to have the texture coords 0,0, and then in triangle 7, 0, 9 to have the texture coords 0,1… Maybe I can figure this out later…
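You've spotted it: since a vertex carries exactly one set of UVs, the standard fix is to duplicate the seam column, giving the copies identical positions but different texture coordinates. A sketch of building such a UV grid (the names and the (u, v) tuple layout are my own illustration, not any particular API):

```python
def cylinder_uv_grid(cols, rows):
    """(u, v) grid for a cylinder mesh: the seam column is stored
    twice, once with u = 0 and once with u = 1, so the texture wraps
    cleanly instead of reusing the first vertex's coordinates and
    smearing the whole texture across the last triangle pair.
    Positions for the duplicated column would be identical; only
    the UVs differ."""
    verts = []
    for y in range(rows):
        for x in range(cols + 1):       # +1: the duplicated seam column
            verts.append((x / cols, y / (rows - 1)))
    return verts

g = cylinder_uv_grid(4, 2)
# First and last vertex of a row share a position but not u:
print(g[0], g[4])   # (0.0, 0.0) (1.0, 0.0)
```

The cost is one extra column of vertices per ring, which is negligible next to re-texturing artifacts at the seam.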
Anyway, thank you very much for reading all this, and hopefully you have some answers.