State management, engine design

This is more of an all-purpose engine design question.

What design decisions have you made for your latest engine? How do you store, manage, and render your models?

I need to deal with:
-object hierarchies
-OpenGL lighting and the projected texture method
-transparency (ye olde problem)
-transfer modes: immediate, display list, nv_specific, ati_specific
-reflection models
-basic stencil shadows, plus …

and I am planning to wrap it all in one generic class and derive NV and ATI versions from it.

I won’t be doing anything physics-related (except lighting) in this project, but that will come up much later.

Any thoughts?

V-man

I’m working on a portal engine. Each sector is split up into cubes of fixed size, and polygons are split so that each polygon lies in exactly one cube. For lighting I only light the polygons in cubes touched by each light's sphere of influence. It’s also quick to do frustum culling this way. It shouldn’t be too hard to do a rough sort by depth either, by sorting the cubes by depth, but I haven’t done that yet. It’s also easy to use glDrawElements + VAO/VAR for all geometry this way. So far the only drawback is that I don’t really know how to handle transparency.
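A minimal sketch of that light-vs-cube test (a standard sphere-vs-axis-aligned-box distance check; the struct names here are just placeholders):

struct Cube  { float min[3], max[3]; };           // axis-aligned, fixed size
struct Light { float pos[3]; float radius; };     // sphere of influence

// Accumulate the squared distance from the sphere centre to the
// closest point on the box; the light touches the cube if that
// distance is within the light's radius.
bool lightTouchesCube(const Light& l, const Cube& c)
{
    float d2 = 0.0f;
    for (int i = 0; i < 3; ++i) {
        if (l.pos[i] < c.min[i]) { float d = c.min[i] - l.pos[i]; d2 += d * d; }
        if (l.pos[i] > c.max[i]) { float d = l.pos[i] - c.max[i]; d2 += d * d; }
    }
    return d2 <= l.radius * l.radius;
}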

You need to totally separate shading from geometry. This is very, very important (for flexibility, performance, etc.), and it solves the transparency issue as well.

>>-object hierarchies<<

(depends on what the engine is designed for; e.g. a space game is totally different from Doom 3)

>>-opengl lighting<<
I never use it.

>>-transparency (ye olde problem)<<

see above

>>-transfer modes: immediate, display list, nv_specific, ati_specific<<

In my engine every single piece of geometry (landscape, spaceship, the text in the HUD, even the skybox) gets drawn through the same function (which is about 2 lines long). This has obvious benefits WRT VAR, VAO, immediate mode, etc.

>>-reflection models
-basic stencil shadows, plus …<<

Designing an engine is a mammoth task. Basically:
-KISS
-do NOT optimize
-try to keep everything separate (e.g. to add motion blur to my engine I just have to add a couple of lines; the rest of the engine doesn't need to know about it)
-log everything
-give most things a unique ID

Here are the 4 main parts (of a game):

*scene // camera, particle systems, players, etc.
*os // window, input, sound, etc.
*game // main loop? options, console
*rendering_manager // textures, shaders, meshes, etc.

Sorry about the disjointed ramblings (but that's how I am).

I think I might have further ramblings here: http://uk.geocities.com/sloppyturds/kea/kea.html

I am new to design, and I had to modify my engine a lot.

But anyway, I think generic classes are a good deal.
Use hierarchy only for things that are similar, I think anyway.

Currently I’m working on a 3D engine too…
I still haven’t finished the main core part of it, but what I’m planning to do is to have a set of all visible objects in some kind of hierarchy…

Every object has a reference to its model, which can be shared by many objects… And a model is something that contains its basic structures and is able to read itself in from files and display itself…

This solution has some disadvantages too, since in my engine there’s always going to be one root object (like maps in Quake…) whose model contains a BSP tree which contains references to other static or dynamic objects. This breaks the idea of models shared between many objects a bit, but I can’t see any better solution…
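Roughly, the object/model split I mean looks like this (just a sketch; the names are placeholders, and the per-object state stays outside the model so one model can back many objects):

#include <GL/gl.h>

struct Model {
    // shared data: geometry, textures... loaded once from file
    void load(const char* filename);   // declaration only, in this sketch
    void draw() const;                 // issues the actual GL calls
};

struct Object {
    Model* model;                      // possibly shared with other objects
    float  matrix[16];                 // per-object transform
    void draw() const {
        glPushMatrix();
        glMultMatrixf(matrix);         // place this instance
        model->draw();                 // same model, different placement
        glPopMatrix();
    }
};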

I’m still wondering whether it’s a good idea to make every object display itself, or maybe to make some global renderer which gathers all the data from the objects and passes it through the same rendering pipeline…

sorry for such a short post.

My “engine” consists of four programs. The first one is my level editor.
The second one is a BSP compiler that takes the level data and creates a leafy BSP tree from it. This program first checks which polys are never visible and culls them. It also separates opaque and transparent polys. The opaque polys are stored in a leafy BSP tree, the transparent ones in a normal tree (which stores the data at the nodes).

The reason is: I can draw a leafy BSP tree with opaque geometry from back to front and it will work well. But a leafy BSP tree with transparent polys won't always look right, because within the leaves the polys are not sorted by depth.

My main program loads those two trees. It first draws the opaque objects from back to front with only z-writing enabled, no depth testing.
Then I draw the transparent objects, with full depth testing of course, from back to front.

Of course I use vertex arrays. Because I don't know the polys' drawing order in advance, I cache them: every time I “draw” a poly, I actually put it in a buffer, and when this buffer is full, I flush it (glDrawElements).
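Roughly like this (a simplified sketch of the idea; the buffer size and vertex format are arbitrary here, and the indices refer into one big vertex array set up once with glVertexPointer etc.):

#include <GL/gl.h>

#define MAX_CACHED_INDICES 4096
static GLuint cachedIndices[MAX_CACHED_INDICES];
static int    numCached = 0;

// draw whatever is in the cache and empty it
static void flushCache(void)
{
    if (numCached == 0) return;
    glDrawElements(GL_TRIANGLES, numCached, GL_UNSIGNED_INT, cachedIndices);
    numCached = 0;
}

// "drawing" a poly really just appends its indices to the cache
static void cachePoly(const GLuint* indices, int count)
{
    if (numCached + count > MAX_CACHED_INDICES)
        flushCache();                        // buffer full: draw it
    for (int i = 0; i < count; ++i)
        cachedIndices[numCached++] = indices[i];
}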

I forgot the fourth program. It is an additional compiler that takes my leafy BSP tree (with the opaque geometry) and creates portals for it. I still want to make it create a PVS from the portals, so that I can use the PVS in my engine, but I haven't done that yet.

All this together is a lot to do, but it works quite well for me.

Jan.

>>to wrap it in 1 generic class, and derive nv and ati versions from it.<<

Just a short thought: do you know the strategy pattern? I'd clearly prefer it to subclassing. You can reuse the render strategy and gain a better separation between renderer, object state, and data.
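For example (just a sketch; the class names are illustrative):

struct Mesh;   // your mesh type

struct RenderStrategy {
    virtual ~RenderStrategy() {}
    virtual void drawMesh(const Mesh& m) = 0;   // vendor-specific transfer path
};

// one concrete strategy per path: NV_vertex_array_range,
// ATI vertex_array_object, plain vertex arrays, display lists...
struct NVStrategy : RenderStrategy {
    void drawMesh(const Mesh& m);   // VAR allocation + glDrawElements
};
struct ATIStrategy : RenderStrategy {
    void drawMesh(const Mesh& m);   // VAO path
};

struct Renderer {
    RenderStrategy* strategy;       // picked once at startup, swappable later
    void drawAll(const Mesh* meshes, int n) {
        for (int i = 0; i < n; ++i)
            strategy->drawMesh(meshes[i]);
    }
};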

>>example for #1, one thing I love is I go terrain->draw(), player->draw(), particle->draw(), hud->draw(), skybox->draw() and they all call the same function; 100% of the rendering is done in one line! Carmack managed ±99% in Quake 3. Sure, this ain't the most optimal way, but it adheres to rule #1, and it beats Carmack into the bargain (which is the unwritten law #0)<<

OK, this is an important issue.

terrain, player, particle, HUD, skybox and all the rest are different classes, with their own vertex arrays, textures, and other data. Correct?

When you call draw(), do they all use some generic strategy for rendering? What about Quake 3? I have noticed that when you launch it, it says

“render with single call to glDrawElements”

How is this possible? In his case, he has maybe 50 textures, with some 100 objects in his maps. He needs to bind each texture object, and do who knows what to get the multitexturing effects working. Not everything is multitextured, I believe; maybe some things are not textured at all. He has shaders set up for each object, I believe.

I also think that my old method of sorting for transparency isn't very good. I reserved a memory area for all transparent objects. My engine was made aware of what needs sorting, and it sorts by writing each poly into the new memory location.
How has BSP + PVS worked out for this?
The only kind of culling I'm planning on is object-frustum. The scenes can be anything from open space to indoors.

Got to run now!
V-man

I was thinking about how to implement objects and rendering as well. I'm using C, but I suppose the general idea should be the same.

First of all, regarding the transfer method: what we have to choose from, in essence, are vertex arrays and display lists. From what I have gathered from past threads etc., lists might be slightly(?) faster than arrays, but they also eat up more memory and seem to be quite implementation-dependent. Arrays, on the other hand, have the great advantages of simplicity (just a bunch of arrays), generality (they can be used for static and dynamic objects) and scalability (correct me if I'm wrong, but I think VAR and VAO are used with arrays, so it should be simple to support these extensions without introducing complexity in your geometry cache). I think these pros beat the speed advantage of display lists, but your opinions are welcome.

Now, supposing we opt for arrays, we could group vertex, texcoord, color, normal, etc. arrays into ‘surfaces’ and store these in a table. Then, assuming we also have a table of shaders, we can group a surface and a shader into something for which I haven't found a cool name yet (the naming part gives me the most trouble), but let's call them renderables for the time being. These can be rendered with a few lines of code, as you suggested. The trouble I have is with the rendering of surfaces. Both the glDrawElements and glDrawArrays commands are useful, so I can't just put the indices in the surface object (struct/class/whatever) and have a rendersurface() call and a rendersurfacearray(first,count) call. It simply doesn't feel right. The best thing would be to somehow detach the actual data from the indices used for rendering (as OpenGL does), but I'm not sure how best to implement that.
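Something like this is what I'm picturing (a sketch; Surface and Renderable are just working names, and it assumes the client states are already enabled):

#include <GL/gl.h>

struct Surface {            /* the actual data, stored once */
    float *verts;           /* xyz */
    float *texcoords;       /* st  */
    float *normals;
    int    numVerts;
};

struct Renderable {
    struct Surface *surf;
    int             shaderID;
    GLuint         *indices;   /* NULL means "draw the arrays in order" */
    int             first, count;
};

void renderSurface(const struct Renderable *r)
{
    glVertexPointer(3, GL_FLOAT, 0, r->surf->verts);
    glNormalPointer(GL_FLOAT, 0, r->surf->normals);
    glTexCoordPointer(2, GL_FLOAT, 0, r->surf->texcoords);
    if (r->indices)
        glDrawElements(GL_TRIANGLES, r->count, GL_UNSIGNED_INT, r->indices);
    else
        glDrawArrays(GL_TRIANGLES, r->first, r->count);
}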
Any suggestions?

V-man, “render with single call to glDrawElements” means all the drawing is done through the same function. Obviously each time you change a state (e.g. a texture) you will have to call glDrawElements again; it doesn't mean he only calls glDrawElements once in the whole loop.

>>terrain, player, particle, HUD, skybox and all the rest are different classes, with their own vertex arrays, textures, and other data. Correct?<<

Not really: all objects are treated exactly the same (no exceptions).

struct ShadedModel
{
    int   modelID;
    int  *shaderIDs;   // one shader ID per mesh
    Mesh *meshes;
    int   numMeshes;
    void  render();
};
// a model is a collection of meshes
// each mesh has a shader

Each object gets a ShadedModel, and it is unique to that object.

e.g.

struct Skybox
{
    ShadedModel sm;
    // ... skybox-specific stuff
};

struct Person
{
    ShadedModel sm;
    // ... person-specific stuff
};

so to draw them you go:
person.render();
skybox.render();
(though really I just mark the meshes to be drawn; later all the meshes are flushed at once)

here's the rendering function:

void ShadedModel::render()
{
    // is the bounding box on screen? if not, done
    if (!boundingBoxOnScreen())
        return;

    // loop through the meshes, telling each mesh's shader that
    // this mesh wants to be drawn this frame
    // (shaders[] is the global shader table)
    for (int i = 0; i < numMeshes; ++i)
        shaders[shaderIDs[i]].queue(&meshes[i]);
}
You might think this is stupid (+ it can be done much quicker), e.g. I know the text is on screen, so why do a bounding box test?
The reason: treat everything the same, make no exceptions. Very, very important.
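And the flush at the end of the frame (mentioned above) then looks something like this (a rough sketch; bindState() and drawMesh() are placeholder names):

struct Mesh;

struct Shader
{
    // textures, blend modes etc. for this shader live here
    Mesh **queued;              // meshes that asked to be drawn this frame
    int    numQueued;

    void bindState();           // set textures/blending once per shader
    void drawMesh(Mesh *m);     // glDrawElements etc.

    void flush()
    {
        bindState();
        for (int i = 0; i < numQueued; ++i)
            drawMesh(queued[i]);
        numQueued = 0;
    }
};

// end of frame: state changes happen once per shader, not once per object
void flushAllShaders(Shader *shaders, int numShaders)
{
    for (int i = 0; i < numShaders; ++i)
        shaders[i].flush();
}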

BTW, particle systems are a bit tricky (but the basic principle is the same).

Also, be prepared (+ don't be afraid) to rewrite your whole engine over and over again (I must have rewritten mine at least 15x in the last 5 years).
Also keep things separated, e.g. if you want shadows in your scene, simple: just include another file (the rest of the engine doesn't need to know that you've added shadows to the scene).

good luck

What I do is have generic objects, which have the usual attributes of size, position, colour, etc., plus a list of textures and a current-texture item (so you can flip through an object's texture list, selected from a global texture list, or even cycle them for animation with the usual animation controls).

I also have attributes which allow things like size, position, rotation, colour, etc. to be changed automatically: simply set one of them to be an increment, and the attribute will be incremented every frame. So it's easy to set an object into motion. (Of course, you then need some vars to say what to do when a limit is reached, e.g. to restrict motion or colour range, and then you need vars to define the ranges as well.) With this scheme it's easy to set up, e.g., a bouncing, rotating, colour-changing, texture-cycling cube!

Oh yes, this generic object has a “render function”, which says how it is to be drawn!
So at the flip of a frame, you can swap the render function to make, e.g., a cube become a sphere, etc.
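In outline it's something like this (a sketch; the names are made up):

typedef struct Object Object;

typedef struct
{
    float value, inc;   /* current value + per-frame increment */
    float lo, hi;       /* allowed range */
} Attrib;

struct Object
{
    Attrib pos[3], colour[4], size;
    void (*renderFunc)(Object *);   /* flip this to turn a cube into a sphere */
};

void updateAttrib(Attrib *a)
{
    a->value += a->inc;
    if (a->value > a->hi || a->value < a->lo)
        a->inc = -a->inc;           /* bounce back off the limit */
}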

Once this was in place, I needed a way to design “scenes” of objects, some auto-changing and some under “other” controls. Each scene was tied to a section of a music performance.

So I decided to design a “workbox”, into which the objects to be rendered were placed. The engine code then simply cycled through the objects in the workbox. An object could appear more than once (since it was only a pointer anyway!), so you could do a form of multipass (for objects). I also had “special” objects which could be placed in the workbox: “control” objects. With these you could change things like depth buffer on/off, lighting on/off, etc. at the appropriate point in the render cycle.
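The engine's cycle over the workbox then amounts to something like this (again a sketch, assuming the Object above grows an isControl flag and a controlFunc):

typedef struct
{
    Object **items;     /* pointers, so one object can appear twice */
    int      count;
} Workbox;

void cycleWorkbox(Workbox *wb)
{
    for (int i = 0; i < wb->count; ++i)
    {
        Object *o = wb->items[i];
        if (o->isControl)
            o->controlFunc(o);      /* e.g. glDisable(GL_DEPTH_TEST) */
        else
            o->renderFunc(o);
    }
}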

[The other inputs were from video tracking the motion of the performer on stage, with gesture-recognition output from a trained neural net, parameters derived from switches and accelerometers on the performer, and also various bits of real-time audio processing.]

The performer on stage (all wireless!) was then able to control all the graphics as they were being rendered, e.g. the origin point of a stream of “smoke” from the particle engine (did I say it had this too?), and the volume could change the colour! He also had control over the 3D position of the audio. Yes, 3D: the audience sat inside a full 3D ambisonic soundfield. So he could literally “throw” a note into the audience and then control how it moved around.

All real time, and ALL done using GLUT; no problems at all.

Mind you, the graphics were on a lowly high-end SGI O2, and the audio processing (analysis and generation/modification) was done on an 8-processor Origin 2000 with 4 GB of memory. The performer played soprano sax!

Anyway, I hope the ideas and notions of the system help. In the end it was written as an easy-to-use API, and the scenes for the music were then programmed using the API, from a storyboard designed by the composer and a graphic designer.

I was given 3 weeks programming time to design write and implement all this!

It did work, though I would change some things.

Oh yes, objects could have child objects too!
Children are the same as a parent object (so they can have children too: recurse!), so they have behaviours. However, when a child object is updated, the parent object's data is also used, so doing a “flock” is easy peasy! Functions exist to re-parent an object, or even to make a child a full-blown parent object, and therefore simply an object in the scene with no ties. (So peeling off a child object from a flock works easily!)
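The update is then a simple recursion; a sketch (worldPos, children, animateObject etc. are illustrative names, not the real ones):

/* update o using its parent's data, then recurse into its children */
void updateObject(Object *o, const Object *parent)
{
    animateObject(o);   /* increments, events, ... */
    for (int i = 0; i < 3; ++i)
        o->worldPos[i] = (parent ? parent->worldPos[i] : 0.0f)
                       + o->pos[i].value;
    for (int c = 0; c < o->numChildren; ++c)
        updateObject(o->children[c], o);   /* re-parenting = moving a pointer */
}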

Rob.

Bob Fletcher,

Not sure if I understood you well…
Did you implement the kind of rendering pipeline that lets you render, say, 10 objects with the same model (e.g. 10 houses), each having, say, 10 different textures, with ~10 texture switches instead of ~100?

…That is, you would first go through all the objects rendering the first pass (with as many textures as possible), then switch textures to those of the second pass, render the second pass, and so on…
(I'm ignoring other state changes like blending, etc., as they're quite insignificant compared to binding a texture.)

A solution like this would be very nice, although a bit messy to implement…
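Concretely, what I imagine is bucketing the frame's draw calls by texture, e.g. (a sketch; drawMesh() is a placeholder for whatever draws one mesh):

#include <algorithm>
#include <GL/gl.h>

struct Mesh;
void drawMesh(const Mesh *m);   // whatever draws one mesh

struct DrawCall { GLuint texture; const Mesh *mesh; };

static bool byTexture(const DrawCall &a, const DrawCall &b)
{
    return a.texture < b.texture;
}

// sort by texture, then bind each texture once:
// ~10 binds for 10 textures instead of ~100 for 100 object/texture pairs
void drawSortedByTexture(DrawCall *calls, int n)
{
    std::sort(calls, calls + n, byTexture);
    GLuint bound = 0;   // 0 = nothing bound yet
    for (int i = 0; i < n; ++i)
    {
        if (calls[i].texture != bound)
        {
            glBindTexture(GL_TEXTURE_2D, calls[i].texture);
            bound = calls[i].texture;
        }
        drawMesh(calls[i].mesh);
    }
}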

Hi,

To describe how my engine is architected: the foundation is a hierarchy of classes (I will simplify a lot for this explanation). Each level of the hierarchy has its own functionality, generic at the bottom and highly specialized at the top (OK, that's the principle of inheritance!). At the bottom of the hierarchy is the “Object” class, which can be read, written, and integrated into the hierarchy of objects which “is” the engine database. Each “object” can be moved, deleted, duplicated, etc. On this “Object” class are based the “Abstract Object” and “Physical Object” classes. Abstract objects are textures, animation curves, etc., and physical objects are everything that has a matrix and is really integrated into the space of the scene. I have detached arrays from meshes: a vertex array is an abstract object, but a mesh points to an array object.

The engine contains one pipeline object, which is composed of n arrays of object pointers: these arrays are the stages of the rendering pipeline. When rendering starts, all the objects in the database are called; at this call, each object adds its pointer to an array of the pipeline. After this step, the pipeline calls all the objects recorded in the arrays. The arrays are named after the stages they represent: VISIBILITY_STAGE, BINDING_STAGE, RENDERING_STAGE, POST_RENDERING_STAGE, etc. Each object, according to its base class, has a virtual method that is called at each stage. During the visibility stage, an object can add its pointer to the RENDERING stage if it is in the view frustum. So each object, depending on its function, can insert itself into the different stages of the pipeline. Some stages process objects in priority order, which makes it possible to render transparency in back-to-front order. Each object can be inserted more than once in each stage…

The advantage of such a structure is that the pipeline is really simple and doesn't know what objects it is processing: a light? a camera? a texture? a mesh? a BSP tree? All the engine does is call objects, which are polymorphic and which take care of the particularities of each pipeline stage (priority order, for example).
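In skeleton form, the idea is (simplified a lot; the names are approximate):

#include <vector>

class Pipeline;

enum Stage { VISIBILITY_STAGE, BINDING_STAGE, RENDERING_STAGE,
             POST_RENDERING_STAGE, NUM_STAGES };

struct Object
{
    virtual ~Object() {}
    // called for each stage the object registered in; from here it can
    // insert itself into a later stage (e.g. visibility -> rendering)
    virtual void process(Stage s, Pipeline &p) = 0;
};

class Pipeline
{
    std::vector<Object *> stage[NUM_STAGES];
public:
    void insert(Stage s, Object *o) { stage[s].push_back(o); }
    void run()
    {
        for (int s = 0; s < NUM_STAGES; ++s)
        {
            for (unsigned i = 0; i < stage[s].size(); ++i)
                stage[s][i]->process((Stage)s, *this);
            stage[s].clear();   // stages are rebuilt every frame
        }
    }
};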

Well… my engine is really more complicated, and the relationship of the objects with the pipeline is more optimized, but in any case this is useful and easy to extend. Bilateral references are a very useful mechanism: when an object is created, it can live by inserting itself into the processing pipelines.

Regards,

Gaby

PS: I don’t know if my contribution is helping you… :-/

Just to add: the “engine” allowed for a number of different pipelines. So in the main, most of the work was done using the “workbox” notion, but you could also switch to a “particle generator” pipeline, which would render the particles into the current scene, and a third method was to render a hierarchical structure of objects. You decide the order of the pipelines (and which ones)…

Of course, each object had a simple flag to say render or not, and update or not. So once the pipeline(s) were rendering the scene, you could still switch objects on and off. (How you decided to switch them on/off was up to you: for me it was an “artistic” decision, for others it might be a test to see whether the object's bounding box was on screen, etc.)

My app was quite well bounded, so the engine was not required to be totally generic. I always knew what was to be onscreen or not.

The hard part was designing the “object store” and writing the access methods for the object data.

All in C though (sorry C++ folks).

So, you had things like:
getObjectVisibility( id );
and
setObjectVisibility( id, status );

and used thus:

setObjectVisibility( id, !getObjectVisibility( id ) );

Of course, using different ids means you can flip one object's state based on another's very easily!

Mickey!

Not really. What I mean is that I have a generic texture store, and I can then assign a set of textures to objects that need texturing, and select textures from this list: either as the “current texture”, or by cycling through the list (changing every “n” frames, default == 1) and then either restarting or bouncing. It sort of looks like:

beginWork();
addObjectToWork( id1 );
addObjectToWork( id2 );


addObjectToWork( control1 ); /* e.g. depth off */
addObjectToWork( id1 );
endWork();

Object id1 and object id2 could, though, have the same render function (i.e. a house) but different texture lists!

In my context I only needed quite simple objects, some 2D and some 3D. Of course, movement was in 3D.

However, I see no reason why the scheme would not work by simply writing another render function which renders a more complex model. If such a model needed more careful placing and assignment of textures, then some new code would be needed to control this (definitely!).

For my application, it worked fine, and allowed complex scenes and graphics to be done easily.

Another feature of each object was an “event schedule list”, so you could, e.g., attach an event to an object to occur “n frames” later. All object manipulations were covered by an event code. Moreover, you could schedule an event (or a series of events!) on one object to control another. The generic scheduler also passed parameters, so you could, e.g., fire off an event on one object which passed its own colour data to another object, which was then rendered in the same colour as the first (and you can extend this notion to all the other object attributes). Of course, you could hide events on “invisible objects”! If (for some reason) we missed an event “time”, it was popped off the event stack for that object.
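The shape of it, as a simplified sketch (the field names are illustrative, and it assumes the Object carries a per-object event list):

#include <stdlib.h>

typedef struct Event
{
    long   frame;           /* frame number to fire at */
    int    code;            /* which manipulation to perform */
    float  params[4];       /* e.g. colour data passed from another object */
    struct Event *next;     /* per-object list, sorted by frame */
} Event;

/* each frame: fire everything that is due; anything we somehow missed
   (its frame already past) is just popped off and discarded */
void runEvents(Object *o, long frame)
{
    while (o->events && o->events->frame <= frame)
    {
        Event *e = o->events;
        o->events = e->next;
        if (e->frame == frame)
            applyEvent(o, e);   /* dispatch on e->code (hypothetical) */
        free(e);
    }
}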

A special event type was the ability to call up external routines bound into the system, to do something the engine and support environment didn't provide. Although this was not used much in the end, as these routines generally made it into a revised spec (ye olde engine rewrite, again and again!).

Sorry for the long post. But I’m happy to share ideas which someone may find useful.

The system was not built for speed, only for decent, smooth animation. Not for games!

But, was it art??? Who knows.

Rob.