We know that in games like Quake 3, all primitives are sorted by shader and rendered together. I learned the idea of sorting by shader and have written some rendering code. It is very efficient, but the render-state setup part is quite static. When firing rockets in Quake 3, there is a dynamic lighting effect done by a moving lightmap. I wonder how to manage shaders in a situation like that. If I insert another texture stage for the dynamic lightmap into the shader, every primitive using that shader will get one more stage, but the primitives that are not lit don't even have texture coordinates generated for them. I have also tried projected textures and underwater caustics, which pose about the same problem as managing the dynamic lightmap. Can anyone give me some hints on how to do that?
Don't know how Quake 3 did it, but in Quake 2 the lighting effects afaik are really rendered into the lightmap,
i.e. with glCopyTexImage or glTexSubImage (don't remember which), so that only a single lightmap is used.
It finds the pixels that are affected and changes them (to yellow for the rocket, e.g.).
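Roughly, that per-pixel update might look like the sketch below. This is not Quake 2's actual code; the block size, the linear falloff, and the function name are made up for illustration. The idea is just: modify the lightmap in system memory, then re-upload the touched block.

```c
#define LM_W 16
#define LM_H 16

/* Add a dynamic light's contribution into a CPU-side lightmap block.
   lm is LM_W*LM_H RGB bytes; (lx, ly) is the light center in luxels,
   radius is in luxels, and (r, g, b) is the light color at the center. */
static void add_dynamic_light(unsigned char *lm,
                              float lx, float ly, float radius,
                              int r, int g, int b)
{
    for (int y = 0; y < LM_H; ++y) {
        for (int x = 0; x < LM_W; ++x) {
            float dx = (float)x - lx, dy = (float)y - ly;
            float dist2 = dx * dx + dy * dy;
            if (dist2 >= radius * radius)
                continue;                         /* outside the light */
            /* linear falloff toward the edge of the radius */
            float scale = 1.0f - dist2 / (radius * radius);
            unsigned char *px = lm + (y * LM_W + x) * 3;
            int nr = px[0] + (int)(r * scale);    /* add and clamp     */
            int ng = px[1] + (int)(g * scale);
            int nb = px[2] + (int)(b * scale);
            px[0] = nr > 255 ? 255 : (unsigned char)nr;
            px[1] = ng > 255 ? 255 : (unsigned char)ng;
            px[2] = nb > 255 ? 255 : (unsigned char)nb;
        }
    }
    /* the modified block would then be re-uploaded, e.g. with
       glTexSubImage2D(GL_TEXTURE_2D, 0, bx, by, LM_W, LM_H,
                       GL_RGB, GL_UNSIGNED_BYTE, lm); */
}
```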
Check the qbism project; afaik it is fully compatible with Quake 3, and it's open source as well.
qbism says "the Q3 bsp format multiplies the dynamic light value rather than adds it", which is a hacky but faster way. If so, I think it is using src_clr + dest_clr * src_clr. That is not quite correct, but it can be done in another pass or another texture stage over the lightmap.
But I also remember somebody from id once talked about lighting in Q3, and said that the texture color is multiplied by the summed-up series of lightmaps, including the dynamic ones. That is the correct way, but I am still wondering how to implement it…
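To make the difference concrete, here is the arithmetic of the two approaches for a single color channel in [0, 1]. This is my reading of the quoted blend expression, not verified Q3 behavior; in particular the GL_DST_COLOR/GL_ONE mapping in the comment is an assumption.

```c
/* "Correct" compositing: texture times the clamped sum of all lightmaps. */
static float lit_additive(float tex, float lm, float dyn)
{
    float light = lm + dyn;
    if (light > 1.0f)
        light = 1.0f;          /* clamp the summed light */
    return tex * light;
}

/* "Hacky" compositing: the dynamic light is drawn as an extra pass over
   the already-lit surface and multiplied in, e.g. with
   glBlendFunc(GL_DST_COLOR, GL_ONE): result = dst * (1 + src),
   where dst = tex * lm and src = dyn.  (The post's src/dst order reads
   as the mirrored formula; either way it is a modulate-add.) */
static float lit_multiplied(float tex, float lm, float dyn)
{
    float dst = tex * lm;
    float r = dst * (1.0f + dyn);
    return r > 1.0f ? 1.0f : r;
}
```

One visible consequence of the multiplicative variant: where the static lightmap is black, `dst` is zero, so no amount of dynamic light can brighten the surface, whereas the additive formula can.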
The adding of lightmaps is done manually, I think, and the result is then uploaded to the renderer with glTexSubImage (glCopyTexImage copies from the framebuffer, so an upload from system memory would be the SubImage path),
so that only a single lightmap is used when rendering, which is then multiplied with the texture.
Pure speculation though, but I think Quake 2 did it that way (check its source).
That would require a system-memory copy of all lightmaps; either that, or each affected lightmap would be read back from GL, processed, then uploaded again. I'm quite doubtful this was used in Q3. Q2 maybe, because it was based on a software renderer, but Q3 required 100% hardware rasterization, so it had to be careful about bus bandwidth usage.
I have to agree with knackered. That sounds damned complicated and inefficient.
I don't know, but I think they are doing multipass anyway, so they can simply do a lighting pass, then add all the dynamic lights to that, and then multiply it with the actual texture.
That wouldn't require any texture modification or uploading, and it is powerful enough to support unlimited dynamic light sources. Also, on many cards this is pretty fast.
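That pass structure can be simulated per pixel for one color channel in [0, 1]. The blend-factor names in the comments are standard OpenGL, but the pass order is just what the post describes, not verified Q3 internals:

```c
static float clampf(float v)
{
    return v > 1.0f ? 1.0f : v;
}

/* Simulate the framebuffer value of one pixel through the multipass
   scheme: static lightmap, additive dynamic lights, then modulate
   with the base texture. */
static float shade_pixel(float tex, float lm, const float *dyn, int ndyn)
{
    /* pass 1: draw the static lightmap (no blending) */
    float fb = lm;

    /* passes 2..n: one additive pass per dynamic light,
       glBlendFunc(GL_ONE, GL_ONE): fb = src + fb */
    for (int i = 0; i < ndyn; ++i)
        fb = clampf(dyn[i] + fb);

    /* final pass: modulate with the base texture,
       glBlendFunc(GL_DST_COLOR, GL_ZERO): fb = src * fb */
    return tex * fb;
}
```

In real GL the dynamic-light passes would only draw the surfaces each light actually touches, so the per-light cost is proportional to the lit area, which is why this stays fast with many lights.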