Quake3 shaders question

I got that shader specification from id Software and now I want to get some of those shader functions working in my code.

My question is: there are a ton of things that can be customized (everything from culling to vertex colors, to texture scaling, to much much more). This equates to a lot of OpenGL state changes, which is bad. I know I could sort my polygons so that all polygons with similar shaders would be rendered in order, but I'm not sure this will still be possible once I get BSP trees working. Either way we're talking about some serious state changing. What's the best way to do this? The best way I can think of is to have a big struct that holds all the shader info and then a bunch of if statements before rendering that are driven by this structure. This seems slow since I'll be doing it a lot. Another idea I have is using display lists to hold these state changes, but there are so many I'm not sure it would be worth holding all those lists. What do you guys think?

This is the biggest problem in ALL 3D engines. I think there is no other way than sorting the surfaces by shader/rendering state. If the surfaces are sorted perfectly you'll end up doing only as many state changes as there are shaders. Can it be better? I don't think so, because there is no way around setting the correct rendering states for every shader…
Regarding the display lists:
I don't think it would make much sense… Nearly all shader scripts need something changed within a certain timeframe (usually some seconds), so a display list wouldn't speed things up. AND even if it did speed things up a little, I think you'd die reading your "code of 1001 display lists".


Ditto. For opaque polygons, you will really only be able to sort by shader (and the shader's implicit or explicit sort parameter). For transparent polygons, you'll obviously have to sort by depth as well.

Yes, the transparent-poly thing is really a problem. In Q3 no real classic back-to-front BSP traversal is used, so you cannot get the polys' drawing order from the BSP… but perhaps one should really use the good ol' back-to-front traversal only for the transparent polys. Or is it not too slow to depth-sort them without the help of the BSP? (Dunno, because I've used BSPs as long as I've coded 3D apps.)


Lightmaps??? From the Aftershock code I've figured out that the lightmaps are stored in memory as 128x128x24-bit textures. He also has each surface keep the following info:

int lm_offset[2];
int lm_size[2];

But I couldn’t find what those are for. Do I need that stuff?

Then I saw the vertexes have lightmap coordinates. It would be nice if all I needed were the lightmap coords to map the lightmaps, but I'm not sure. Since you guys seem to know everything, I thought I'd ask ya.


Don't worry about
int lm_offset[2];
int lm_size[2];

You won't need them to draw the lightmaps correctly. This info is necessary if you do the dynamic lighting stuff (if you shoot a rocket down a corridor…). And yes, rendering the lightmaps is more than simple:
Just bind the lightmap the face tells you and use its lightmap texcoords…
One hint: Is it some typo, or do you really think the lightmaps are 128x128x24? Tell me what cool gfx card you have… Mine can just display a color-res of 3 bits (+ one alpha channel)… Nope, serious: the lightmaps are 128x128x3!!!


I think he is referring to the total number of bits used in each pixel of the lightmap. Standard 24-bit RGB, giving 8 bits to each component. Sounds right to me.

You might want to check that again about the lightmaps, XBCT. I have to agree with ribblem. The lightmaps as stored are 24-bit textures that are 128x128 texels.

Thanks XBCT, that's what I wanted to know.

When I said x24 I was referring to the 24-bit color depth. You're thinking 3 because the data is read in by bytes: the array that holds this data has to be 128x128x3 chars. I'll be clearer next time.