Unreal Engine 3

These games appear to do much more than this Unreal 3 demo, apart from the parallax mapping maybe.
Parallax bumpmapping, though, makes a big difference.
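For anyone who hasn't looked at it, the core of the classic parallax-mapping trick is just a view-dependent texture-coordinate shift. Here's a minimal CPU-side sketch (names and constants are hypothetical; in practice this runs in a fragment shader, and scale/bias are artist-tuned):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } vec2;

/* Hypothetical sketch of the parallax-offset step.
   height: value sampled from the height map at the original coords (0..1).
   scale/bias: artist-tuned constants (e.g. 0.04 / -0.02).
   eye_x/y/z: normalized view vector in tangent space. */
vec2 parallax_offset(vec2 uv, float height, float scale, float bias,
                     float eye_x, float eye_y, float eye_z)
{
    float h = height * scale + bias;       /* scaled, biased height */
    vec2 out_uv;
    out_uv.x = uv.x + h * eye_x / eye_z;   /* shift coords toward the eye */
    out_uv.y = uv.y + h * eye_y / eye_z;
    return out_uv;
}
```

The colour and normal maps are then sampled at the shifted coordinates, which is what makes the detail appear to sit above or below the surface.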

Not as much as projective textures (even through smoke), image-space glows, depth of field, or the incredibly efficient terrain/vegetation LOD that achieves those panoramic views in realtime, etc. Only a programmer will really notice the parallax bump mapping, I suspect - it’s minor detail compared to the other dramatic techniques I’ve mentioned.
I get the impression that most people on these type of forums don’t actually see much contemporary graphics technology unless someone posts a grainy mpg.
Buy or download some modern games. Ask your mum to increase your pocket money, SirKnight.

No other game published yet even comes close to the quality and visual complexity of that Unreal 3 demo. I agree parallax mapping usually has minor visual impact, but it is easy to implement with dependent reads for the initial coordinate generation - no biggie, but nice to have. Unreal 3 and Doom 3 seem to be the only engines that truly do accumulated lighting with shaders and shadows correctly. You’ve got to take into account the database as well as the rendering, but even looking at the rendering alone, the quality of rendering & lighting in Unreal 3 wins IMHO (not comparing with Doom 3 here). I just played Far Cry and it was awesome - great foliage & LOD management over long distances, and a few nice point features like the water fresnel reflection/refraction transition - but nothing as profound as unified lighting.

No other implementors really seem to have bought so completely into the unified lighting theme, they just don’t get it, yet. It’s not enough to say “we have bump mapping” or “we have parallax mapping” or “we have soft shadows”, it’s not about point features, it’s about rendering them seamlessly and IMHO this has huge implications for the ease with which you can generate the content. Unreal 3 and Doom 3 clearly do this.

One interesting difference, I thought, was the shaders in Unreal 3 vs. Doom 3: the iridescence shaders and the like in Unreal 3 indicated a departure from the monolithic lighting-equation approach of Doom 3. That’s an interesting difference, and it seemed to work well in the context of the lit, shadowed scene.

Well, Unreal 3 uses deferred shading. That makes it a lot easier to use a real shader system, meaning that each object can have its own shader (for example, for procedural textures), because rendering the objects and lighting the world are completely decoupled. This can add a lot of detail and atmosphere to a game, because everything can look exactly how the designer wants it to look.

Doom 3, on the other hand, certainly has a lot of problems supporting such a system, simply because the complete lighting equation always has to be in the shader. The restriction on how much input data (and, of course, output) you can use limits it quite heavily.
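The decoupling described above can be sketched in a few lines: a geometry pass fills a “G-buffer” with per-pixel surface attributes, and a separate lighting pass shades from those attributes alone, never touching scene geometry again. A toy CPU-side illustration (structure and names are made up, not from any engine discussed here):

```c
#include <assert.h>
#include <math.h>

/* One texel of a hypothetical G-buffer: whatever the geometry pass wrote.
   Grayscale albedo keeps the sketch short. */
typedef struct {
    float nx, ny, nz;   /* surface normal */
    float albedo;       /* diffuse reflectance */
} GBufferTexel;

/* Lighting pass for one texel and one directional light: plain Lambert.
   Note it needs only the G-buffer contents, not the original geometry. */
float shade_texel(const GBufferTexel *g, float lx, float ly, float lz,
                  float light_intensity)
{
    float ndotl = g->nx * lx + g->ny * ly + g->nz * lz;
    if (ndotl < 0.0f) ndotl = 0.0f;        /* clamp back-facing */
    return g->albedo * light_intensity * ndotl;
}
```

Because the lighting pass only ever sees the G-buffer, the geometry pass is free to run any per-object shader it likes when writing those attributes.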

However, comparing Doom 3 and Unreal 3 would not be fair. Doom 3 is state of the art (at least in games) and is the best there is right now. Unreal 3 will be state of the art in a few years, but up to now it’s more like a future tech demo.

I don’t really wonder what John Carmack will have on screen, 3 months after Doom 3 has shipped…

Jan.

Only a programmer will really notice the parallax bump mapping, I suspect - it’s minor detail compared to the other dramatic techniques I’ve mentioned.
You’re kidding, right? For any significant bumps, the difference between parallax and no-parallax is like the difference between no bump mapping and bump mapping. The detail actually starts to come off the surface with parallax.

And you definitely don’t have to be a programmer to see this.

Buy or download some modern games. Ask your mum to increase your pocket money, SirKnight.
Such as? Most modern games don’t look too great.

Also, most modern PC games tend to be pretty crappy to me. I refuse to buy crappy games, so I’m certainly not going to reward some dev studio just because they made interesting-looking crap.

One interesting difference I thought was the shaders in Unreal 3 vs Doom 3, it seems that the iridescence shaders etc Unreal 3 indicated a departure from the monolithic lighting equation approach of Doom 3, that’s an interesting difference and it seemed to work well in the context of the lit shadowed scene.
The idea that a single, monolithic shader should be used for all materials is definitely a poor choice by id. While the real world certainly works that way, shader tech and performance aren’t ready for a single “world-shader”. Indeed, even non-interactive rendering doesn’t use a monolithic “world-shader”; such systems typically build shaders as needed for the particular circumstance.

Doom 3 is state of the art (at least in games)
State of the Art? Maybe for last year, but HL2 takes it to a new level.

And, while I recognize that this is a graphics forum, it is important to point out that HL2’s engine does more than make pretty pictures. The physics/breaking system alone is capable of creating many gameplay opportunities that other games just won’t have.

No accounting for taste :-). I don’t think Half-Life 2’s rendering is up to the standard of Doom 3 or Unreal 3, but I’ll need to wait for the game to come out to be sure. In some respects it’s a matter of taste; it looks like a different approach from the others. As for gameplay, yes, it has some interesting features, but that doesn’t mean the other games won’t; time will tell.

As for deferred shading, it’s not clear that Unreal 3 uses it, and in itself it doesn’t imbue the scene with any additional quality - it’s just a rendering technique. Moreover, I can see how deferred shading could cause problems with multiple light sources, each with independent shadowing; it would cause way more problems and not necessarily solve any. It’s one thing to claim that deferred shading helps quality, but quite another to say how or why - I’m just not buying that. I can see how it is a requirement for something like refraction and related effects, but that’s a given, commonly used today, and not generally what people mean by classic deferred shading. It’s really only an ‘optimization’ that stores lighting and shading parameters and then does screen-space lighting once per pixel, once visibility has been determined.

Don’t believe the hype, Korval. If you know anything about programming you’ll realise that most things said about HL2 are hype, in other words exaggerated.
“physics/breaking system”!
I’ve played with the beta - it’s a bollocks marketing blag. Pre-defined breaking points, that’s basically a script for how an object breaks apart, if the designer could be bothered writing one…
Every single game I have played in the last 6 months has a realistic rigid & soft body physics ‘engine’. It’s no big deal, it adds something to the gameplay (I refer you to Far Cry, in which it is used to the best effect).
If you can’t justify buying the games, then just download them and call it research. You’re missing out on some very cool stuff being done with some really professional assets - far more impressive than these chipset vendor demos.
Oh, and get an xbox.

Sorry, I thought he was saying that he uses deferred shading, but he doesn’t. However, everything he says makes it quite certain that they do. For example, he says that EVERY pixel is per-pixel lit. If he wasn’t using deferred shading, he would certainly use a light LOD, meaning simpler lighting for far-away objects.

Anyway, HL2 looks nice, but in terms of RENDERING it is very hard to compare it to Doom 3. Doom 3 definitely uses the more advanced technology. However, the Doom engine and the Half-Life engine target completely different types of games. Half-Life is supposed to show much bigger levels, which a Doom-like engine would not be suitable for.
Therefore HL2 still uses “old-fashioned” methods, such as lightmapping, but I don’t think this is bad - it’s just completely different.

And, yes, Valve makes a LOT of hype. Just yesterday I read their marketing paper about their engine (for people interested in licensing it). I was surprised: why do they spend 2 lines on some interesting technology, but 6 lines describing their simple model viewer? For a mod-maker that might be interesting, but for someone who wants to license it? Actually, the information given was very imprecise, and there was not much of it at all.

And additionally they claimed to have “realtime dynamic radiosity lighting”!!! WTF?
Certainly they meant “realtime framerates, static lights through radiosity, and dynamic lighting like everybody else”.
Because in the next sentence they described how they can split level calculations over a network onto several computers to “dramatically” speed up level precalculation.

I really look forward to HL2, but I would like Valve to simply stop all that blabbering and actually get the thing done by 2005.

Jan.

Regarding Unreal 3 shadows, they use stencil shadows combined with a screen-space filter to make them soft. The depth buffer or vertex programs can be used to control the blurriness over distance. The technique is pretty well known now (many of us came up with the same idea independently), and it is described even in these threads. Regarding image-based shadows, you are probably referring to cubemap masks that can be applied “around” pixel light sources. Nothing new again. Same with parallax bump mapping - a known technique.

Yep, pretty obvious they’re using cubemap projections for light sources, and there’s nothing stopping you combining that with stencil shadows or additional textures. As for depth-map-convolved stencil results, that’s not a given. From the eye it’s more like a depth-of-field effect. The depth-map convolution required to do soft stencil shadows is depth from the light, not from the eye, and would add a lot of complexity. The stencil test is a final pixel operation and therefore not accessible in the fragment shader, so multiple taps, for example, are out of the question. You therefore cannot convolve stencil results until you’ve tested them to a buffer, at least until we get programmable final pixel operations (i.e. programmable blendfunc, stencil, and zbuffer hardware), and it has to be convolved before the modulation of the shader. This would require several separate output buffers for intermediate results (eye depth, stencil results, and possibly even convolved stencil results and/or shader results). Some related ideas have been posted, mainly by tooltech, but they’re not exactly the same as the ones you infer from your description - it’s mainly a convolution of multiple stencil tests, although the stencil penumbra volumes are cleverly generated.

OTOH with NVIDIA’s flow control instruction support you can do a lot that you couldn’t before and could even perform a stencil like shadow test in a fragment shader.

If they are doing light depth convolved shadow tests that would be an impressive effect.

P.S. it’s not actually just depth from the light (which is what projective image-based approaches get you); it also needs to account for the delta depth between the occluder and the shaded surface.

If he wasn´t using deferred shading, he would certainly use a light-lod, meaning simpler lighting for far-away objects.

Why? With early Z tests, you shade each pixel on the screen exactly once (times the number of passes needed to fit all your lights into the shaders). Thus, you don’t necessarily need any kind of LOD. If you’re a little careful about your lights (i.e. use a non-physical, limited range, for example), then you shouldn’t have any problems on modern hardware.
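To illustrate the point, here’s a toy model of a depth-only prepass followed by a shading pass with a depth func of EQUAL: the expensive shader runs exactly once per covered pixel, regardless of how many fragments land there. Everything below is invented for illustration:

```c
#include <assert.h>
#include <float.h>

#define W 4                       /* tiny 1-D "screen" for the sketch */
typedef struct { float depth; } Frag;

/* Depth-only prepass: keep the nearest depth per pixel.
   px[i] is the pixel index each fragment lands on. */
void depth_prepass(float zbuf[W], const Frag *frags, const int *px, int n)
{
    for (int i = 0; i < n; i++)
        if (frags[i].depth < zbuf[px[i]])
            zbuf[px[i]] = frags[i].depth;
}

/* Shading pass: count how many times the shader actually executes.
   Exact float equality is safe here because the prepass stored the
   very same fragment depths it is now being compared against. */
int shading_pass(const float zbuf[W], const Frag *frags, const int *px, int n)
{
    int shaded = 0;
    for (int i = 0; i < n; i++)
        if (frags[i].depth == zbuf[px[i]])   /* depth func EQUAL */
            shaded++;                        /* expensive shader runs here */
    return shaded;
}
```

With three overlapping fragments over two pixels, the shading pass fires twice - once per visible surface - which is why per-pixel lighting everywhere doesn’t automatically imply deferred shading or a light LOD.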

account for the delta depth between the occluder and the shaded surface

I believe this is much more important than depth from the light. If you’re using shadow maps, it’s very easy to get the depth-from-occluder, and you can use that as an offset into a screen-space texture to determine a filtering LOD - to figure out how bright to make the light contribution of the pixel, for example. Or just use a kernel on the depth map, making the kernel wider (or the texture blurrier) the further away you are.
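The similar-triangles intuition behind widening the kernel with the occluder gap can be sketched as a small helper (names and constants are purely illustrative, not from any of the engines discussed):

```c
#include <assert.h>
#include <math.h>

/* Hypothetical sketch: pick a shadow-filter kernel radius from the depth
   gap between the shaded surface and its occluder, so penumbrae widen
   with distance from the caster. Depths are measured from the light. */
float penumbra_radius(float receiver_depth, float occluder_depth,
                      float light_size, float max_radius)
{
    float delta = receiver_depth - occluder_depth;   /* occluder is nearer */
    if (delta < 0.0f) delta = 0.0f;
    /* Similar-triangles estimate: penumbra grows linearly with the gap,
       scaled by the (area) light's size. Guard against divide-by-zero. */
    float r = light_size * delta / (occluder_depth > 0.0f ? occluder_depth : 1.0f);
    return r > max_radius ? max_radius : r;          /* clamp filter cost */
}
```

A contact point (zero gap) gets a hard edge, while a surface far behind its occluder gets a wide, soft filter - which matches how real penumbrae behave.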

You can conceivably get depth-from-occluder using stencil as well, by passing in coordinates as an interpolant when rendering stencil volumes, and writing those coordinates to the frame buffer, although overlapping volumes would need special care and attention (such as a “min” or “conditional” blend mode, perhaps?).

Originally posted by licu:
Regarding Unreal 3 shadows, they use stencil shadows combined with a screen-space filter to make them soft. The depth buffer or vertex programs can be used to control the blurriness over distance. The technique is pretty well known now (many of us came up with the same idea independently), and it is described even in these threads.
Do you have any references? Preferably a paper.

I think licu is talking about the “smoothies” technique.

You can find a paper here :

http://graphics.csail.mit.edu/~ericchan/papers/smoothie/

AFAIK, on a 9800 this technique “eats” too much power, but surely it will be interesting for the next generation of hardware.

Cheers
Arath

Hi dorbie –

No other game published yet even comes close to the quality and visual complexity of that Unreal 3 demo…

Unreal 3 and Doom 3 seem to be the only engines that truly do accumulated lighting with shaders and shadows correctly. You’ve got to take into account the database as well as the rendering but even looking at the rendering alone the quality of rendering & lighting in Unreal 3 wins IMHO (not comparing with Doom 3 here)…

No other implementors really seem to have bought so completely into the unified lighting theme, they just don’t get it, yet. It’s not enough to say “we have bump mapping” or “we have parallax mapping” or “we have soft shadows”, it’s not about point features, it’s about rendering them seamlessly and IMHO this has huge implications for the ease with which you can generate the content. Unreal 3 and Doom 3 clearly do this.

The C4 Engine has had a unified lighting model that does everything “the right way” for about two years now. Check out the demo (lacking decent art) at:

http://www.terathon.com/c4engine/downloads/

– Eric Lengyel

Great stuff. OTOH, no offence Eric, but that ain’t a title - it’s barely a passable demo. I’m sure the tech is great, but it has to have content, and even the content it has doesn’t show the tech features I’d expect to convince me visually of the claims: multiple moving light sources with overlapping shadows, emissive material properties, specular with gloss maps, and of course the obligatory bumpmapped reconstruction of geometry on simplified meshes, with plenty of detail in the scene and on skinned characters, and maybe with & without ambient in some shots. Not all essential, but mostly what you’d need at the very least to show it off.

There have been plenty of demos that do the right thing in a simplified setting, for example Humus’ stuff. Most of us have written them. A shipping title is a different thing, it takes a while to build the content and game.

Carmack gave the first Doom 3 demo years ago; they’ve taken this long to optimize and produce a game.

I’m not saying this is going to be surprising or novel when it arrives, but I’m just not buying the line that it’s yesterday’s technology when nobody has delivered it effectively in a title yet and I haven’t seen anything better. I’ll admit I may be missing one, and of course Futuremark did implement something pretty, albeit with limited scope & few optimizations.

Part of this is having the courage of your convictions too; often the tech doesn’t drive the design at companies, and the art team tries to solve yesterday’s problems or has control to the point where they can veto a rendering technology. Some companies are incapable of driving technology for this and related reasons.

it takes a while to build the content and game

I think I nominate this as Understatement Of The Year.

You can BUY an engine…

Some more pearls of wisdom:

Some engines are better than others.

Good engines have great content paths.

Some companies would rather fail than buy an engine :-).

Some engines aren’t.

Eric, that demo is appalling. Your application architecture is fantastic, but your demo really lets it down. All that programming, all those features aimed at giving realism to your graphics - all completely overlooked when playing that demo. YOU NEED TO CREATE A BETTER DEMO. It’s necessary, in these shallow times.

I’d say the very fact that they showed multiple shaders, and said they had a shader editor and everything, indicates they are not using deferred lighting. With deferred lighting you need to apply the same shader to the entire screen.

As for the shadows, it actually surprised me when he said it was stencil-based shadows. Links here for two such approaches:
Ysaneya's shadows
Penumbra Wedges

As for the smoothie shadows, this is what I thought they were doing after I read an interview saying they were doing soft shadows. But that can’t be the case now, since he says it’s stencil shadows. What do you guys think of smoothies?

smoothie paper

I was thinking of implementing this myself. I would think it would be faster than the two other methods. The only disadvantage is that the umbra doesn’t shrink. You’d have to use a cube of them to do omni lights, and it doesn’t handle alpha textures like normal shadow buffers, but it should do all of what the others do. I’ve been trying to think if there were some way to fix the umbra-shrinking problem. Haven’t come up with anything.

Originally posted by knackered:
Eric, that demo is appalling. Your application architecture is fantastic, but your demo really lets it down. All that programming, all those features aimed at giving realism to your graphics - all completely overlooked when playing that demo. YOU NEED TO CREATE A BETTER DEMO. It’s necessary, in these shallow times.
Best I could do with ABSOLUTELY NO ARTISTS. It’s meant to demonstrate technology to those who are capable of understanding it. By no means do I claim it to be a polished title. I’m working on showing off the tech better…