Advances in Realtime Graphics

I've just read this article:
http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=139&page=1

which talks about the near future of real-time graphics, and in particular about real-time photorealism. Personally, I totally disagree with the claim that we are close; I don't think we are even close to approaching photorealistic graphics in real time. The first step on the journey to that goal is hardware global illumination, and how close is that to being a reality? And even then, take the rendered graphics in LotR. How much processing power did it take to render even one frame of, say, Gollum? A lot. Also consider that although Gollum looked pretty impressive, you could still tell he was obviously a rendered character; I don't think anyone was ever fooled into thinking he was real.
What are people's thoughts on this whole issue? I'm pretty interested in what you guys think.

I bring this up because there seems to be all of this hype around things like Doom3 and the Unreal3 video; I'd like people to have a reality check and realise that real-time graphics at the moment are not THAT good :slight_smile:

I agree wholeheartedly. I read a lot of hot air from various people around various forums about how close we are to true photo-realism, and how it’ll be here in games in 2-3 years or so.

I've come to the conclusion that these people must be computer dweebs who live their lives in windowless rooms with the lights switched off. Under no circumstances do they venture outdoors. If they did (or even just switched the lights on in their rooms) they might well get the "Ahh. Errr… maybe we're not" moment.

The problem is how to define photorealistic. If you define it as "you don't see a difference between a rendered image and reality", then a digital photo on a computer screen is NOT photorealistic (except perhaps on those new high-res screens where you have to look twice to convince yourself you are not looking out of the window ;-) ).

I am sure a few years back everyone would have agreed that the quality of, e.g., UT2003 is photorealistic and that it is not necessary to do much better. But now that we have this quality, everyone thinks "just a bit more, then we've got photorealism", and when we manage that, there are still better offline-rendered graphics, and again everyone says "just a bit more…", and so on.

Just remember how excited everyone was when nVidia announced their CineFX engine that could render graphics like the Final Fantasy movie in real time, and now that we have it, it's no big deal…

Ok, by photorealistic I'd say you couldn't tell the difference between a movie of a relatively complex real-world scene (not just a box in an empty room) and a realtime rendering of the same scene.
Or maybe something like: www.fakeorphoto.com

Imagine something like the Turing test for graphics. You sit a person in front of a monitor and show them an animated scene. Then they have to tell you whether they think it was rendered or a movie of the real world. Photorealism is achieved basically when a person cannot reliably tell.
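
Something like this, as a very rough sketch of the scoring (every number and the 60% cutoff here are made up, just to illustrate the idea):

```cpp
// Minimal sketch of a "Turing test for graphics": show a viewer a set of
// clips, record whether they correctly identified each one as rendered or
// real, and see whether they do much better than a coin flip.
// All the data and the 0.6 threshold below are invented for illustration.
#include <cstdio>

int main() {
    const int trials = 20;                 // hypothetical number of clips shown
    // 1 = viewer identified the clip correctly, 0 = they got it wrong
    int guesses[trials] = {1,0,1,1,0,1,0,0,1,1,0,1,0,1,0,0,1,0,1,0};

    int correct = 0;
    for (int i = 0; i < trials; ++i) correct += guesses[i];

    double accuracy = (double)correct / trials;
    printf("Viewer accuracy: %.0f%%\n", accuracy * 100.0);

    // Near 50% means they can't reliably tell rendered from real;
    // well above 50% means the rendering still gives itself away.
    if (accuracy <= 0.6)
        printf("Can't reliably tell -- call it photorealistic.\n");
    else
        printf("Still distinguishable from real footage.\n");
    return 0;
}
```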

Heh, maybe realtime photorealism will be like strong AI or fusion, where people are always saying that it'll be here in 10 years… 20 years later they are still saying the same thing.

Originally posted by MrShoe:
…Heh, maybe realtime photorealism will be like strong AI or fusion, where people are always saying that it'll be here in 10 years… 20 years later they are still saying the same thing.
I agree. By the way, maybe it's also a kind of subjective opinion. For example, I can spot jaggies (or other small defects) in every rendered image at first glance, since I know what to look for and how… and I have good eyes.

I agree that the technology is not yet able to render full photorealism in real time. There are a number of techniques that can each be implemented in real time on their own, like global illumination, radiosity, photon mapping and subsurface scattering, but using all of them at the same time in real time won't happen in the near future. Especially if you want to introduce some BRDFs, etc. too.

The reason we notice whether something is real or not is that rendered objects are perfectly symmetric, have perfect color shading, and so on, and as we know, in real life nothing is perfect. A rendered image can be made more realistic by adding some noise to it.
Just take a perfectly rendered scene, add an old-film effect to it, and it will seem more real.
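
A very rough sketch of what I mean, on an 8-bit grayscale buffer (the function name and the strength value are just placeholders; in practice you'd do this per channel, probably as a post-process pass):

```cpp
// Sprinkle film-grain style noise over a rendered frame to break up the
// "too perfect" look. Purely illustrative: operates on a plain grayscale
// byte buffer rather than a real framebuffer.
#include <algorithm>
#include <cstdlib>

void addFilmGrain(unsigned char* pixels, int pixelCount, int strength) {
    for (int i = 0; i < pixelCount; ++i) {
        // random value in [-strength, +strength]
        int noise = (std::rand() % (2 * strength + 1)) - strength;
        int value = (int)pixels[i] + noise;
        pixels[i] = (unsigned char)std::max(0, std::min(255, value));
    }
}
```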

Also, referring to your remark about Gollum: it's mostly our own human fault that we see it as a rendered object instead of a lifelike one. Subconsciously we know that it's impossible for Gollum to be a real creature. But take a look at the elephants in LOTR; I bet there aren't many people who said "Hey, take a look at that elephant, it's fake". That's because we know it's possible that the elephant is real.

There's also typically a problem with rendering humans or humanlike creatures, because we're simply more sensitive to deviations from reality when it comes to human faces than, let's say, animals. Even a small distortion in face color will trigger the brain into saying something is wrong. In comparison, we wouldn't notice a small change in color on an animal at all.

N.

Originally posted by -NiCo-:
Also, referring to your remark about Gollum: it's mostly our own human fault that we see it as a rendered object instead of a lifelike one. Subconsciously we know that it's impossible for Gollum to be a real creature.

I agree. If there were such a thing as Gollum, I would have started heading for the hills when he first appeared on the screen. That was quite possibly the coolest thing I've ever seen. I'd say Hollywood has it nearly wrapped up. Our poor little PCs have some catching up to do.

I have to pause and be thankful for how far it's come. I remember playing Quake 1 in 320x200 and 8-bit color and thinking to myself, "Oh my God, how'd they do that? This is the coolest thing I've ever seen!". That was only 10 years ago. Today, I get into a twist with anything less than 1024x768x32 and all the bells and whistles. I suspect 10 years from now we will be closing in on some truly fantastic stuff, if not "photorealism". We will look back on UT2004 and Doom3 with fond memories, but will be unable to stomach the graphics. I try not to get so mired in the graphics that I forget to use my imagination once in a while - now that's photorealistic. :slight_smile:

Yes, I've seen that page, fakeorphoto.

I got 9 out of 10 right, and my one mistake was thinking one of the real images was fake because of the weird colors. They probably did some color processing on it.

There is a big difference between CGI and reality, even with those special software renderers.

This thread seems to be an extension of this one, where one person is convinced that in 2 years "we'll have it all". Gimme a break!

http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=011479

In my view folks who think we’ll have it all in 2 years time aren’t setting their quality standards high enough.

What is “photorealistic”?

Is it “indistinguishable from reality”? If so, no, not that.

Is it “indistinguishable from a photo”? If so, probably someday.

Is it "indistinguishable from TV"? If so, we're pretty close, at TV resolutions, with the next generation of engines. TV is very carefully lit and staged, just like games.

I locked this thread originally because I thought it was OT and discussed to death in other threads with an inevitable lack of outcome, but I'm unlocking it by request. It's your (the contributors') forum …

I believe you should consider a thread for locking when, and only when, a contributor requests it. But then again, I live in a democracy, so I'm biased.

Originally posted by dorbie:
I locked this thread originally because I thought it was OT and discussed to death in other threads with an inevitable lack of outcome, but I'm unlocking it by request. It's your (the contributors') forum …
:smiley: I'm glad to see you still have a sense of humor about this kind of thing. I'll bet you have heard this story ad nauseam.

I for one think the moderators do a great job of keeping these forums clear of impertinent tripe. I'll bet if it were left to contributors to weed this stuff out, it would grow like the fungus under a log.

I guess some folks thought this thread was just too much fun. I contributed to it, so I can hardly disagree. :slight_smile:

Originally posted by dorbie:
I locked this thread originally because I thought it was OT and discussed to death in other threads with an inevitable lack of outcome, but I'm unlocking it by request. It's your (the contributors') forum …
Thanks, Angus. I tried to comment during the lock period, but now I’ve forgotten what I was going to say… :wink:

Oh yeah. Something following on JWatte’s point. I agree it’s definitely important to know if your goal is to make the most perfect synthetic images or rather to make people forget they’re looking at synthetic images.

I've seen VR systems with 1994 graphics (high end) that seemed more "real" to me and others than some of the best rendered games today. And that's probably because it's ultimately an issue of perception, not simulation. Good display hardware, good immersion, good external cues (even wind), etc. can mean more than the best graphics.

(yes, this is a graphics forum, but bear with me for a moment…)

And that’s largely because we don’t see the world in a photorealistic way. We see a stream of subtle cues and rebuild the world in our heads. Simply making the computer screen a perfect window into a virtual world is, in a way, a cop out. I mean, if I held a perfect holographic copy of this room in my hand, I’d still be holding it in my hand…

That's not to say everyone needs a CAVE in their living room (they do, but that's another discussion). Even on a typical monitor, games often undermine their own hard work in the visual perception area. While a 110 degree FOV might look "cool" and even add value from a gameplay perspective, you might want to try computing the apex of that virtual frustum in real space and see if it's comfortable or realistic to hold your head 8" from the monitor (there's a quick sketch of that math below). And don't even get me started on lens flare… :wink:
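
Back-of-the-envelope version, with a made-up monitor width (the only formula involved is distance = (width / 2) / tan(fov / 2)):

```cpp
// Where would your eye have to be for the real screen to subtend the same
// angle as the virtual frustum? The 110 degree FOV is from the discussion
// above; the 20-inch screen width is an invented example.
#include <cmath>
#include <cstdio>

int main() {
    const double pi          = 3.14159265358979;
    double fovDegrees        = 110.0;  // the "cool" wide FOV
    double screenWidthInches = 20.0;   // hypothetical monitor width
    double halfAngle         = (fovDegrees / 2.0) * pi / 180.0;
    double eyeDistance       = (screenWidthInches / 2.0) / std::tan(halfAngle);
    printf("To match a %.0f degree FOV you'd sit about %.1f inches from the screen.\n",
           fovDegrees, eyeDistance);   // roughly 7 inches
    return 0;
}
```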

Anyway, my point is that it should be entirely possible to pull people into a virtual world, even on a small monitor, even in 3rd person, if we focus on the ways we can get past their eyes and talk straight to their brains.

Originally posted by -NiCo-:
There are a number of techniques that can each be implemented in real time on their own, like global illumination, radiosity, photon mapping…
I'm not aware that any of those things can be done in real time on a standard PC for anything other than extremely simple scenes or at very low quality. (Unless you are counting SH as radiosity, which I don't.)

Originally posted by Cyranose:
it’s ultimately an issue of perception, not simulation. Good display hardware, good immersion…
Yes, and it's annoying how we are still waiting for good, immersive display hardware. In 2001 I worked on a VR project; the best we could do without paying a fortune for stereoscopic display hardware was a 30 degree FOV. In 2004 it looks like little has changed: the company we used before is still only offering a 30 degree FOV. As someone else said to me, it felt like you were sitting at the back of a cinema.

From a programming point of view it's dead easy to add stereoscopic rendering and head-tracking support; we just need display hardware with a good FOV at reasonable cost and we can take gaming to the next level in terms of immersion.
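
For what it's worth, here's roughly what the stereo half looks like on top of an existing renderer, assuming quad-buffered GL (drawScene and the constants are stand-ins, and a proper implementation would use asymmetric frustums rather than a plain sideways shift):

```cpp
// Render the scene twice, once per eye, with the camera shifted by half
// the eye separation each way, into the left/right back buffers.
#include <GL/gl.h>

extern void drawScene();   // hypothetical callback that draws the world

void renderStereoFrame(float eyeSeparation /* e.g. 0.065f metres */) {
    for (int eye = 0; eye < 2; ++eye) {
        glDrawBuffer(eye == 0 ? GL_BACK_LEFT : GL_BACK_RIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        // Moving the camera right by +offset means translating the world left.
        float offset = (eye == 0 ? -0.5f : 0.5f) * eyeSeparation;
        glTranslatef(-offset, 0.0f, 0.0f);
        drawScene();
        glPopMatrix();
    }
    // swap buffers in the windowing layer as usual
}
```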

Originally posted by Adrian:
From a programming point of view it's dead easy to add stereoscopic rendering and head-tracking support; we just need display hardware with a good FOV at reasonable cost and we can take gaming to the next level in terms of immersion.
I don't have any studies handy to cite, though I vaguely recall reading one a few years back. However, my direct experience is that large hi-res screens are significantly more meaningful for conveying realism than stereo, and head-tracking latency is never quite good enough. The good news is that large hi-res screens are becoming affordable and 3D hardware is up to the challenge in terms of pixel densities.

Originally posted by Cyranose:

large hi-res screens are significantly more meaningful for conveying realism than stereo, and head-tracking latency is never quite good enough.

But with screens, if I look at the floor or ceiling I see my floor, not the virtual world's floor/ceiling. And unless you have a wrap-around screen there is the same problem if I look behind me. Also, screens (natively) do not give a true feeling of depth. Ignoring the current low FOV of HMDs, I can't see how screens are more immersive. Large screens are also not practical for many households, and that is an issue that won't go away. The weight/size issues with HMDs will go away as technology improves.

I never had a big problem with head-tracking latency; it didn't spoil the immersion. The low FOV did.