Programmability... the double-edged sword!!

good point! I just hope things don't get to be too real-looking. Then it would kill that cartoony look I got used to in games.

V-man

Ahh the old ray tracing myth.

Ray tracing resolves surface visibility and reflection/refraction visibility. Most of the interest is in improved shading and lighting, and for that ray tracing doesn’t bring much to the party.

Yup, ray tracing is way overrated IMO. Besides, we’re memory bandwidth limited now and for the foreseeable future. I don’t even want to think about the cache thrashing that goes on inside a ray tracer. PixelPlanes 5 stored the entire scene at each pixel to address this, IIRC. Ouch.

Pixel-Planes 5 had very little memory per ALU. The implementation I’m familiar with did not ray trace, although since it was just a programmable SIMD array of ALUs with a little memory each, it might have been programmed another way in different implementations. In most implementations it evaluated 3 edge equations per triangle to see if it got a hit and then calculated Z (more edges for polys). All of the triangles were transformed to screen space, although this was done on a separate geometry engine; the Pixel-Planes 5 I know just did z-buffer visibility and shading. The geometry transform also needed to bin the primitives into chip-sized screen regions for efficient processing. Ultimately, after binning, each ALU array had to be sent the data for every primitive in its region, and each would first determine which was visible (2 tests: the edge equations and the z-buffer). It would use the limited available memory to parameterize its triangle info, like normal, color, texture coordinates, light position and texture ID. Without texture it would then do a lighting calculation; with texture, every texel in every texture would be sent to the SIMD array and each ALU would pull the address it needed out of the stream based on texture ID and coordinates. Texture on PP5 was really a hack to do something it was never designed to do.
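
For the curious, here is a minimal C++ sketch of the per-pixel test described above: evaluate three screen-space edge equations, and on a hit compare interpolated depth against the z-buffer. The structs and names are purely illustrative, not Pixel-Planes 5 microcode.

[code]
// Illustrative sketch of the "3 edge equations + Z" visibility test.
#include <algorithm>

struct Edge { float a, b, c; };          // edge equation: a*x + b*y + c >= 0 means "inside"

struct ScreenTriangle {
    Edge  e[3];                          // one equation per triangle edge
    float za, zb, zc;                    // depth plane: z = za*x + zb*y + zc
};

// Returns true (and updates zbuf) if pixel (x, y) is covered by the triangle
// and closer than whatever is already stored in the z-buffer.
bool shadePixel(const ScreenTriangle& t, float x, float y, float& zbuf)
{
    for (const Edge& e : t.e)
        if (e.a * x + e.b * y + e.c < 0.0f)   // outside one edge -> no hit
            return false;

    float z = t.za * x + t.zb * y + t.zc;     // interpolated depth at this pixel
    if (z >= zbuf)                            // behind the current surface -> reject
        return false;

    zbuf = z;                                 // visible: store depth, then shade
    return true;
}
[/code]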

[This message has been edited by dorbie (edited 05-09-2002).]

Originally posted by deshfrudu:
Yup, ray tracing is way overrated IMO.

first: it solves all our problems (if we can trace enough rays, logically )
second: it is damn easy

try doing shadows on today's GPUs and you'll realize that it's somehow a much too complicated task…

try to get reflections and you have the same problem…

and as we get more and more of these effects, we get several renderings from several viewpoints per frame, which is technically about the same as ray tracing…

ray tracing is just so much easier to organize, because it's a general solution.

about realism: sure, simple ray tracing does not look much more accurate than current scenes; lighting models don't have to change in ray tracing. all we get is accurate shadows and reflections and refractions, which we currently only get for planes.

but, about realism: most of you know the Monte Carlo method, and we all know it's slow. but it's only slow because it takes tons of rays which are all about the same (and in reality would be traced in parallel at the same time). it's just perfect for a streaming processing unit in the design of future GPUs… if we push ray tracing, then in a few years we'll be at the first realtime Monte Carlo techniques, and you can't say Monte Carlo images don't look cool.
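
To make the Monte Carlo idea concrete, here is a minimal, hedged C++ sketch: estimate the light arriving at a surface point by averaging many random hemisphere rays. traceRay() is only a stub standing in for whatever the ray tracer would return along a direction; all names are illustrative.

[code]
// Monte Carlo irradiance estimate: average the radiance of many random rays
// over the hemisphere around the normal. More rays = less noise = slower.
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stand-in for the real ray tracer: here it just returns a constant "sky" colour.
Vec3 traceRay(Vec3 /*origin*/, Vec3 /*dir*/) { return { 0.6f, 0.7f, 1.0f }; }

// Uniform random direction on the hemisphere around 'n' (rejection sampling).
Vec3 randomHemisphereDir(Vec3 n)
{
    for (;;) {
        Vec3 d = { 2.0f * rand() / RAND_MAX - 1.0f,
                   2.0f * rand() / RAND_MAX - 1.0f,
                   2.0f * rand() / RAND_MAX - 1.0f };
        float len2 = dot(d, d);
        if (len2 > 1.0f || len2 < 1e-6f) continue;      // reject points outside the unit sphere
        d = scale(d, 1.0f / std::sqrt(len2));
        return dot(d, n) >= 0.0f ? d : scale(d, -1.0f); // flip into the upper hemisphere
    }
}

Vec3 estimateIrradiance(Vec3 p, Vec3 n, int numRays)
{
    Vec3 sum = { 0, 0, 0 };
    for (int i = 0; i < numRays; ++i) {
        Vec3 dir = randomHemisphereDir(n);
        sum = add(sum, scale(traceRay(p, dir), dot(n, dir))); // cosine-weight each sample
    }
    return scale(sum, 2.0f * 3.14159265f / numRays);          // hemisphere solid angle / N
}
[/code]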

and for all the hardcore math fans who want to be cool: get the Metropolis ray tracing method onto a streaming architecture, and we have the full global illumination problem solved for realtime apps in about 2 or 3 years!

then finally no one talks about per-pixel lighting and all that stuff; instead we can concentrate on good quality images, meaning the artist has to design good worlds that give cool feelings, and we can finally move back to programming good games which are FUN…

then the good games come back. finally…

No it doesn’t solve all our problems.

Solving the same problem with completely different methods does not mean you are using the same method, it means the opposite.

If you examine the problem of shading a surface for illumination, ray tracing is no help at all, unless you’re talking about an unholy number of rays. You talk about ray tracing as if it magically calculates lighting. Once again, ray tracing does not do your lighting calculation for you. It may not be the dominant feature of a ray tracing algorithm, but it still has to be done once you have your fragment and surface information.

It’s at best unfair to say that somehow your fragment lighting problem goes away simply because you ray trace.

[This message has been edited by dorbie (edited 05-10-2002).]

Although I haven’t done any raytracing I doubt it will be what everybody is doing in a few years. If I’m not mistaken, the complexity of ray tracing is far from (sub)linear in the number of rays. It would require a massively parallel processor to run in real-time. I’m sure hacks are possible but real raytracing in real-time is not likely to happen any time “soon”.

Most of the smarts in ray tracers goes into speeding up traversal by minimizing redundant ray-surface tests; usually there’s some auxiliary structure which the rays traverse to quickly determine which surfaces must be tested.
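
The simplest form of such a structure is just a bounding volume that lets a ray skip a whole group of surfaces at once. Below is a minimal C++ sketch of the classic slab test (it assumes IEEE float behaviour for axis-parallel rays); real tracers hang these boxes off grids, octrees, BVHs or kd-trees, but the rejection idea is the same.

[code]
// Ray/AABB slab test: reject a whole group of surfaces before any per-triangle work.
#include <algorithm>

struct Ray  { float ox, oy, oz, dx, dy, dz; };   // origin + direction
struct AABB { float min[3], max[3]; };           // axis-aligned bounding box

// Returns true if the ray hits the box anywhere in [0, tMax].
bool rayHitsBox(const Ray& r, const AABB& b, float tMax)
{
    const float o[3] = { r.ox, r.oy, r.oz };
    const float d[3] = { r.dx, r.dy, r.dz };
    float tNear = 0.0f, tFar = tMax;

    for (int axis = 0; axis < 3; ++axis) {
        float inv = 1.0f / d[axis];              // IEEE inf handles axis-parallel rays
        float t0 = (b.min[axis] - o[axis]) * inv;
        float t1 = (b.max[axis] - o[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;          // slabs don't overlap -> miss the box
    }
    return true;
}
[/code]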

Originally posted by dorbie:
[b]No it doesn’t solve all our problems.

Solving the same problem with completely different methods does not mean you are using the same method, it means the opposite.[/b]

well… actually solving a problem that could only be approximated is… yeah… solving the problem. what else?

You talk about ray tracing as if it magically calculates lighting. Once again, ray tracing does not do your lighting calculation.
It’s at best unfair to say that somehow your fragment lighting problem goes away simply because you ray trace.

grmbl, lost my text due to some copy-paste… okay, once again.

the lighting equation is independent of the rendering method.
this is true, no point in arguing it. but on the other hand, one rendering method can make it easier to solve complex lighting equations than another…
for correct lighting we need a lot of samples per surface element, and in my eyes a random ray generator is the best approach there… another approach is rendering a hemisphere for every pixel… but that will take a VERY LONG TIME to get realtime in a rasterizer… i think ray tracing is faster there (unless you can connect all the rasterizers into one huge render farm, okay…)

and the first step we want is 30x640x480 rays per second… i think this is possible on the P10… next would be more than that, to implement simple ray tracing algorithms… that means we first get Phong lighting working, with real reflections and refractions; that is what most people think ray tracing is… but we can push more and more rays with future hardware… say it doubles every 6 months as it does currently… then this means 60fps with one ray per pixel in mid-2003, and 120fps by the end of 2003… at 320x240 that already means 30fps with 32 rays per pixel. that's enough for a very good realization of the simple Phong stuff with interreflections and everything.

that can already be enough to do simple demos with smooth shadows…

those 32 rays per pixel will be 128 one year later, 512 at the end of 2005, 2k at the end of 2006, 8k in 2007, 32k in 2008, or 512k in 2010.

then we move back to higher res (we can do this earlier as well…) and we get 32768 rays per pixel at 640x480 with 30fps… that's in 8 years… and that is ENOUGH for solving Monte Carlo…
and Monte Carlo is damn nice looking, you can't say that's not true…
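
For what it's worth, the arithmetic behind that projection is easy to write down. The sketch below just extrapolates a 640x480x30 (about 9.2M rays/s) baseline under the post's very optimistic assumption of a doubling every six months; it is not a performance claim, only the bookkeeping.

[code]
// Extrapolate a ray budget under the "doubles every six months" assumption.
#include <cstdio>

int main()
{
    const double startRaysPerSec = 640.0 * 480.0 * 30.0;    // "1 ray per pixel" baseline
    const double pixelsPerSec    = 320.0 * 240.0 * 30.0;    // lower-res target used above

    for (int halfYears = 0; halfYears <= 16; ++halfYears) { // 0 to 8 years out
        double raysPerSec   = startRaysPerSec * (1 << halfYears);
        double raysPerPixel = raysPerSec / pixelsPerSec;
        std::printf("after %4.1f years: %12.0f rays/s, %9.0f rays/pixel @ 320x240x30\n",
                    halfYears * 0.5, raysPerSec, raysPerPixel);
    }
    return 0;
}
[/code]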

ok, this is if everything works out well, true… but tell me how to solve Monte Carlo with a rasterizer… per pixel, with bump maps and BRDFs and much more. tell me

i don't want to hype ray tracing. i want to stop the hype around rasterizers… they simply plot triangles. what we see today is damn good plotting of triangles, but it's already quite difficult to make further steps. and all the preprocessing steps for accurate lighting are done with… guess what? RAYS! (take a look at the DX papers that came out a few days ago, or other things…)

and if i can choose between a complicated setup for a fake, or a simpler setup for a solution, i'll choose the solution… and getting correct shadows is much simpler to do with ray tracing (it's simply the best example)

shadows = found_intersection_between(pos,light);

easier than shadow volumes, easier than setting up shadow cube maps.
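
Spelled out, that one-liner is roughly the following. This is a hedged C++ sketch against a list of spheres; the scene representation and names are purely illustrative, the point is only how little logic the query needs.

[code]
// A point is in shadow if any occluder lies between it and the light.
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the segment from 'pos' to 'light' hit any sphere?
bool inShadow(Vec3 pos, Vec3 light, const std::vector<Sphere>& scene)
{
    Vec3  d     = sub(light, pos);          // unnormalised ray direction
    float dLen2 = dot(d, d);                // squared distance to the light

    for (const Sphere& s : scene) {
        Vec3  oc   = sub(pos, s.center);
        float a    = dLen2;
        float b    = 2.0f * dot(oc, d);
        float c    = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) continue;          // ray misses this sphere

        float t = (-b - std::sqrt(disc)) / (2.0f * a);
        if (t > 1e-4f && t < 1.0f)          // hit strictly between point and light
            return true;                    // occluded -> in shadow
    }
    return false;
}
[/code]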

and for the first time i can do volumetric objects with ease, like fog… (i would need some glBegin(GL_TETRAEDER) for this… glEnd())

and if you read more about the people trying to solve lighting problems, finding statistical approaches to solve all this stuff, building up huge camera arrays to solve the equations that can't be solved by hand anymore, then you would know they only work with rays… putting all of this into a rasterizer is quite stupid… (universities i'm talking about; Stanford has lots of great papers…)

and ray tracing is not more complex than rasterizing just because you need to know the whole scene… to get accurate refractions and reflections with a rasterizer you simply re-render your scene several times, because you need that info as well; for accurate solutions you would need to re-render your scene for every pixel… but you realize that at that point it's stupid to use a rasterizer

oh, and my view is not bound by a frustum with planes. i can see infinitely far, i can see at any view angle, and without a near plane. and i don't see lines as lines anymore, but projected as curves…

Your point about surface samples to do a proper BRDF (maybe even global illumination) is what I meant by an unholy number of rays, and it’s still not the only way.

You’re losing sight of the problem. The objective of changing to another system would be to improve performance for equivalent cost, not conduct a science experiment; I’ll leave that to the academics. It isn’t practical to brute force these problems for reasonably interactive situations, and the stuff you can do practically can be done better using other methods, particularly for unbounded problems.

If you want to Monte Carlo incident global illumination on every fragment in hardware go ahead, but I think that’s trying to contrive a problem ray tracing can handle (slowly) instead of making the most of the available hardware. You’ve gone from the main objective of ray tracing and its most obvious strength (shadows and reflections), to the ostentatious treatment of fragment shading.

I look forward to seeing your results.

P.S. if your display surface is flat, your lines should be straight after projection, anything else is wrong.

Originally posted by davepermen:
…but we can push more and more rays with future hardware… say it doubles every 6 months as it does currently…

That’s exactly what won’t happen unless you’re dealing with a linear algorithm ( especially using brute force like dorbie mentioned ). I did a bit of searching and I found a lower bound of O(log n) for ray shooting but with a space complexity of O(n^4). Like I said previously, I don’t know much about raytracing but time/space complexity speaks for itself.

Maybe the future is more along the lines of multipass algorithms such as those suggested by Paul J. Diefenbach. He illustrates global illumination using multipass techniques.

Originally posted by PH:
Although I haven’t done any raytracing I doubt it will be what everybody is doing in a few years. If I’m not mistaken, the complexity of ray tracing is far from (sub)linear in the number of rays. It would require a massively parallel processer to run in real-time. I’m sure hacks are possible but real raytracing in real-time is not likely to happen any time “soon”.

Soon enough ? ;]

I recommend the first paper as an introduction. There are no great “hacks” as far as I can tell, apart from using a static, preprocessed scene.

Now, I’m not qualified to judge whether it can ever become practical for heavily dynamic interactive applications (games), but it’s certainly working for some applications now, and on consumer hardware to boot.

well… why do hardware evolution at all if you don't want accurate simulation of reality? there is no need. take a gf4, push tons of triangles and be happy… i have been looking for 2 years now at ways to get accurate lighting in realtime for dynamic worlds, and the more i see, the more i have to say brute force with brains will be the only way to solve general situations. as long as we can't solve general situations, we can't render arbitrary scenes… and that's what i want: the power to have every kind of world for games…

well, it's not really O(n^4)…

it's quite linear in the number of rays. that means if you have several samples per point, and then for each of those samples again several samples per point, then yes, it grows damn fast… but remember we don't need that many samples there anymore really, just because the screen and our eye are not that accurate anyway…

well… i need several parallel processors doing tasks for me (3 or 4), and they work in linear time on a simple stream of data. current rasterizers have 2 processors working on simple streams of data.

sure there is a big difference, but i think it's worth it, because we're currently hitting the edge of what rasterizers can do. making the next big step towards more accurate lighting and materials is nearly impossible, because of the local structure of rasterizers. i dunno how much more we can fake, but i dislike those fakes even now… (take a look at the water on gf3/4, it looks so faked… nice but faked anyway… (i mean the env-bump reflections) at least they should have made a z-difference-dependent displacement factor, but no…)

well no, the lines should not be straight if i capture my scene with a camera with lenses. if i take a cam, go out and film a straight line, then i see a curve on my tv at home.

“…instead of making the most of the available hardware…” well, i can't get much more out of my hardware anymore, it's a gf2mx, and i've already pushed it to quite a lot of work. but as i want to upgrade, i'm thinking about how much i can get out of the hardware available currently, and available soon. and i know how much i could push out of a gf4, and it's simply not worth buying… funny emboss features, better lighting schemes, okay. more accurate realizations of the equations i already run on the gf2, okay. and it's faster, yeah… but it's not a real evolution somehow. and so it's not worth that much money. i'll wait for the next generation, and then i'll push that to the limits.

pushing to the limits means getting more out of it than it was designed for, not plotting tons of triangles… that is OLD. now we want GOOD triangles. ACCURATE triangles. STUNNING triangles.

and yeah, i would love to have a pc doing all the stuff brute force so i don't need to solve/fake the stuff myself to get around its “slowness”…

i look forward to seeing my results as well.

but there will be a time when we have to change our roots. the earlier we step away from the current roots, the easier the step will be… (it's still too hard to simply switch, because rasterizers are so much faster currently…)

Thanks for the link Maj. That’s very interesting. I’ve had a quick look and it doesn’t appear to be applicable to dynamic scenes ( though I have seen RTRT demos with simple moving objects ). I can’t imagine complex dynamic scenes in real-time being possible “soon” - I still think that there are better ways to achieve equivalent results that scale better with future hardware.

I’m certainly looking forward to seeing what can be done with RTRT ( keep us posted davepermen ).

i just suggest remembering one thing… rasterizing is not old (realtime rasterizing, i mean). remember the time when we had a 486 and could just barely play quake1 at 320x240… and that with very stupid lighting (and cool lightmaps) and very low amounts of everything…

and now? what do we have now? final fantasy the movie with about 12 to 25fps on a gf4…

ray tracing will soon be fast enough for quake1 at 320x240 on a pc… but it will start with per-pixel lighting, everything shadowed, and reflections/refractions… where will it be once we've made the same steps with it that we've made by now with rasterizers?

and even this very simple first “big game” of rtrt will have features in it which are not possible on a gf4 at all… and not with future rasterizer algorithms either…

so what?

ray tracing hasn't had the opportunity to be pushed to the extreme by everyone, just because of the initial power it needs for even a small scene…

the moment it gets that support to be pushed to the extreme, it will grow as exponentially as rasterizers did over this time…

and no one thought at the time of quake1 that what we have today would be possible this soon, did you?

That’s not really the issue; the unbounded complexity of the model, cache coherency, and the overarching traversal structure (and its associated processing) are. It’s an inherently retained-mode architecture with considerable structural requirements like octrees or whatever. Insufficient distinction is being made between the various graphics problems. You’re lumping them all together regardless of performance and other existing solutions. Ray tracing is the brute force approach to graphics, and that’s doubly true when you talk about Monte Carlo sampling for illumination, which even most hardcore ray tracing advocates don’t consider practical.

When talking about ray tracing I think you should be clear on what problems you are trying to solve. Not all problems are the same, and just because ray tracing can solve one doesn’t mean it’s the best way.

Just because ray tracing will get faster doesn’t mean other approaches won’t outpace it. Why should I ray trace when there are faster, more practical approaches? Ray tracing for ray tracing’s sake is not a good policy.

Heck, many people accelerate ray tracing today by getting first-hit info using hardware rasterization. With more complex programmable framebuffers and interpolators even more of that kind of thing will be possible. Imagine, for example, rasterizing various colors, world-space positions, normals etc. in a conventional rendering pass, perhaps even shading, and including a couple of destination alpha terms, say for a Fresnel reflection and/or refraction term. Then, once you know the information at your first hit and have your primary lighting done quickly, you could process the reflection or more complex illumination (with ray tracing if you insist).
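
In rough C++ pseudostructure the hybrid looks something like the sketch below: the rasterizer fills per-pixel position/normal/shaded-colour buffers (plus a reflection weight in alpha), and rays are only spent on the secondary bounce. The buffer layout and traceReflection() are assumptions for illustration, not any real API.

[code]
// Hybrid sketch: rasterized first hit, ray traced secondary bounce.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub3(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot3(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  norm3(Vec3 a)          { float s = 1.0f / std::sqrt(dot3(a, a));
                                      return { a.x * s, a.y * s, a.z * s }; }
// Reflect the normalized incident direction v about the normalized normal n.
static Vec3 reflectDir(Vec3 v, Vec3 n)
{
    float d = 2.0f * dot3(v, n);
    return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
}

Vec3 traceReflection(Vec3 origin, Vec3 dir);   // assumed: the only rays actually shot

struct GBufferTexel {          // written by a conventional rasterization pass
    Vec3  worldPos;            // first-hit position
    Vec3  normal;              // first-hit normal
    Vec3  shadedColor;         // primary lighting already done by the rasterizer
    float reflectivity;        // e.g. a Fresnel term stored in destination alpha
};

void resolve(const std::vector<GBufferTexel>& gbuf, std::vector<Vec3>& out, Vec3 eye)
{
    for (std::size_t i = 0; i < gbuf.size(); ++i) {
        const GBufferTexel& px = gbuf[i];
        Vec3 color = px.shadedColor;

        if (px.reflectivity > 0.01f) {                 // only pay for rays where it matters
            Vec3 v = norm3(sub3(px.worldPos, eye));
            Vec3 r = reflectDir(v, px.normal);
            Vec3 bounce = traceReflection(px.worldPos, r);
            color.x += px.reflectivity * bounce.x;
            color.y += px.reflectivity * bounce.y;
            color.z += px.reflectivity * bounce.z;
        }
        out[i] = color;
    }
}
[/code]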

I’m not saying do it one way, but be pragmatic about your method.

[This message has been edited by dorbie (edited 05-10-2002).]

Dave,
Well, Quake1 was very impressive. The difference is: improving the lower bound of an algorithm is a lot harder (that's science) than improving a constant (low-level assembly hacks). As seen in the papers (link from Maj), distributed ray tracing is key to real-time performance (2fps is interactive, though hardly real-time).

P.S.

Another point overlooked is that a ray tracing interface would probably be some sort of scene graph, unless you intend to implement your own optimized ray database traverser. So folks espousing this need to think carefully about how they would expect to program such a system: how you give it your data and how you specify fragment shading. Would you expect to do your own math at the fragment and just request arbitrary recursive shading results from any old direction, for example, or let the hardware just handle the whole thing (clearly you wouldn't be happy with that)? The implications for stalling a pipeline seem horrendous.

funny to post questions you don’t care about, isn’t it?

about cache problems: there are tons of cache problems with simple rasterizers and textures, but i hardly see problems on my gf2mx. why? because the architecture itself is intelligent enough (and the p10 will have MUCH more power on this issue; we'll see if it really does, but according to the info, yes)

there is not really a scene graph needed; you “render the geometry into the world” and the ray tracer captures the images from it.

but it's all open. do what you want. benchmark to see which is fastest and which isn't.

i don't do any static stuff, because i hate it. i never used glGenLists and glCallLists in my code. NEVER. even if i knew it would be faster, i never cared. so don't think my ray tracer will ever use any precompilation. others will, and they will logically be much faster. some will do fixed-function shading, some will give full access to programmable shaders. there is no point in talking about that; that's part of the implementation.

my approach is to code a fixed Phong shader with bump maps and an arbitrary number of (currently point) light sources (cube maps don't work that well for software engines, so no directional maps…)
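
As a rough illustration of what that might look like (a hedged sketch, not the poster's actual shader): a fixed Phong evaluation summed over a list of point lights. A bump map would simply supply the per-pixel normal n here; all names are illustrative.

[code]
// Phong: diffuse N.L plus specular (R.V)^shininess, summed over all point lights.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

struct PointLight { Vec3 pos; Vec3 color; };

Vec3 shadePhong(Vec3 p, Vec3 n, Vec3 eye, Vec3 albedo, float shininess,
                const std::vector<PointLight>& lights)
{
    Vec3 v = normalize(sub(eye, p));                       // direction to the viewer
    Vec3 result = { 0, 0, 0 };

    for (const PointLight& light : lights) {
        Vec3  l    = normalize(sub(light.pos, p));         // direction to the light
        float ndl  = std::max(0.0f, dot(n, l));            // diffuse term
        Vec3  r    = sub(scale(n, 2.0f * dot(n, l)), l);   // reflected light direction
        float spec = ndl > 0.0f ? std::pow(std::max(0.0f, dot(r, v)), shininess) : 0.0f;

        result.x += light.color.x * (albedo.x * ndl + spec);
        result.y += light.color.y * (albedo.y * ndl + spec);
        result.z += light.color.z * (albedo.z * ndl + spec);
    }
    return result;
}
[/code]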

quake1 was very impressive. but well, when i play it now and at the same time take a look at halo on xbox, for example… well… it's crack

i don't do assembler hacks; i'm too new in this whole environment, i was born with c++ so… i know asm but i'm not that familiar with it. i always code in c/c++ and all the optimizations i do are algorithmic (and simply writing only fast code). and well, my algorithm is only O(n*m) for now, where n is the number of rays and m the number of intersection tests needed per ray. now, n varies dynamically, and m depends on how you manage the scene, yes. if you have some tree then you don't need to test that much.
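
To make that O(n*m) shape concrete: with no spatial structure, every ray is tested against every object and the nearest hit wins, so the cost is (number of rays) x (number of objects). A hedged sketch; the Ray type and the intersect callable are placeholders, not anyone's real tracer.

[code]
// Naive O(n*m) tracing loop: n rays, m intersection tests per ray.
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

struct Ray {};                                     // placeholder ray type
struct Hit {
    float       t      = std::numeric_limits<float>::max();
    std::size_t object = 0;
    bool        any    = false;
};

// 'intersect' is whatever per-object test the tracer uses: it returns the hit
// distance along the ray, or a negative value on a miss.
std::vector<Hit> traceAll(const std::vector<Ray>& rays, std::size_t objectCount,
                          const std::function<float(const Ray&, std::size_t)>& intersect)
{
    std::vector<Hit> hits(rays.size());
    for (std::size_t r = 0; r < rays.size(); ++r) {        // n rays ...
        for (std::size_t o = 0; o < objectCount; ++o) {    // ... times m tests each
            float t = intersect(rays[r], o);
            if (t >= 0.0f && t < hits[r].t) {               // keep the nearest hit
                hits[r].t      = t;
                hits[r].object = o;
                hits[r].any    = true;
            }
        }
    }
    return hits;   // a tree, grid or bounding-box hierarchy shrinks m per ray
}
[/code]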