An off-topic note:
Off-topic for this thread, but not in my experience. Maybe it depends on your definition of “good”… compared with Quartz, or with ray tracing, where you can shoot an arbitrary number of rays through every pixel, it totally sucks.
Sure, compared to ray tracing, which isn’t designed for real-time, it falls short. But for a triangle rasterizer, ATI’s R300 anti-aliasing is the best around.
Now, back on-topic.
But a new design for GPUs that would allow them to run at higher frequencies might come soon. ATI just licensed a technology from Intrinsity that should allow them to quadruple the frequency at which the processing units run. A 4-pipe GPU running at 1 GHz would outperform an 8-pipe GPU running at 500 MHz, because the wider design is harder to keep fully loaded.
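To make the arithmetic concrete, here’s a toy model in C. The granularity rule is my own assumption, not anything ATI has stated: suppose a rasterizer can only fill pixels from one triangle per clock, so a triangle smaller than the pipe count leaves pipes idle.

    /* Toy model: effective fill rate when small triangles can't keep
     * all pipes busy. Both designs peak at 4 Gpix/s. */
    #include <stdio.h>

    /* Gigapixels/s achieved on triangles of a given pixel count. */
    double effective_gpix(int pipes, double clock_ghz, int tri_pixels)
    {
        int cycles = (tri_pixels + pipes - 1) / pipes; /* ceil(px/pipes) */
        return (double)tri_pixels / cycles * clock_ghz;
    }

    int main(void)
    {
        for (int px = 2; px <= 16; px *= 2)
            printf("%2d-px tri: 4p@1GHz %.2f vs 8p@0.5GHz %.2f Gpix/s\n",
                   px, effective_gpix(4, 1.0, px),
                   effective_gpix(8, 0.5, px));
        return 0;
    }

Under this model the 4-pipe part is never slower and wins outright on 2- and 4-pixel triangles, which is exactly the loading argument.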
While it is true that fewer pipes at a higher clock speed is better, you have to realize that…
Having the CPU read/write directly to the graphics memory, and the GPU to the main memory, would cut a lot of the driver and AGP latencies (which are becoming increasingly important; read BatchBatchBatch.pdf from GDC 2003).
First, that PDF is directed at D3D only. Second, the reason it targets D3D only is that the IHV portion of D3D isn’t allowed to marshal calls itself; the D3D runtime does it (and not in a particularly thoughtful way). This has nothing to do with bandwidth between the card and the CPU.
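To make the talk’s actual point concrete — it’s per-call CPU overhead, not card-to-CPU bandwidth — here’s a rough OpenGL sketch (the scene structures are made up; only the glDrawElements calls are stock API):

    /* Sketch of the per-call-overhead point: each draw call pays a fixed
     * CPU cost in the runtime/driver, so many tiny batches leave the GPU
     * starved. The scene structures here are hypothetical. */
    #include <GL/gl.h>

    struct object {
        int index_count;
        const unsigned short *indices;
    };

    /* Slow: N calls -> N times the fixed marshaling/driver cost. */
    void draw_per_object(const struct object *obj, int num_objects)
    {
        for (int i = 0; i < num_objects; i++)
            glDrawElements(GL_TRIANGLES, obj[i].index_count,
                           GL_UNSIGNED_SHORT, obj[i].indices);
    }

    /* Fast: objects sharing render state merged offline into a single
     * index list; one call amortizes the per-call overhead. */
    void draw_merged(const unsigned short *merged_indices, int merged_count)
    {
        glDrawElements(GL_TRIANGLES, merged_count,
                       GL_UNSIGNED_SHORT, merged_indices);
    }

Whether the marshaling happens in the runtime (D3D) or the driver (OpenGL), the cure is the same: fewer, fatter batches.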
It would also mean that one could write assembly code that runs directly on the GPU (bypassing the driver).
Neither OpenGL nor D3D is ever going to provide an API for that. Nor should they.
That’s why I said we need an OpenRT spec, something HW manufacturers could shoot for!
Why would they want to? Remember, even high-end movie FX houses don’t use ray tracing that frequently. So why should hardware manufacturers provide that ability in consumer-level cards? If scanline rendering is good enough for Pixar, isn’t it good enough for everybody else?
The “graphics innovations are almost dead” crowd keeps reiterating that we’re nearing the point where hardware runs out of interesting things to do, and CPUs will eventually catch up. The fatal flaw in that argument is the leap from “we’ve come a long way and advances are starting to plateau” to “innovation and interest in further advancement will cease.” True, advances seem to be plateauing, but it isn’t a plateau; it’s an elbow in the curve. What it really means is that we’re just now getting to the hard part. The easy problems have been solved, and what’s left is the long trudge ahead that, step by step, will get us ever closer to visual realism. That path will likely not end in our lifetimes, if ever.
In general, my belief is that in 3-5 years the only difference between cards will be performance, and performance will be the only impetus to upgrade as well. Graphics cards will be feature-complete.
I haven’t tried to write out this list, but I’d say, with Doom 3, our image quality is 75% as good as the ultimate image quality we can imagine.
Lol!
We aren’t even 25% of the way there. What we have done is, as someone said, “picked the low-hanging fruit.” We’ve done the easy stuff: texturing to add detail, surface detail interacting with lights (bump mapping), reasonably correct shadows. Now comes all the really hard, but really important, stuff.
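For reference, the bump-mapping part boils down to something this small per pixel. A bare-bones sketch, with tangent-space transforms and the specular term left out; all the names here are illustrative:

    /* Per-pixel diffuse lighting with a normal read from a normal map
     * instead of the interpolated surface normal. */
    #include <math.h>

    typedef struct { float x, y, z; } vec3;

    float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    vec3 normalize3(vec3 v)
    {
        float len = sqrtf(dot3(v, v));
        vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* Diffuse term: the perturbed (mapped) normal meets the light. */
    float bump_diffuse(vec3 map_normal, vec3 light_dir)
    {
        float n_dot_l = dot3(normalize3(map_normal), normalize3(light_dir));
        return n_dot_l > 0.0f ? n_dot_l : 0.0f; /* clamp back-facing */
    }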
Take shadows, for instance. Whether you use shadow maps or shadow volumes, neither provides easy soft shadows. But soft shadows are vital for photorealism, and doing them is hard. We did the easy part: hard shadows. Now comes the difficult, yet vital, part.
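The usual cheap approximation on the shadow-map side is percentage-closer filtering: average several depth comparisons instead of one, which blurs the shadow edge but doesn’t give physically sized penumbrae. A minimal sketch (the map layout is made up, and depth bias is omitted for brevity):

    /* Percentage-closer filtering over a shadow map: average the results
     * of several depth comparisons to soften the shadow edge. */
    #define SM_SIZE 512
    float shadow_map[SM_SIZE][SM_SIZE]; /* depth as seen from the light */

    /* Light visibility in [0,1]: 0 = fully shadowed, 1 = fully lit.
     * (u,v) is the fragment's position in the light's view, in [0,1). */
    float pcf_visibility(float u, float v, float fragment_depth)
    {
        int x = (int)(u * SM_SIZE);
        int y = (int)(v * SM_SIZE);
        int lit = 0, samples = 0;

        /* Compare against a 3x3 neighborhood instead of a single texel. */
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int sx = x + dx, sy = y + dy;
                if (sx < 0 || sy < 0 || sx >= SM_SIZE || sy >= SM_SIZE)
                    continue;
                samples++;
                if (fragment_depth <= shadow_map[sy][sx])
                    lit++; /* this sample can see the light */
            }
        }
        return samples ? (float)lit / (float)samples : 1.0f;
    }

Real soft shadows, where the penumbra grows with distance from the occluder, are much harder than this, which is exactly the point.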
These kinds of subtle things are what separate “that’s pretty decent CGI” from “that’s CGI?!” Without these subtle interplays of light, you aren’t getting the job done.