Shader Model 3.0 in OpenGL

Originally posted by Relic:
Dude, if R2VB means render to vertex buffer, that’s something different from texture fetches inside the vertex pipeline.
Yes, but R2VB can implement everything that VTF can, and it is much faster.
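For context, vertex texture fetch (VTF) in GLSL looks like the sketch below: a displacement-mapped vertex shader. The `heightMap` sampler and `displacement` uniform names are illustrative, not from any post in this thread. Note `texture2DLod` is required because the vertex stage has no derivatives, so an explicit LOD must be supplied:

```glsl
// Minimal VTF sketch: displace each vertex along its normal by a height map.
// Needs SM3.0-class hardware (vertex texture fetch support).
uniform sampler2D heightMap;   // illustrative name
uniform float displacement;    // illustrative name

void main()
{
    // Explicit LOD: no derivatives exist in the vertex stage.
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.xy, 0.0).r;
    vec4 pos = gl_Vertex + vec4(gl_Normal * h * displacement, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```

R2VB achieves the same effect by rendering the displacement data into a buffer that is then rebound as vertex input, avoiding the (historically slow) vertex-stage texture units.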

Originally posted by execom_rt:
Except that ATI doesn’t support alpha-blend floating point textures in OpenGL, even with the Radeon X1800 (it does, but in software rendering) …
It has been implemented and should appear in a future driver.

Originally posted by Humus:
[quote]Originally posted by execom_rt:
Except that ATI doesn’t support alpha-blend floating point textures in OpenGL, even with the Radeon X1800 (it does, but in software rendering) …
It has been implemented and should appear in a future driver.
[/QUOTE]Humus, you know ATI hardware pretty well… can you explain to me how HDR + AA works on ATI hardware, the difference from Nvidia HDR, and how Valve implemented HDR + AA on Nvidia?
It seems that even on ATI cards, supporting HDR + AA requires some programming, just like on Nvidia… but on ATI hardware it’s easier, though it still needs extra programming. Or am I wrong, and any game that has HDR implemented can use AA without any other work in the code?

thanks for the help ^^

You need to write code for it. It would be nearly impossible for the driver to override it, since you don’t do the HDR in the backbuffer but with a separate FP16 render target. The driver could make some educated guesses when to override a render target with a multisampled version, but that would be prone to failure.

Instead the app has to create a multisampled render target and do the resolve blit itself. This is very easy though, so it should not be a problem.
The difference between ATI and Nvidia here is that the X1K series can multisample FP16 surfaces while Nvidia can’t. This means that if you do HDR the regular way with FP16 render targets you get no AA on Nvidia. That it works in Valve’s implementation is because it’s kinda hackish. They don’t use FP16, but regular RGBA8, and append tonemapping at the end of the shaders AFAIK. This means they lose linearity with blending and other nasty stuff, but with some tweaking from the artists they can make it look quite good anyway.
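The setup Humus describes can be sketched in OpenGL using the EXT_framebuffer_multisample and EXT_framebuffer_blit extensions of that era (GL_RGBA16F_ARB comes from ARB_texture_float). This is a sketch under those assumptions, with error checking and the resolve target’s own texture setup omitted:

```c
/* Sketch: multisampled FP16 render target + manual resolve blit.
 * Assumes EXT_framebuffer_object, EXT_framebuffer_multisample and
 * EXT_framebuffer_blit are available; `resolveFbo` is assumed to wrap
 * a plain GL_RGBA16F_ARB texture. Error checks omitted for brevity. */
GLuint msFbo, msColor;

/* Multisampled FP16 color buffer (4x). */
glGenRenderbuffersEXT(1, &msColor);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msColor);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4,
                                    GL_RGBA16F_ARB, width, height);

glGenFramebuffersEXT(1, &msFbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msFbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, msColor);

/* ... render the HDR scene into msFbo ... */

/* Resolve blit: average the samples into the single-sampled FP16
 * texture, which the app then tonemaps to the backbuffer. */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);
```

The key point matching the post: the app, not the driver, owns both the multisampled target and the resolve step.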

Originally posted by Humus:
[b]You need to write code for it. It would be nearly impossible for the driver to override it, since you don’t do the HDR in the backbuffer but with a separate FP16 render target. The driver could make some educated guesses when to override a render target with a multisampled version, but that would be prone to failure.

Instead the app has to create a multisampled render target and do the resolve blit itself. This is very easy though, so it should not be a problem.
The difference between ATI and Nvidia here is that the X1K series can multisample FP16 surfaces while Nvidia can’t. This means that if you do HDR the regular way with FP16 render targets you get no AA on Nvidia. That it works in Valve’s implementation is because it’s kinda hackish. They don’t use FP16, but regular RGBA8, and append tonemapping at the end of the shaders AFAIK. This means they lose linearity with blending and other nasty stuff, but with some tweaking from the artists they can make it look quite good anyway.[/b]
Thanks a lot for taking the time to answer my newbie question; now the difference is pretty clear. It looks like the Nvidia G71 will come with AA on FP16 surfaces too… that would be nice :smiley:

A patch has been released for Far Cry that enables the use of AA + HDR on Nvidia hardware… Was Far Cry’s HDR always RGBA8, or did they have to change all the code to get AA + HDR working on Nvidia?

It seems that even if they can achieve AA + HDR on Nvidia, it will not have the same quality as INT10 or FP16 HDR + AA on ATI.

http://prohardver.hu/c.php?mod=20&id=996&p=6

In this interview Eric Demers says that 10-bit HDR can have the same precision as, and more speed than, the FP16 that is used today.

And in this interview it seems that Adaptive AA can be used on older ATI architectures:

“Some form of adaptive AA is going to be made available on previous architectures. Our fundamental AA hardware has always been very flexible and gives the ability to alter sampling locations and methods. In the X1k family, we’ve done more things to improve performance when doing Adaptive AA, so that the performance hit there will be much less than on previous architectures. However, the fundamental feature should be available to earlier products. I’m not sure on the timeline, as we’ve focused on X1k QA for this feature, up to now. Beta testing is ongoing, and users can use registry keys to enable the feature for older products, for now.”

The question is whether the older architectures will have enough power to sustain Adaptive AA… it seems to be more demanding on the hardware.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.