3D Vision and GL 3.x+

Ah, that makes sense. I can imagine this solution breaking down hard on HUDs, postprocessing effects and the like. From various articles on the effect, it seems that Nvidia simply reverse-engineers the games’ shaders and plugs in its own world matrices wherever possible. It’s an ugly hack that can never be robust.

What they could (and should) do is provide a way to enable QBS. Realistically, this is possible even without modifying D3D itself (e.g. link with nvision.dll, call nvEnableQBS() before creating the D3D device, and nvSelectQBSBuffer(NV_BACK_LEFT/NV_BACK_RIGHT) to route rendering commands). Of course, this would eat into their Quadro sales, so it’s a no-go. I could also imagine Microsoft wanting some say in the matter.
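A sketch of how that could look from the application side (every name here is made up, to be clear; drawScene() and the matrix pairs just stand in for the app’s normal rendering):

[code]
// Purely hypothetical interface for the imaginary nvision.dll above;
// none of these functions exist, this only sketches the idea.
enum NvQBSBuffer { NV_BACK_LEFT, NV_BACK_RIGHT };
void nvEnableQBS();                      // call before creating the D3D device
void nvSelectQBSBuffer(NvQBSBuffer eye); // route subsequent draws to one eye

void renderFrame()
{
    nvSelectQBSBuffer(NV_BACK_LEFT);
    drawScene(leftView, leftProjection);   // app's usual render pass
    nvSelectQBSBuffer(NV_BACK_RIGHT);
    drawScene(rightView, rightProjection);
    // Present() as usual; the driver flips both backbuffers in sync
}
[/code]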

Nice! So are there two shaders for the same effect, one for regular, and one for stereo? That is one of my main concerns.
Sadly, my OpenGL class went shader-based the month after mine, so I don’t have much experience with shaders. I’ll be auditing the class after I’m done with school, if not sooner.

Only one shader needed. Just tweak the projection slightly to the left then to the right.

Just tweak the projection slightly to the left then to the right.

And how do you know what the projection is? Or even if the shader is intended to render something to the viewport (rather than say, a render target for an in-game monitor)?

That’s the problem with denying applications explicit QBS control; the shader hack can’t really know when to apply itself and when not to.

Okay, wait, for those not following: for the last few posts, there have been two different discussions running in parallel.

One: it is EASY to port the SOURCE CODE of an application to take advantage of quad-buffered stereo or even anaglyph rendering.

Two: it is FUNDAMENTALLY FLAWED to expect Nvision (or any other system) to be able to magically guess how to turn a mono application into stereo rendering without serious artefacts or loss of effects.

Agreed, on both points.

To get correct stereo, you need to create an off-axis projection matrix, and offset the view matrix to compensate for the shift in the projection matrix. That can be done in a vertex shader, I suppose.
You need to know a few things about the world you’re viewing, and about the viewer too - like how far the viewer’s eye is from the monitor (or projector screen), and indeed its position if you’re doing tracked stereo… and where to put the near plane, because the near plane can be any point behind or in front of the screen. You also need to know the real-world size of the screen (monitor/projector screen).
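For the record, here’s a minimal sketch of that off-axis setup, in fixed-function GL for brevity (the parameter names and conventions are mine; measure everything in the same real-world units):

[code]
#include <GL/gl.h>

// Off-axis frustum for one eye. screenW/screenH are the physical screen size,
// viewerDist is the eye-to-screen distance, and eyeOffset is -IOD/2 for the
// left eye and +IOD/2 for the right. Illustrative sketch only.
void setEyeProjection(double eyeOffset, double screenW, double screenH,
                      double viewerDist, double zNear, double zFar)
{
    // Project the physical screen edges back onto the near plane.
    double s      = zNear / viewerDist;
    double left   = (-screenW * 0.5 - eyeOffset) * s;
    double right  = ( screenW * 0.5 - eyeOffset) * s;
    double bottom = -screenH * 0.5 * s;
    double top    =  screenH * 0.5 * s;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left, right, bottom, top, zNear, zFar);

    // Compensate in the view matrix: shift the camera sideways by the offset.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslated(-eyeOffset, 0.0, 0.0);
    // ...the normal camera transform goes on top of this.
}
[/code]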
That’s an awful lot of stuff to try and infer in the driver.
I’d recommend you get yourselves a Quadro (the five-year-old ones can be picked up second-hand for next to nothing), get some shutter glasses and an emitter (a bit more expensive), and get a Wii Remote. Download the Wii Bluetooth software, put some reflective pads on the shutter glasses, put the Wiimote on top of the emitter, and hook it all into your renderer.
Then you’ll see something you never thought possible for so little money. Incredible stuff.

So then, how is that shader run? You switch to projection mode, then run it? How do you account for the two positions?

Well, I know how to make a 3D image in code; it’s the shader part I’m interested in. If we’re talking about calculating for stereo, Binocularity.org is actually a source in my research, along with a webpage on Calculating Stereo Pairs. Example code too. Adjust eye separation in the in-game settings, and I think the basics are set.
As far as game development goes, though, I’d rather not get something so old. I need a little oomph, which is why I’m looking at the Quadro 580 (I’m a student, so I need to go a little cheap). I already have a GeForce 9400 PCI connected, which is OGL 3.1 and CUDA capable, so I have a good deal of the heavy lifting taken care of for general development.
It’d be interesting to program head movement as well, but right now I’m developing on an iZ3D monitor (passive polarization, RealD 3D-type glasses, no shutter here!), so I’ll worry about head tracking later.
Wow… that post kinda sounded like an advertisement…

@me262: you render twice, once from the viewpoint of each eye (with slightly tweaked projection matrices). The first rendering goes to the left backbuffer, the second to the right backbuffer, and that’s it. You don’t need to modify your shaders, you just need to run them twice!
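Something along these lines, assuming you’ve already created a quad-buffered context (drawScene() and the matrix pairs stand in for your usual pass):

[code]
// Needs a stereo (quad-buffered) context, e.g. GLUT_STEREO or PFD_STEREO.
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(leftView, leftProjection);     // same shaders, left-eye matrices

glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene(rightView, rightProjection);   // same shaders, right-eye matrices

// swap buffers as usual; both backbuffers flip together
[/code]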

As ZBuffeR said, this is really easy (trivial even) if you have access to the source code: just run the rendering step twice with the correct projection/modelview matrices. However, this is impossible to implement automatically through the driver in the general case (which is what nvision is trying to do).

yes, it actually is very simple. the projection matrix has to be based on an off-axis projection (asymmetric frustum) and the viewing matrix has to be translated by the eye offset. that is basically it, and it is something that IMO cannot be done on the fly, because the hack will most definitely assume wrong things about the projection and viewing matrices (if it is even able to identify them; and they might be passed as composites to the shader, which would mean game over!).

sad story again, nvidia! i was on this train 10 years ago and now history seems to repeat itself with nicer gadgets (wireless glasses and lcd screens)…

Nice! So are there two shaders for the same effect, one for regular, and one for stereo? That is one of my main concerns.
Sadly, my OpenGL class went shader-based the month after mine, so I don’t have much experience with shaders. I’ll be auditing the class after I’m done with school, if not sooner.
as ZbuffeR stated, the shader will not know about the viewing setup; it just gets different projection and viewing matrices as uniforms. and there is the problem with the nvision way: the application may deliver the composite matrices for model_view and model_view_projection, and the 3d-hack driver has no way of adding the correct offsets and correct projections…

there is another problem: it might patch only the viewing matrix, which will end up giving you false stereo parallax without the correct off-axis projection.
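to make the composite-matrix problem concrete, lots of engines upload something like this (names here are illustrative), and at that point the per-eye information is simply gone:

[code]
// typical engine-side upload: projection, view and model are pre-multiplied
// on the CPU, so the driver only ever sees one opaque composite mat4.
// a stereo hack intercepting this uniform cannot split it back into
// projection and view, so there is no place to inject a per-eye off-axis
// projection and eye translation.
const char* vsrc =
    "uniform mat4 u_mvp;                    // projection * view * model\n"
    "attribute vec4 a_position;\n"
    "void main() { gl_Position = u_mvp * a_position; }\n";

void uploadMVP(GLuint prog, const GLfloat mvp[16])  // mvp composed on the CPU
{
    glUniformMatrix4fv(glGetUniformLocation(prog, "u_mvp"), 1, GL_FALSE, mvp);
}
[/code]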

exactly. The scale of the scene cannot be inferred either. NVidia is most definitely not the way stereo is ‘meant to be played’.

I’m encouraged to see this being discussed. I was beginning to think nobody in realtime graphics cared about stereo projection. I’ve tried to set up discussions on the subject over the years, with little interest. But now that we’ve got consumer 120 Hz displays, things should change. We should set up a petition to get NVIDIA and ATI to expose quad-buffered stereo pixel formats.

Now that RealD 3D and Avatar have really pushed the boundaries, I expect stereo projection will gain more popularity. I don’t mind them not exposing it; I just want to see QBS on the consumer cards now. Even my mobile Quadro 1600M doesn’t have QBS!
I’m all for the petition! Sign me up!

I’m lost. What do you mean you don’t mind them not exposing it, but you want to see QBS on the consumer cards?
That’s almost the definition of a contradictory statement.

Yeah, that’s a little weird. Personally, I’m very very irked at Nvidia not exposing QBS on my Quadro NVS-based laptop. That’s just low.

So even the “buy Nvidia Quadro to have Quad Buffered Stereo” argument does not hold.

It would be a good idea for ATI to expose QBS everywhere…

we should not forget that the “buy Nvidia Quadro to have Quad Buffered Stereo” argument also means getting a $3000+ very-high-end quadro card for performance comparable to our current $400 consumer counterparts. i mean, who would want a Quadro FX 1800 ($400, mid-range) for stereoscopic rendering when it is basically equal to a GeForce 8800GS ($100, low end)? it is a joke; i do not want to just look at still images, or images rendered at OpenGL 1.5-level quality, in real time… and i am sure potential customers do not want to either!

It would be a good idea for ATI to expose QBS everywhere…

I can’t believe I’m even saying this, or that this is happening. But ATI is going to do Quad Buffer Stereo on their Radeon parts. ATI is now more on the ball with OpenGL than NVIDIA.

It is the sixth sign of the Apocalypse.

Come on NVIDIA, you’re the ones with the actual 3D vision hardware. Get it done, already!

really good news, but no mention of OpenGL in the article. i hope this is not just a proprietary D3D extension.