How to generate stereo manually

That’s obviously a plus for a Quadro card, which makes me wonder how Nvidia handles that problem in their 3D Vision driver…
You basically need to know your framerate BEFORE you render.

I assume the technical details are not going to be revealed by Nvidia.

An idea: you could render both eyes into textures and decide, after rendering but before the swap, which one should be displayed. That way a left/right mix-up might be avoided.
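If the application could know (or guess) the vsync phase, that pre-swap choice could be as simple as a parity test. A minimal sketch, assuming a hypothetical vsync counter synchronised with the glasses (`eyeForVsync` and the even-is-left convention are my inventions, not an NVIDIA API):

```cpp
#include <cstdint>

enum class Eye { Left, Right };

// Hypothetical helper: given a vsync counter synchronised with the shutter
// glasses, report which eye the next scanout belongs to.  This sketch
// assumes the emitter opens the LEFT shutter on even vsync counts.
Eye eyeForVsync(uint64_t vsyncCount) {
    return (vsyncCount % 2 == 0) ? Eye::Left : Eye::Right;
}
```

Before the swap you would blit the texture for `eyeForVsync(counter)` into the back buffer; the hard part, as noted above, is that no consumer API reliably exposes such a counter.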

This is not complex for the card, whether it is a Quadro or a consumer GeForce. The driver has the hardest part in consumer 3D Vision mode, as all drawing commands between two SwapBuffers calls must be stored, then duplicated, then each projection tweaked for the left and right eyes respectively. This step is the most problematic; Nvidia often advises disabling in-game shadows to prevent problems… INSTEAD OF JUST PROVIDING THE CORRECT API TO GAME PROGRAMMERS, WHICH ALREADY EXISTS AS QUAD BUFFERED STEREO !!! grmbl, sorry.

Then both command batches are sent to the card, one drawing to FBO1, the other to FBO2, and the driver goes on with the next user-supplied frame. In parallel, the card loops the display of FBO1 then FBO2 after each vsync, in sync with the shutter glasses’ left and right eyes (using interrupts).
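For reference, the “correct” per-eye projection tweak that game programmers could do themselves is the classic parallel-axis asymmetric frustum. A sketch, with function and parameter names that are mine rather than any NVIDIA API:

```cpp
#include <cmath>

struct Frustum { double left, right, bottom, top; };

// Off-axis frustum for one eye (the per-eye projection tweak done properly).
// eye = -1 for left, +1 for right.  The two frusta differ only by a
// horizontal skew proportional to the eye separation.
Frustum stereoFrustum(double fovyRad, double aspect, double zNear,
                      double convergence, double eyeSep, int eye) {
    double top   = zNear * std::tan(fovyRad / 2.0);
    double halfW = aspect * top;
    double shift = 0.5 * eyeSep * zNear / convergence;  // frustum skew
    return { -halfW - eye * shift, halfW - eye * shift, -top, top };
}
```

You would feed the result to glFrustum() for each eye and also translate the modelview by ∓eyeSep/2. That horizontal skew is exactly what the driver has to guess from the captured command stream in automatic mode.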

One big problem is the black-box delay added by LCD screens, which prevents some users from seeing 3D…

The driver has the hardest part in consumer 3D Vision mode, as all drawing commands between two SwapBuffers calls must be stored, then duplicated, then each projection tweaked for the left and right eyes respectively. This step is the most problematic; Nvidia often advises disabling in-game shadows to prevent problems…

I always imagined that they took the framebuffer and depth buffer, and used those to reconstruct something not entirely unlike the actual scene from a slightly different perspective. Shift the depth values over a bit, using parallax to move some pixels more than others. That kind of thing.

I can’t imagine how they could develop an algorithm that can, in 100% of cases, figure out what the “projection” in a vertex shader is, and then modify it for the other eye.

INSTEAD OF JUST PROVIDING THE CORRECT API TO GAME PROGRAMMERS, WHICH ALREADY EXISTS AS QUAD BUFFERED STEREO !!! grmbl, sorry.

Even if they did, it would mean nothing to already existing applications. You can’t sell a product based on what a few games may provide in the future; that’s what sunk physics processors (among other things).

That being said, it would be nice if they could provide this for new applications. So that backwards compatibility can be provided via this hack, while developers that actually want to support this can do it the right way.

Well, it works in far less than 100% of cases, and new games take some time before a driver patch can make them work well enough with 3D Vision. And it often comes down to disabling shadows, or switching off some multipass effect, to be playable.
Extracting left and right views from the depth buffer would just leave holes around a tree, for example.

Even if they did, it would mean nothing to already existing applications. You can’t sell a product based on what a few games may provide in the future; that’s what sunk physics processors (among other things).

Hell, they already have TWIMTBP, which among other things certifies correct operation with NV 3D Vision. It is not much more complex to handle stereo from the ground up, as done in Quake 3. At least give us this possibility…

There is a big difference with physics accelerators: they were extra hardware to buy. 3D stereo is just a patch away, so the barrier to entry is much lower. And it does not touch the game mechanics in any way, contrary to physics.

[QUOTE]
dead???
Do you mean that the nVidia Vista driver implementation of OpenGL does not support it? And is this the same for Windows 7?
But nVidia sells this 3D Vision package.
[/QUOTE]

Yeah, it’s dead. Even with a Quadro card you can’t set a quad-buffered pixel format in Vista/7. You can set one fullscreen, but you can only shutter with it, which is fairly useless for most people. The Nvidia driver just generates stereo for you automatically. It may or may not work correctly. There is a guide of things you should not do if you want their driver to work with your program.

What if you disable that silly Aero thingy?

In Linux, there’s a hot-key for that.

Since the GS250 was not going to work with quad buffering, I searched for an affordable Quadro card.
I managed to get an old FX4600 :)
I can now indeed render to a quad buffer!
The resulting view shows two pictures overlaying each other.

But the 3D Vision shutter glasses do not switch on.
I uninstalled the old 3D Vision software and installed the Quadro version. Doesn’t work.
When checking the display settings I can enable 3D and select a 3D display type, like ‘generic display with IR emitter’ or ‘3D DLP’ (with and without IR emitter). But there is no ‘3D Vision’ option to select.

Can anybody give me a hint what the problem could be?

Did you read this page?
http://www.nvidia.com/object/quadro_pro_graphics_boards.html
How is the IR emitter connected to the sync system?

EDIT: and windowed stereo should work on Vista too, as of recent drivers:
http://forums.nvidia.com/index.php?showtopic=95729&st=20&p=557838&#entry557838

Yes, I did read that.
I do not have this DIN cable.
I was assuming it should work over the USB connection, with the DIN cable only improving synchronisation.
But maybe I’m wrong.
Here in Europe you do not get the DIN cable with it.
I could easily make one if I knew the connection diagram.
But I cannot find that information.

Just read the whole topic above and found this: http://forums.nvidia.com/index.php?showtopic=98067
It covers both how to build the cable and how to ask Nvidia support to get you one (not clear whether it is paid or not…).

Thanks

I ordered the connectors. In a few days I will be ready to try it with cable.

But shouldn’t the glasses work with the USB emitter without the extra DIN cable? Or is the extra cable really needed for a Quadro?

I’m asking because I’m afraid something else is wrong.

Well, those are questions for Nvidia…
But according to their forums, yes, it is compulsory for quad-buffered stereo.

OK, I talked with nVidia support.
I had to uninstall and remove everything from nVidia and start installing everything one by one, rebooting several times in between.

Now it works. I do not know why, because I had done all this before. Probably I did something differently (booted one more time?).

I also have the ‘Stereoscopic settings’ tab with the test application again, which I had with the GS250 but not with the FX4600 before.

And OpenGL quad-buffered stereo works :), even in a window (not fullscreen), and without the extra mini-DIN cable!

I’m happy now and can start experimenting with 3D stuff!!

Thank you all for the hints, suggestions and feedback.

Hi guys,

I’m trying to create a 3D app also. So I’ll just do a recap of the things that you said to make it work:

  1. Add the PFD_STEREO flag to the pixel format descriptor.

  2. Draw one instance with
    glDrawBuffer(GL_BACK_LEFT);
    and another one with
    glDrawBuffer(GL_BACK_RIGHT);

  3. Don’t use a GTS250 (I also tried on that one and my app crashed with GL_BACK_RIGHT); use a high-end Quadro (I got a Quadro FX 3700) instead.

I’m currently in the process of installing that card and I’ll let you know what happens next.

I’m writing this to confirm with you if I’m going in the right direction.

One funny thing: when I tried the Nvidia demo 3D video (with the Nvidia player) on the GTS250, it worked fine. And the Nvidia website says the GTS250 is 3D Vision ready.

ref: http://www.nvidia.com/object/3D_Vision_Requirements.html

So if that card does not support quad buffering, how do they achieve the 3D effect? They also give a list of games that are compatible with those cards.

I’m asking this because I want to know if there’s another way of creating stereoscopic vision without quad buffering.

Do let me know.
Tnx.

matchStickMan, did you actually read this thread?
Nvidia 3D Vision does not provide an API to do the stereo yourself; it guesses it for the game, as opposed to quad-buffered stereo.

Right. More on that here:

From the first, you can see there’s this NvAPI Stereo layer on top of D3D. From the last, you can download NvAPI. Inside you’ll find a header and a help file, NVAPI_Reference_Developer.chm, documenting the 3D stereo module APIs (use kchmviewer or chmsee on Linux).

Also from the presentations you do get the gist that this isn’t as simple as quad-buffer stereo. It is essentially behind-the-scenes driver magic: witness the shared cull frustum for both eyes, vertex shaders auto-modified by the NVidia driver, and the driver using the w coordinate to render the left/right eye views. I’ve done stereo and multi-frustum displays before (the underlying concepts are simple), but frankly I came out of their SIGGRAPH presentation confused as to what they were doing or how I’d code for it if I ever wanted to.
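For what it’s worth, here is my reconstruction of the w-coordinate trick those presentations hint at: the driver appends to each vertex shader a clip-space x shift proportional to w. The formula and all names below are my guess at the idea, not NVIDIA’s documented API:

```cpp
// Reconstructed (guessed) per-eye clip-space adjustment:
// vertices with w == convergence get zero parallax (appear at screen depth);
// vertices nearer or farther are pushed apart between the two eyes.
double stereoClipX(double clipX, double clipW, int eyeSign,
                   double separation, double convergence) {
    return clipX + eyeSign * separation * (clipW - convergence);
}
```

Note this operates on whatever the shader happens to output, which would explain why shadow maps and multipass effects (rendered with unrelated projections) are the things that break.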

It seems to be very important to remove the old nVidia graphics card driver and 3D Vision driver as thoroughly as possible.
So uninstall the drivers, remove the applications, and reboot.
Install the new card driver. Reboot. Make sure it is installed correctly, then install 3D Vision. The nVidia Control Panel should now have a page where you can launch the test application. If that works, I think you are OK.

Hey tnx Dark Photon, those links were very helpful.

I’ve managed to setup the 3D stereoscopic view from the nVidia Control panel.

My app is still crashing, though. I’ll just keep debugging it.

I too got a 3D setup for my NV GTX 260 with hopes of testing my OpenGL applications in stereo… Somewhere the marketers who decided to support only D3D stereo forgot to tell the advertisers and the vendors.

Had I seen “OpenGL not supported… at least not without GLDirect” I wouldn’t have wasted the money. This is the second time nVidia has disappointed on a big ticket research item for me. First was when I got a quadro card years ago to use overlays only to hear that Vista would be eliminating all overlay support.

How can they release products with such potential and just cripple them with ludicrous driver limitations? Full screen D3D only? My app targets mainstream for peeps without a $700 Quadro. Now they’ll have to figure out anaglyph glasses with messed up color. Thanks a lot, nVidia… I’ve liked them for their Linux support but I think I’ll take another look at ATI in the future.

FYI for some of you, GLDirect (an OpenGL → Direct3D wrapper) may be a possible option.

http://sourceforge.net/projects/gldirect/

How does one set fullscreen mode? Does that have to be done in the CreateWindow() call? I seem unable to confirm stereo mode. :(

pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_GDI | PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL | PFD_STEREO;

int iPixelFormat = ChoosePixelFormat(hdc, &pfd);
BOOL bSuccess = SetPixelFormat(hdc, iPixelFormat, &pfd);

// Read back what we actually got; ChoosePixelFormat may have dropped PFD_STEREO.
iPixelFormat = GetPixelFormat(hdc);
DescribePixelFormat(hdc, iPixelFormat, sizeof(PIXELFORMATDESCRIPTOR), &pfd);

if ((pfd.dwFlags & PFD_STEREO) == 0)
    MessageBoxError("NO STEREO!!!", "OpenGL");
else
    MessageBoxInfo("STEREO YES!!!", "OpenGL");