I’m not sure if this is the best place for this question, but I thought I’d take a stab at it. Forgive me if this is off-topic for this board.
In addition to doing 3D rendering, I'm very interested in using NVIDIA's CUDA API to do general-purpose computing on the massively parallel processor array in their high-end cards.
However, most of the applications I have in mind require double precision floating point. I want direct hardware support for double precision, not software-emulated double precision running on single-precision hardware.
I've poked around the NVIDIA website and found (old) comments from NVIDIA insiders saying they expected to release cards supporting double precision math in late 2007. I haven't found any current information, however.
Do any of you have more current information?
I’m dying to get my hands on a card that will give me 128 double precision floating point processors to set loose on my math-intensive applications.
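Just to make it concrete, here's a minimal sketch of the kind of double-precision kernel I'd want to run — a simple AXPY (y = a*x + y). I obviously can't test this yet, since it would need a card whose hardware actually supports double (and presumably a compiler flag along the lines of -arch=sm_13 — I'm guessing at the name; all the identifiers below are just for illustration):

```cuda
// Hypothetical sketch: double-precision AXPY on the GPU.
// Assumes future hardware with native double support; won't run
// correctly on today's single-precision-only cards.
#include <cstdio>
#include <cstdlib>

__global__ void axpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // each thread handles one element
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(double);

    // Host data
    double *hx = (double *)malloc(bytes);
    double *hy = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    // Device data
    double *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all n elements
    axpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // should be 5.0 in full double precision

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Even a trivial kernel like this is the test case I care about: if the hardware silently demotes double to float, the accumulated error in my real applications would make the results useless.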
Cool. Thanks for the info. Is there a log of the talk somewhere? I’m new to the world of OpenGL and graphics cards, so I don’t even know who David Kirk is, or where to watch for announcements. Any further info would be appreciated.
David Kirk is this guy. He made a tour through some German universities and spoke about CUDA and GPGPU. I don't know whether the talks are archived anywhere, sorry. Maybe you could ask in NVIDIA's CUDA forum.
I just watched a streamed talk by David Kirk at UPC (Universitat Politecnica de Catalunya) about CUDA, and in the final round of questions someone asked about exactly that. Kirk just said "this summer."
Maybe it's related to the GT200, which is also due to launch this summer (as I read on TheInquirerES)?