Originally posted by blender: Damn, wrong guess. It didn’t speed up.
Let me guess…you are using some form of Radeon? On NV hardware ( GeForce1, GeForce3 ) there’s usually a noticeable performance increase when going to fullscreen. On a Radeon 8500 it’s equally fast to run in windowed mode ( which is what I do most of the time ).
Originally posted by SirKnight: Gosh what a big difference in performance for rendering to a texture with those two methods. I’m going to play around with those two methods on my GeForce 4 Ti w/ latest drivers here and see what happens. I’m very curious now.
Good luck. I just ran a test on my GeForce3 and RTT was dog slow ( RTT: 42fps, CTT: 200fps ). In addition, there were some strange artifacts all over the image ( I’m looking into this right now ).
All is not well with ATI’s drivers either; I’m certain there’s a viewport bug there ( 1Kx1K pbuffer, viewport 800x600 at (0,0) ). NVIDIA’s drivers handle this case correctly.
Well, I’m sticking with CTT for now as it’s fast, easy and works with both NV and ATI drivers.
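For anyone wondering what CTT looks like in code, here’s a minimal sketch ( names like draw_scene and the 512x512 size are hypothetical, and it assumes a texture object was created beforehand with glTexImage2D and a GL context is current ):

```c
/* Copy-to-texture (CTT) sketch. Assumes a current GL context and a
 * 512x512 RGB texture object `tex` created once at startup.
 * `draw_scene` is a placeholder for your own rendering code. */
extern void draw_scene(void);

GLuint tex; /* created elsewhere with glGenTextures + glTexImage2D */

void render_to_texture_via_ctt(void)
{
    /* 1. Render the scene into the back buffer, sized to match the texture. */
    glViewport(0, 0, 512, 512);
    draw_scene();

    /* 2. Copy the framebuffer contents into the bound texture.
     *    Signature: (target, level, xoffset, yoffset, x, y, width, height). */
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);

    /* 3. Render the visible frame as usual, sampling from `tex`. */
}
```

The appeal over pbuffer RTT is that everything stays in one context: there is no per-frame wglMakeCurrent switch, which is exactly the call reported as slow later in this thread.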
Looks like NVIDIA still have some sort of viewport bug in their drivers too ( not the same type as ATI’s ). A shot of the artifacts.
The artifacts are only there when rendering to a viewport that’s smaller than the pbuffer.
OK, found the exact issue with NVIDIA’s drivers. It’s related to using glScissor. Disabling the scissor test ( which I had set to the 800x600 region ) removed the artifacts.
PH, how can I use CTT, is there some extension for it or is it the same as glCopyTexSubImage() (copy to texture)?
Because on my GF2 MX my fps is around that 42, and I’m thinking I’m using the slower option.
From zed’s list one can see that using BGR instead of RGB is much faster. But how can you set BGR as the internal format? I haven’t found anything like BGR8_EXT or similar that could be used in combination with glCopyTexImage2D…
GL_EXT_bgra provides additional texture formats ( GL_BGR_EXT ). These are not internal formats but source formats ( the ‘format’ parameter of glTexImage2D(…) ). Yes, this should increase performance on NV hardware but I haven’t tried it.
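To see why the source format matters: if the framebuffer stores pixels blue-first but you ask the driver for GL_RGB, it has to swizzle every pixel on the way out. This little C function ( an illustration only, not driver code ) shows the per-pixel work that passing GL_BGR_EXT lets the driver skip:

```c
#include <stddef.h>

/* Swap the red and blue channels of `count` packed 3-byte pixels in place.
 * This is the conversion the driver must perform when the requested source
 * format does not match the framebuffer's native channel order. */
void bgr_to_rgb(unsigned char *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        unsigned char tmp = pixels[3 * i];     /* save blue            */
        pixels[3 * i]     = pixels[3 * i + 2]; /* move red to slot 0   */
        pixels[3 * i + 2] = tmp;               /* move blue to slot 2  */
    }
}
```

One such pass per copy, per frame, over the whole texture is the overhead being avoided.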
[This message has been edited by PH (edited 08-28-2002).]
Just wanted to let you know that the bug(s) in NVIDIA’s drivers have been fixed. And you know what…RTT is just as fast as CTT now. As soon as ATI fixes their bug, RTT will be preferable to CTT.
Second, using separate contexts seems to be just as fast as using a shared one ( with both NV and ATI drivers ).
I just downloaded the newest drivers and my tests showed that calling wglMakeCurrent is still very slow on a GF3, even if the rendering context is shared.
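For context, the reason pbuffer RTT is sensitive to wglMakeCurrent cost is that it needs two context switches every frame. A rough sketch ( the DC/RC handle names are hypothetical; they would come from WGL_ARB_pbuffer setup via wglCreatePbufferARB at init time ):

```c
/* Pbuffer RTT sketch (Win32). The handles below are assumed to have been
 * obtained during setup: pbuffer_dc/pbuffer_rc for the pbuffer,
 * window_dc/window_rc for the visible window. */
extern HDC   pbuffer_dc, window_dc;
extern HGLRC pbuffer_rc, window_rc;
extern GLuint tex;
extern void draw_scene_into_texture(void);
extern void draw_scene_using_texture(void);

void frame(void)
{
    /* Switch to the pbuffer and render the texture contents. */
    wglMakeCurrent(pbuffer_dc, pbuffer_rc);
    draw_scene_into_texture();

    /* Switch back to the window and use the result. */
    wglMakeCurrent(window_dc, window_rc);
    glBindTexture(GL_TEXTURE_2D, tex);
    draw_scene_using_texture();
}
```

If each wglMakeCurrent call is expensive, those two switches dominate the frame time, which matches the 42fps vs 200fps numbers reported earlier in the thread.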
BTW, the new 40.41 drivers crash if I try to access the graphics card options tab. Does anyone else have that problem too?
I had the desk.cpl crash also on my Windows 2000 box when accessing the adapter tab on the advanced settings.
The fix was relatively simple: first I downloaded the latest service pack (3) and installed it. After that I re-installed the new NVIDIA drivers and the problem was fixed - for me at least.
The reason desk.cpl crashed was that it used newer shell controls than were installed with the previous service pack, and I guess they simply didn’t provide a fallback for older shell versions but chose to crash desk.cpl instead. Oh well, it is a beta driver but still…
I have now tested my app on a 1400MHz AMD Thunderbird with a GeForce4 Ti, and the fps is about 200 in windowed mode at high resolution, with quite a simple scene ( a ~700-polygon model is rendered to a texture and projected onto simple ground ).
Before implementing shadows, fps was around 380. Is this good or poor?
That sounds reasonable. Don’t expect too much from the GeForce2 MX. It’s slower than a GeForce1 DDR for just about everything. There’s a reason why it’s cheap.
But some games that use this kind of shadowing technique run well on my MX.
And the problem here is not the 200fps; that’s good for a tiny scene like this. But a full-scale game done with this technique is another thing in terms of speed.