Can’t answer about the DIB, but about detecting hardware rendering: when you choose a pixel format, you can use DescribePixelFormat to get the description of the format and see whether it will use an ICD or not. If it has only the PFD_GENERIC_FORMAT flag, then it uses the Microsoft software renderer. If it has both PFD_GENERIC_FORMAT and PFD_GENERIC_ACCELERATED, then it uses an MCD (an old OpenGL acceleration architecture nobody uses these days). Otherwise it uses an ICD.
The problem is, you can’t tell whether the ICD will render in software or in hardware. However, if an ICD mode is available, requesting the minimal set of capabilities (mainly no stencil) should give you hardware acceleration.
Thanks for answering my question. I’m already using the DescribePixelFormat function. As ET3D wrote, there’s still a chance that rendering is performed in software instead of hardware.
The problem is that I need the rendered data for further processing.
For this I’m using my own pixel format (24bpp) and a memory device context. This means that all rendering is performed without hardware acceleration :(
I know that hardware-accelerated rendering depends strongly on the current video mode.
But why is that?
What if I don’t want to depend on the current video mode, but still want to use hardware rendering?
Is there any way to tell the GPU to render in a 24bpp format to an offscreen frame buffer instead of the current video mode’s color depth (assuming the GPU supports a 24bpp pixel format)?
Does anyone know whether there are any cards/drivers that support this feature, and what I would need to use it?
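One common workaround (not suggested in this thread, just a sketch): render into an accelerated 32bpp surface and repack to 24bpp on the CPU afterwards. With OpenGL, glReadPixels with GL_RGB/GL_UNSIGNED_BYTE can even do the repacking during readback. The BGRA byte order below is an assumption, based on the typical Win32 DIB layout.

```c
#include <stddef.h>

/* Repack a 32bpp BGRA buffer (typical Win32 DIB layout -- an assumption)
 * into a tightly packed 24bpp BGR buffer, dropping the alpha/pad byte. */
static void bgra32_to_bgr24(const unsigned char *src, unsigned char *dst,
                            size_t pixels)
{
    size_t i;
    for (i = 0; i < pixels; ++i) {
        dst[3 * i + 0] = src[4 * i + 0]; /* B */
        dst[3 * i + 1] = src[4 * i + 1]; /* G */
        dst[3 * i + 2] = src[4 * i + 2]; /* R */
    }
}
```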
j - oops, I guess I didn’t really paste when I thought I pasted
DrDeath, consider that regardless of how you do it, 24 bit will never be accelerated. All current accelerators (and those of the past few years) can accelerate in 16 bit or 32 bit only.
Originally posted by ET3D:
DrDeath, consider that regardless of how you do it, 24 bit will never be accelerated. All current accelerators (and those of the past few years) can accelerate in 16 bit or 32 bit only.
Well, 24 bits is an RGB triplet: each color component has its own byte. 32bpp is fine too, but not 16bpp!
I hate 16bpp, because it compresses the colors.
With a 24/32bpp image I can do whatever I want: antialiasing, smoothing, alpha blending and other stuff that I can write in asm. But even the simplest image processing at 16bpp can become hell if you need really fast image FX.
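To illustrate the “compression”: in a 16bpp format such as RGB565 (one common layout, assumed here) every pixel must be unpacked before you can touch an individual channel, and the low bits are already lost. A sketch of the unpacking step:

```c
/* Unpack a 16bpp RGB565 pixel into 8-bit channels.
 * Replicating the high bits into the low bits makes the maximum
 * component value 0x1F expand to 0xFF rather than 0xF8. */
static void rgb565_unpack(unsigned short p,
                          unsigned char *r, unsigned char *g, unsigned char *b)
{
    unsigned r5 = (p >> 11) & 0x1F;  /* 5 bits of red   */
    unsigned g6 = (p >> 5)  & 0x3F;  /* 6 bits of green */
    unsigned b5 =  p        & 0x1F;  /* 5 bits of blue  */
    *r = (unsigned char)((r5 << 3) | (r5 >> 2));
    *g = (unsigned char)((g6 << 2) | (g6 >> 4));
    *b = (unsigned char)((b5 << 3) | (b5 >> 2));
}
```

In 24/32bpp, each channel is simply a byte at a fixed offset, which is why per-pixel effects are so much easier (and faster) to write there.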