So why do I get a fully working scene copied to the layered window on Win7 with a GTX 295, but a piece of the desktop covered by an invisible window on XP with a GF6150? Or is that because I don't understand how CreateCompatibleDC, CreateCompatibleBitmap, and SelectObject work?
This code works under Win7 with the GTX 295 (wnd1 is the visible window's DC, wnd1c is a compatible memory DC for it):
HBITMAP bm = CreateCompatibleBitmap(wnd1, screenw, screenh);
HGDIOBJ old = SelectObject(wnd1c, bm);
SetDIBits(wnd1c, bm, 0, screenh, ptr, BmpI, DIB_RGB_COLORS); // the docs say bm must NOT be selected into a DC here
BitBlt(wnd1, 0, 0, screenw, screenh, wnd1c, 0, 0, SRCCOPY);
SelectObject(wnd1c, old); // restore the old bitmap before deleting
DeleteObject(bm);         // otherwise this leaks one GDI bitmap per frame
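For what it's worth, here is a sketch of the ordering the SetDIBits documentation actually sanctions: fill the bitmap while it is *not* selected into any DC, and only then select and blit. This reuses the names from the snippet above (wnd1, wnd1c, ptr, BmpI, screenw, screenh) and assumes BmpI points at a correctly filled BITMAPINFO; treat it as a sketch, not a drop-in fix.

```c
/* Hypothetical reordering: SetDIBits before SelectObject, per MSDN. */
HBITMAP bm = CreateCompatibleBitmap(wnd1, screenw, screenh);
SetDIBits(wnd1c, bm, 0, screenh, ptr, BmpI, DIB_RGB_COLORS); /* bm not selected yet */
HGDIOBJ old = SelectObject(wnd1c, bm);
BitBlt(wnd1, 0, 0, screenw, screenh, wnd1c, 0, 0, SRCCOPY);
SelectObject(wnd1c, old);
DeleteObject(bm); /* release the bitmap each frame */
```

An alternative worth considering is CreateDIBSection, which hands you a pointer to the pixel memory directly, so you can memcpy your readback straight into it and skip SetDIBits entirely.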
I don't think it's really valid: per MSDN, the bitmap passed to SetDIBits must not be selected into a device context at the time of the call, which might explain why the behavior differs between drivers. It works with the PBO/glReadPixels code posted earlier. It started to work after I removed the binding of the second PBO before calling glMapBufferARB (why? I have no clue; I'm exhausted by that BS).
So if I add an FBO (I have no idea how, yet), is there any significant functional difference, at least for my purposes, between GL_EXT_framebuffer_object and GL_ARB_framebuffer_object? The EXT one has wider support, but can I rely on it?
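Not an authoritative answer, but for plain render-to-texture the two behave the same, as far as I know: ARB_framebuffer_object mainly relaxes the completeness rules (attachments may differ in size/format), folds in the blit and multisample features that were separate EXT extensions, and drops the EXT suffixes. A minimal setup sketch, assuming a context where the ARB entry points are loaded (with EXT it's the same calls plus EXT suffixes: glGenFramebuffersEXT, GL_FRAMEBUFFER_EXT, and so on):

```c
/* Sketch: color-only FBO rendering into a texture of the window's size. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenw, screenh, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* fall back to the pbuffer or window path */
}
/* ...render, then glReadPixels from the bound FBO as usual... */
```

On hardware of the GF6150 era you will likely only find the EXT variant, so targeting EXT first and treating ARB as a bonus seems reasonable.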
And my PBO path is fixed by allocating the destination buffer manually and copying the data like this:
memcpy(ptr, (unsigned char*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB), screenw*screenh*4);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB); // don't forget to unmap (and check for NULL in real code)
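For completeness, here is the full readback pattern with the checks spelled out, as a sketch. It assumes a single PBO named pbo, already created with glBufferDataARB at screenw*screenh*4 bytes (pbo is my placeholder name, not from the earlier code):

```c
/* Sketch: one-PBO readback with mapping checked and the buffer unbound after. */
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glReadPixels(0, 0, screenw, screenh, GL_BGRA, GL_UNSIGNED_BYTE, 0); /* writes into the PBO */
void *src = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
if (src) {
    memcpy(ptr, src, (size_t)screenw * screenh * 4);
    glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB); /* must unmap before the next read */
}
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0); /* leave no PBO bound for other GL paths */
```

Leaving a pack PBO bound changes what every subsequent glReadPixels/glGetTexImage means, which may be related to the "removed the second PBO binding and it started working" mystery above.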
Shame on me for the mistakes in my previously posted code.
And suddenly... I recalled an OpenGL layered-window demo I saw a long time ago:
It uses WGL_ARB_pbuffer (supported by practically everything, even your microwave oven). The sources are there. Could some pro take a quick look at them and confirm this method won't fail, please? I have some doubts, because the author renders to a WS_EX_LAYERED window.
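One detail worth checking in those sources: a per-pixel-alpha layered window is updated through UpdateLayeredWindow, and that call requires a 32-bit DIB with *premultiplied* alpha. OpenGL output is not premultiplied by default, so the demo must either render with premultiplied blending or convert after readback. A minimal sketch of the final push, assuming hwnd is the layered window and hdcMem is a memory DC with a 32bpp premultiplied-BGRA DIB section selected (both names are placeholders):

```c
/* Sketch: handing a premultiplied 32-bit frame to a WS_EX_LAYERED window. */
BLENDFUNCTION bf = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
POINT srcpt = { 0, 0 };
SIZE  size  = { screenw, screenh };
HDC screen = GetDC(NULL);
UpdateLayeredWindow(hwnd, screen, NULL, &size, hdcMem, &srcpt,
                    0, &bf, ULW_ALPHA);
ReleaseDC(NULL, screen);
```

If the demo skips the premultiply step, edges and translucent areas will look wrong even when the blit itself "works", so that would be the first thing I'd verify in its code.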