Pointer to buffers?


I would like a pointer to a buffer.

Example :

opengl_color_buffer[x+y*W] = color;

How can I get such a pointer?

I know about glDrawPixels and glReadPixels. But they make a COPY. Me, I want to read/write the memory DIRECTLY, BECAUSE that is fastest!

There’s no interface in OpenGL to access color buffers via a pointer.
The hardware can put buffers wherever and in whatever format it desires.
Exposing a pointer to such data would also mean exposing the layout, which goes against OpenGL’s goal of abstracting the hardware.
And you’re wrong: directly accessing framebuffer memory is not the fastest way to get data there.
Describe exactly what you’re trying to do and what the actual problem is.

That’s not 100% right. You can have a mapped buffer using EXT_pixel_buffer_object with glReadPixels and glDrawPixels.
But Relic is right, there is no direct way to get access to the color buffer.

Let me simplify:

I want to COMPARE several renders. (1,000,000 renders)


for (t = 1 to 1 000 000)
    image1 = render()
    image2 = render()

    sums[t] = 0
    for (i = 1 to pixel_number)
        sums[t] += abs(image1[i] - image2[i])

compare the sums
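For illustration, here is a minimal CPU-side sketch of that comparison loop in C, assuming the two images have already been read back into byte arrays (e.g. via glReadPixels). The helper name `sum_abs_diff` is mine, not part of any API:

```c
#include <stddef.h>

/* Sum of absolute differences between two 8-bit image buffers
   (n_bytes = width * height * 4 for RGBA data). */
static unsigned long sum_abs_diff(const unsigned char *a,
                                  const unsigned char *b,
                                  size_t n_bytes)
{
    unsigned long sum = 0;
    for (size_t i = 0; i < n_bytes; ++i)
        sum += (a[i] > b[i]) ? (unsigned long)(a[i] - b[i])
                             : (unsigned long)(b[i] - a[i]);
    return sum;
}
```

The per-frame result would then go into `sums[t]` exactly as in the pseudocode above.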

I didn’t know OpenGL could do the sum of pixels!?

I work on Windows XP. I do the rendering into a 32-bit bitmap (with the CreateDIBSection function), so I have a pointer to the bitmap. But I only get the RGB colors, NOT the alpha bits. I NEED them.

Note: I have seen that the glReadPixels function with RGBA works correctly. But I do NOT want a useless COPY.

So, what is the fastest way?

OpenGL can do the sum of multiple versions of the same pixel using the accumulation buffer. This is hardware accelerated on later graphics cards.

You can also use ReadPixels() to read data back to memory that you can touch. Read GL_BGRA/GL_UNSIGNED_BYTE format data for best performance. (There’s also the NVIDIA-specific GL_NV_pixel_data_range extension for asynchronous reads).
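As a side note on the GL_BGRA advice: if your CPU-side code wants RGBA byte order, the read-back BGRA data can be swizzled in place afterwards. A minimal sketch (the helper name is mine, not any API):

```c
#include <stddef.h>

/* Convert 8-bit BGRA pixels (byte order B,G,R,A) to RGBA in place
   by swapping the blue and red bytes of each pixel. */
static void bgra_to_rgba(unsigned char *px, size_t n_pixels)
{
    for (size_t i = 0; i < n_pixels; ++i) {
        unsigned char blue = px[4 * i + 0];
        px[4 * i + 0] = px[4 * i + 2]; /* red moves to byte 0 */
        px[4 * i + 2] = blue;          /* blue moves to byte 2 */
    }
}
```

For the sum-of-differences use case the swizzle is not even needed, since per-byte absolute differences are the same in either channel order.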

Remember that OpenGL is a client/server specification – the framebuffer may not even live on the same machine as the running program.

Thank you!

I think the solution of my problem is …


Because it should allow saving a render with RGBA colors, without a useless copy.

But OpenGL extensions are new to me, and there are not many samples about this.

WGL_ARB_buffer_region will not solve your problem. This extension allows you to save the screen into a buffer and to restore it,
but there is no way to access the stored data directly.

OK!

So, nobody knows how to render into a bitmap with full RGBA bits!!??

Moreover, I think the problem comes from Windows (not OpenGL). So I will move to the “OpenGL under Windows” forum and post a new topic, “RGBA Bitmap ?”.

Maybe I will have more luck there.

Hmm, you don’t understand.

It doesn’t make sense to get a direct pointer to a buffer, because how data (typically pixels) is stored in a buffer is implementation dependent.

While it’s usually not the case, let’s imagine the color buffer is stored compressed. If you then get a pointer to that, it’s not going to be very useful, is it?

So there’s no direct way to get a pointer to the color buffer, and that’s perfectly normal IMO.


Of course, Ysaneya!

Because OpenGL is … client/server …

But me, I am a CLIENT. And a client wants a RESULT!

With Windows, the RESULT of OpenGL is selected with “hrc = wglCreateContext(hdc)”,

where I choose the type of “hdc” that I want with:

32,             // cColorBits
8, 0,           // cRedBits, cRedShift
8, 8,           // cGreenBits, cGreenShift
8, 16,          // cBlueBits, cBlueShift
8, 24,          // cAlphaBits, cAlphaShift
0, 0, 0, 0, 0,  // accumulation buffer bits

Here I want to render into the BITMAP with RGBA bits. YES! WITH ALPHA.

BITMAP == a pointer to pixels.
This works very well, BUT without the ALPHA bits.

Because the old documentation from 1995 says:
cAlphaBits:
Specifies the number of alpha bitplanes in each RGBA color buffer. Alpha bitplanes are not supported.

---- Alpha bitplanes are not supported. ----

We are in 2004!

I am a client and …

I don’t want to render to a WINDOW with RGB 32 bits (8 of them unused).
I want to render to a BITMAP with RGBA 32 bits (8 of them USED for ALPHA).

What don’t you understand? :wink:

You cannot get direct access to the framebuffer.
That’s just how it is; that’s life, and you have to accept it.

However, I think you can use the hardware to compute the statistics you want, using an algorithm like this one :

1- render your first image.
2- move it in the accumulation buffer, using glAccum(GL_LOAD, 1.f);
3- render your second image
4- subtract it from the accumulation buffer, using glAccum(GL_ACCUM, -1.f);
5- remap the accumulation buffer to the range [0,2], using glAccum(GL_ADD, 1.f);
6- send the accumulation buffer contents back to the color buffer, with mapping to the [0,1] range, using glAccum(GL_RETURN, .5f);
7- read back the value of the color buffer using glCopyTexSubImage2D
8- use hardware mipmapping thanks to glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE)
9- read back the last mipmap levels thanks to glGetTexImage … because you only look at the smallest levels, the readbacks will be pretty fast.

In the end, the read-back pixels correspond to the mean differences of the produced images. Just multiply the differences by the surface that each mipmap level represents and you’re done.
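To check the arithmetic of steps 2–6 and the mipmap averaging, here is a hypothetical CPU simulation (the helper names are mine). Note the accumulation difference is signed, so what comes back is the mean signed difference; it is recovered from the [0,1]-remapped average v as 2*v - 1:

```c
#include <stddef.h>

/* Steps 2-6 for one pixel: load a, accumulate -1*b, add 1 (range [0,2]),
   return scaled by 0.5 (range [0,1]). */
static float remap_diff(float a, float b)
{
    return 0.5f * ((a - b) + 1.0f);
}

/* One 2x2 mipmap reduction on a square float image of even side n --
   the averaging the hardware performs for each successive mip level. */
static void mip_reduce(const float *src, float *dst, size_t n)
{
    size_t half = n / 2;
    for (size_t y = 0; y < half; ++y)
        for (size_t x = 0; x < half; ++x)
            dst[y * half + x] = 0.25f *
                (src[(2 * y) * n + 2 * x]     + src[(2 * y) * n + 2 * x + 1] +
                 src[(2 * y + 1) * n + 2 * x] + src[(2 * y + 1) * n + 2 * x + 1]);
}
```

Applied repeatedly until one pixel remains, the final value v satisfies mean(a - b) = 2*v - 1; a mean of absolute differences would additionally need the sign folded out before averaging.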

Please note that steps 1 to 8 all run in hardware (well, if your hardware supports accumulation buffers, which is the case for all DX9-compatible hardware, e.g. GeForce FX and Radeon 9500+), so it should be pretty fast to compute. The only difficult part is rebuilding the whole statistics when your image is not power-of-two sized, because then you need to access more than the last mipmap level, with some complex yet powerful mathematics.

Thanks, Vincoof.

It’s a good idea.

I will study that, and pbuffers too.