How to copy from a PBO to the default framebuffer?

Greetings:
So I bind and fill a pixel buffer with:

   glGenBuffers(1, &pixelBuf);                       // create the buffer object
   glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBuf);   // bind it as a pixel-unpack source
   glBufferData(GL_PIXEL_UNPACK_BUFFER, size, data, GL_STATIC_DRAW);  // fill it with the image data

Now I want to draw those pixels on the screen. The following works:

   glDrawPixels(imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));

However, glDrawPixels() is gone from modern OpenGL; it was removed from the core profile. So how do I get the job done? By the way, I know how to do it by creating an FBO, attaching a texture with glFramebufferTexture2D(), calling glBlitFramebuffer(), etc. I was just wondering whether there is some way to get pixels from a PBO to the default framebuffer directly.
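
For concreteness, the FBO route I mean looks roughly like this (just a sketch; imageTex, dstX, and dstY are placeholder names for a texture that already holds the pixels and for the destination corner):

   GLuint fbo;
   glGenFramebuffers(1, &fbo);
   glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
   glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_TEXTURE_2D, imageTex, 0);
   glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // 0 = default framebuffer
   glBlitFramebuffer(0, 0, imageWidth, imageHeight,                      // source rectangle
                     dstX, dstY, dstX + imageWidth, dstY + imageHeight,  // destination rectangle
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);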

Thanks in advance.

You don’t; you render to the default framebuffer.

And it’s not like glDrawPixels was just a copy operation or something. It still respected a lot of post-fragment rendering state, like blending and so forth. Indeed, if I recall compatibility GL correctly, you can even run a fragment shader with glDrawPixels.

So really, you may as well just upload it to a texture and render/blit.
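
The upload half still uses your PBO, by the way: while a buffer is bound to GL_PIXEL_UNPACK_BUFFER, the data argument of glTexSubImage2D() is treated as a byte offset into that buffer rather than as a client pointer. Roughly (a sketch; imageTex is an assumed texture whose storage was already allocated with glTexImage2D):

   glBindTexture(GL_TEXTURE_2D, imageTex);
   glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBuf);  // source: the PBO, not client memory
   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageWidth, imageHeight,
                   GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);  // 0 = byte offset into the PBO
   glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);         // unbind so later calls use client memory again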

Thanks, Alfonse.

“glDrawPixels — write a block of pixels to the frame buffer”, from the OpenGL reference pages. I take it write = render.

In any case, glDrawPixels(width, height, format, type, data) is really convenient for what I want to do, which is to move a rectangular image around the screen. All I had to do was load the image into a PBO, then render from the PBO with glDrawPixels(), updating the raster position at each tick.
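
Each tick boiled down to something like this (a sketch; glWindowPos2i() is one way to set the raster position in window coordinates, and x, y are wherever the image currently sits):

   glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pixelBuf);
   glWindowPos2i(x, y);  // raster position, in window coordinates
   glDrawPixels(imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, BUFFER_OFFSET(0));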

So I guess there is no simple replacement for glDrawPixels() and, therefore, no way to draw to the screen directly from a PBO. That leaves me with two options:

(1) Attach the texture as the color buffer of an FBO, blit to the default frame buffer.
(2) Render a textured rectangle and move that rectangle around using modeling transforms.

(1) Attach the texture as the color buffer of an FBO, blit to the default frame buffer.
(2) Render a textured rectangle and move that rectangle around using modeling transforms.

These are both easier and faster than your PBO approach. And I wouldn’t bother with #1.

You can run a fragment shader with glDrawPixels(), but not a vertex shader (as there aren’t any vertices). So where would the fragment shader get its inputs? It can’t read user-defined variables, because there’s no vertex shader to provide their values, and the compatibility-profile built-ins don’t exist in the core profile. Ultimately, having glDrawPixels() in the core profile would be a mess of contradictions.

Thanks again, but I am curious:
Why wouldn’t you bother with #1? First, it’s really easy code. Second, why bring in a textured rectangle (which means texture sampling) when all that’s asked for is a one-to-one render from (texture) pixels to (default framebuffer) pixels? Drawing a rectangle with texturing seems like a roundabout way to do it.

Why wouldn’t you bother with #1?

Because it involves a lot of state setting. Expensive state setting. Framebuffer object state is literally the slowest state there is (at least in one implementation, but I wouldn’t expect it to be much better in others). Admittedly, that was measured for destination framebuffers, but the point is still the same. If you think you’re going to be throwing hundreds or thousands of these blits at the screen per frame, I would think again.

It’s also not very flexible. If you suddenly need to do anything you can’t do with a blit (rotation, blending, etc.), you basically have to go to #2. So you may as well start with the solution that’s guaranteed to work.

It also scales a lot better in terms of destination resolution.
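
In core-profile terms, #2 is just the usual textured-quad drawing. A compressed sketch (prog is assumed to be the compiled and linked program, winW/winH the window size; error checking omitted):

   // Vertex shader: a unit quad positioned in pixel coordinates each frame.
   const char *vs =
       "#version 330 core\n"
       "layout(location = 0) in vec2 pos;\n"  // unit-quad corners, 0..1
       "uniform vec2 uOffset;\n"              // image position, in pixels
       "uniform vec2 uSize;\n"                // image size, in pixels
       "uniform vec2 uViewport;\n"            // window size, in pixels
       "out vec2 vTex;\n"
       "void main() {\n"
       "    vTex = pos;\n"
       "    vec2 ndc = 2.0 * (uOffset + pos * uSize) / uViewport - 1.0;\n"
       "    gl_Position = vec4(ndc, 0.0, 1.0);\n"
       "}\n";
   // Fragment shader: plain texture lookup.
   const char *fs =
       "#version 330 core\n"
       "in vec2 vTex;\n"
       "uniform sampler2D uImage;\n"
       "out vec4 color;\n"
       "void main() { color = texture(uImage, vTex); }\n";

   // One-time setup: the unit quad as a triangle strip.
   static const float quad[] = { 0,0,  1,0,  0,1,  1,1 };
   GLuint vao, vbo;
   glGenVertexArrays(1, &vao);
   glBindVertexArray(vao);
   glGenBuffers(1, &vbo);
   glBindBuffer(GL_ARRAY_BUFFER, vbo);
   glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
   glEnableVertexAttribArray(0);
   glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

   // Per frame: move the image by changing a uniform, not by touching pixels.
   glUseProgram(prog);
   glBindTexture(GL_TEXTURE_2D, imageTex);
   glUniform2f(glGetUniformLocation(prog, "uOffset"), x, y);
   glUniform2f(glGetUniformLocation(prog, "uSize"), imageWidth, imageHeight);
   glUniform2f(glGetUniformLocation(prog, "uViewport"), winW, winH);
   glBindVertexArray(vao);
   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);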

Thanks, Alfonse, for a very clear answer.