I’m just wondering if anyone has tried copying data from textures to vertex buffers using a PBO. If I understand correctly, you’d first render an image into a buffer (e.g. an FBO renderbuffer), then use glReadPixels to read the data from this buffer into a PBO, then use the PBO as a VBO for rendering… Now, this sounds a little slow to me. Are there any demos demonstrating this technique in action? I’d probably want to do this every frame and copy at least 128KB of data into the VBOs. Does this sound feasible?
You might as well use the frame buffer, if you want to do glReadPixels eventually!
PBO and FBO are two different extensions serving almost the same purpose. Both are render target extensions and are used to render data into a render target, which can then be used as a normal texture to apply whatever was rendered to any geometry (just like normal texturing, with a slight difference in syntax when binding).
On a side note, PBO and FBO are used separately and have no relation to each other. FBO is easier to use and much better than PBO, and would definitely be the preferred way of using render targets, in case it’s supported by the hardware.
So much for PBO and FBO. Unfortunately I didn’t quite get what you are trying to achieve here, so it might help to clear up your question a little.
Hmm, as I understand it, PBO isn’t a render target per se (as FBO is): it’s just a means to manage pixel-level data in video memory. Anyway, I’ve never used them, so I can’t be too sure.
To achieve render-to-vertex-array, you render your geometry and map it into a floating-point PBO (using PIXEL_PACK_BUFFER_ARB or PIXEL_UNPACK_BUFFER_ARB as the target in BindBuffer(), BufferData(), …), then you bind that same buffer object with ARRAY_BUFFER_ARB as the target and use it as a VBO.
An example of what you can do with it can be found here : http://download.developer.nvidia.com/developer/SDK/Individual_Samples/samples.html#gpu_particles
NO glReadPixels() involved !!
You have to do the following steps:
- Create a render target using pbuffers or FBO.
- Create an RGB(A) texture and store the vertices in it. You can use any RGB(A) texture format. Set nearest filtering!! NOTE: the render target and the texture must have matching dimensions.
- Create the PBO.
- Select the render target and bind the “vertex” texture.
- Render a screen-aligned quad using your fragment shader.
- Bind the PBO and do a glReadPixels into it:
    glBindBuffer(GL_PIXEL_PACK_BUFFER_EXT, PV_buffer);
    glReadPixels(0, 0, TEXX, TEXY, GL_BGRA, GL_FLOAT, BUFFER_OFFSET(0));
    glBindBuffer(GL_PIXEL_PACK_BUFFER_EXT, 0);
- Restore the render target to the back buffer.
- Now rebind the PBO as a VBO and set up the vertex pointer:
    glBindBuffer(GL_ARRAY_BUFFER, PV_buffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, BUFFER_OFFSET(0));
- Render the geometry.
There are a few more things you can try. Usually you need normals too; you can calculate them using MRT (multiple render targets) or in a separate pass after the vertex-processing pass.
You must take care of precision. Make sure the render target is RGBA32f. Also keep in mind that only NV gaming hardware offers 32-bit float precision in the pixel shader; ATI has 24-bit precision, so some error may occur.
I can send you my demo if you want.
@wizzo:
glReadPixels() is involved… Check the following code…
// read frame buffer to main memory
void ParticleSystem::ReadBack()
{
    m_pos_buffer[m_current]->Activate();
    glReadPixels(0, 0, m_w, m_h, GL_RGB, GL_FLOAT, (float *) m_pos);
    m_pos_buffer[m_current]->Deactivate();
}
yooyo
Actually, I just want to generate a heightmap this way. I don’t think I need floating-point precision; I could just use a 16-bit RGB texture and a similar renderbuffer. Of course, my VBO would then contain 16-bit integer vertices (height values). Also, I believe the sizes of the renderbuffer and the texture can differ, as I want to be able to scale on the fly (have the heightmap at different LODs). Obviously I would use nearest filtering, otherwise the interpolation of heights would be incorrect.
I’ve seen the nVidia demo, but unfortunately it doesn’t work on my GF6600 with current drivers. They said it’s a driver issue, so I’m waiting for the fix.
In the meantime, Yooyo, I’d love to see your demo! I just want to get an idea of the performance; that’s what I’m most worried about.
EDIT: I do realize that using 16-bit textures brings some complications, so in the end I might just go with 32-bit fp instead…
Keep in mind… You must provide a 2-, 3- or 4-component coordinate (short, int, float or double) in the glVertexPointer call. So calculating height only is not enough.
you can download my framework from:
http://www.centroid3d.co.yu/yooyo/GLFramework01.zip
Open the project, go to the Demos/Heat subfolder and comment out the last line, then go to the Demos/Test_PBO_VBO subfolder and uncomment the last line. Rebuild.
If you have trouble rebuilding, you need the DXSDK installed and a correct path to DirectShow, or just remove all the DShow stuff from the project.
Test_PBO_VBO is a nice example of how to process vertices using the pixel pipeline. In this example, a wave texture is created and rendered into a render target (RTT), with each pixel modulated by the provided color (via a glColor call). After this vertex-processing pass, the pixels are read back from the render target into a PBO, and the same PBO is rebound as a VBO.
yooyo
Originally posted by yooyo:
Keep in mind… You must provide a 2-, 3- or 4-component coordinate (short, int, float or double) in the glVertexPointer call. So calculating height only is not enough.
But using VertexAttribPointer I could send one float per vertex, no? And the other two components are constant, so I would store them in a separate static VBO.
Thanks for the demo!! I’m gonna take a look at it right now!
EDIT: Just watched the waves; looks great! So what do you think, is this method faster than doing a texel fetch in the vertex shader?
According to the table from the ARB_vertex_program spec:
Command                   Sizes    Normalized?  Types
------------------------  -------  -----------  ----------------------------------------------------
VertexPointer             2,3,4    no           short, int, float, double
NormalPointer             3        yes          byte, short, int, float, double
ColorPointer              3,4      yes          byte, ubyte, short, ushort, int, uint, float, double
IndexPointer              1        no           ubyte, short, int, float, double
TexCoordPointer           1,2,3,4  no           short, int, float, double
EdgeFlagPointer           1        no           boolean
VertexAttribPointerARB    1,2,3,4  flag         byte, ubyte, short, ushort, int, uint, float, double
WeightPointerARB          >=1      yes          byte, ubyte, short, ushort, int, uint, float, double
VertexWeightPointerEXT    1        n/a          float
SecondaryColorPointerEXT  3        yes          byte, ubyte, short, ushort, int, uint, float, double
FogCoordPointerEXT        1        n/a          float, double
MatrixIndexPointerARB     >=1      no           ubyte, ushort, uint
Table 2.4: Vertex array sizes (values per vertex) and data types. The
"normalized" column indicates whether fixed-point types are accepted
directly or normalized to [0,1] (for unsigned types) or [-1,1] (for
signed types). For generic vertex attributes, fixed-point data are
normalized if and only if the <normalized> flag is set.
You can’t pass a 1-component vertex position, but you could try passing the position as a texture coordinate and use that information in the vertex shader somehow.
yooyo
@yooyo :
RIGHT! Sorry, I spoke too fast. I knew the demo was about that, but hadn’t checked it out myself.
Originally posted by yooyo:
You can’t pass a 1-component vertex position, but you could try passing the position as a texture coordinate and use that information in the vertex shader somehow.
Look at VertexAttribPointerARB! You can use it to send arbitrary data, whether it’s a position, a texture coordinate, temperature, mood factor, or whatever you might come up with!
Of course, if you use it to store the position, then you cannot use gl_Vertex in the shader; you have to add your own “attribute vec4 position”, or something like that…
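For example (all names made up), a vertex shader for the heightmap case might pull a 1-component height from the PBO-turned-VBO and the constant x/z from the static VBO:

```glsl
// Hypothetical attributes: `height` streams from the buffer filled by the
// render-to-vertex pass; `gridXZ` comes from a separate static VBO.
attribute float height;
attribute vec2  gridXZ;

void main()
{
    // Rebuild the full position from the two streams, then transform as usual.
    vec4 position = vec4(gridXZ.x, height, gridXZ.y, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * position;
}
```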