Regarding hardware-accelerated bitmap blitting

Hi all,

I’m new to OpenGL. Currently I’m working on an i5 processor with an integrated GMA HD. I found that DRI (Direct Rendering Infrastructure) is the path used for H/W acceleration. Using OpenGL, how can I do bitmap blitting that will use the hardware acceleration path? Can anyone provide a sample OpenGL bitmap blitting application that uses H/W acceleration? Are there any documents available for the H/W-accelerated OpenGL API (like bitmap blit, bitmap blend, bitmap stretch blit, etc.) and their implementation?

Thanking you in anticipation,

Regards,
ShibuThomas

The simple way is to draw quads built from triangle primitives and apply texturing to them, then combine them using blending or a fragment shader.

This should be hardware accelerated.
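As a rough sketch of the textured-quad approach suggested above: upload the bitmap once as a texture, then draw it on a quad each frame. This assumes a GL context is already current (e.g. created via GLUT or SDL) and that `pixels` holds `width * height` RGBA bytes; the function names are illustrative, not from any particular library.

```c
/* Hedged sketch: "blit" a bitmap by texturing a quad.
   Assumes a current GL context; legacy fixed-function GL for brevity. */
#include <GL/gl.h>

GLuint upload_bitmap(const unsigned char *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Linear filtering; no mipmaps needed for 1:1 blits */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

void blit_textured_quad(GLuint tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);                 /* two triangles work equally well */
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
}
```

Changing the vertex positions gives you stretch blits for free, and enabling `GL_BLEND` before the draw gives you blending.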

Seconded on the “quads with texturing” recommendation. The legacy OpenGL 2D path (glBitmap/glDrawPixels) is quite old, is unlikely to be hardware accelerated (or optimally hardware accelerated) on consumer hardware, and even less likely on Intel. You would only make trouble for yourself if you tried to use it the way that you want.

Your requirements don’t really stretch too far beyond the first few NeHe tutorials so - while they’re badly outdated for modern OpenGL - they should give you everything you need to get started.

http://nehe.gamedev.net/lesson.asp?index=01

Mmm, mumble. What are we talking about when we say hardware-accelerated blitting?
To be hardware accelerated, the data must be in video memory.
Sending the image to the GPU is a DMA transfer into some memory location in VRAM. There is no acceleration here; only mapping the memory and using memcpy can help.
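The “map the memory and use memcpy” idea can be sketched with a pixel buffer object: the only CPU-side copy goes into driver-managed memory, and the driver DMAs it into the texture. This assumes GL 2.1+ (or ARB_pixel_buffer_object) and a current context; the identifiers are illustrative.

```c
/* Hedged sketch: upload an RGBA8 image through a mapped PBO.
   Assumes a current GL context with pixel buffer object support. */
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

void upload_via_pbo(GLuint pbo, GLuint tex,
                    const unsigned char *pixels, int width, int height)
{
    size_t size = (size_t)width * height * 4;   /* RGBA8 */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    /* Orphan the old storage so the driver need not stall */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, pixels, size);              /* the only CPU copy */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* With a bound unpack PBO, the data pointer is a byte offset */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```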
Then, to display that data in the framebuffer, you have two possibilities.
[ul][li]Transfer the image as a texture, then display it on a quad.
Pros: static texture memory is the fastest, and you can easily transform your picture (rotate, scale).
Cons: you are using the complete pipeline (vertex transformation, shaders to display the image), and if you want to transform the picture you will run into a lot of aliasing problems and have to use mipmapping (more memory, more transfer time).[/li]
[li]Transfer the image into an FBO, then blit it to the framebuffer.
Pros: real blitting; if you use the correct parameters you are doing a raw copy from the source to the destination, all inside the GPU.
Cons: FBO memory is usually slower if you set the WRITE flags, and you probably have some memory overhead. You can’t do transformations.[/li][/ul]
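The second option can be sketched with glBlitFramebuffer (GL 3.0 / ARB_framebuffer_object): attach the image texture to a read framebuffer and blit it straight into the window. This assumes `tex` already holds the bitmap and a context is current; in real code you would create the FBO once, not per blit.

```c
/* Hedged sketch: raw GPU-side copy via glBlitFramebuffer.
   Assumes a current GL 3.0+ context and a texture holding the bitmap. */
#include <GL/gl.h>
#include <GL/glext.h>

void blit_via_fbo(GLuint tex, int width, int height, int dst_x, int dst_y)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  /* default framebuffer */
    /* Same-size rectangles with GL_NEAREST: a raw copy, no resampling */
    glBlitFramebuffer(0, 0, width, height,
                      dst_x, dst_y, dst_x + width, dst_y + height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
}
```

Passing a larger destination rectangle turns this into a stretch blit, though then the GL_NEAREST/GL_LINEAR filter choice starts to matter.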
Conclusion: try both and do performance tests.

Is there any sample OpenGL test application for bitmap blitting? It would be really helpful if someone could provide some useful links.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.