OpenGL purely for image processing…?

Hi folks,

I just joined up, good to be here. I used to write OpenGL games when I was 17-18 but other things got in the way (25 now). Now I’m back! But in a different context.

The problem…

Context:
I am writing an application that renders images, then passes them to FFmpeg to create an MPEG video of what I’ve rendered. This means I don’t actually have to render to the screen. In fact I need the data in YCbCr format, so I render the image, read back the RGB data, convert it to YCbCr, and continue with the next frame. When all frames have been processed I pipe them to FFmpeg in ‘yuv4mpeg’ format.
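For the conversion step, a minimal per-pixel sketch using the full-swing BT.601 (“JPEG”) coefficients. This is illustrative only: yuv4mpeg typically expects studio-range (Y in 16–235) planar 4:2:0, so a real pipeline would also rescale the range and subsample the chroma planes.

```c
#include <stdint.h>

/* Clamp a double to the 0..255 byte range, with rounding. */
static uint8_t clamp8(double v)
{
    return v < 0.0 ? 0 : v > 255.0 ? 255 : (uint8_t)(v + 0.5);
}

/* Full-swing BT.601 RGB -> YCbCr for one pixel. */
static void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                         uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    *y  = clamp8( 0.299    * r + 0.587    * g + 0.114    * b);
    *cb = clamp8(-0.168736 * r - 0.331264 * g + 0.5      * b + 128.0);
    *cr = clamp8( 0.5      * r - 0.418688 * g - 0.081312 * b + 128.0);
}
```

Running this over the buffer returned by glReadPixels, row by row, gives the luma/chroma data to feed into the yuv4mpeg stream.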

Question:
Can I set up OpenGL such that it isn’t rendering to a window, but rather only a buffer? Ideally the application shouldn’t have a window at all.

Question 2:
If I want to do this in parallel, can OpenGL render as above without crazy performance expense in ‘switching contexts’?

Thanks for any help you can offer.
JB.

I’d have to check again carefully… but I think GL 3.1 allows creating a GL context without requiring a system-defined window.

In any case, it doesn’t matter much, since you’ll probably want to create a FramebufferObject (FBO) for off-screen rendering anyway. These are fast and can write to a variety of texture formats, and switching between FBOs is relatively cheap. Switching between different textures within the same FBO is faster still, if that’s what you need to do.
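A minimal sketch of setting up such an off-screen target with the EXT flavor of the extension. This assumes a current GL context and that the EXT entry points are loaded (on Windows, via wglGetProcAddress); `width`, `height`, and the function name are illustrative.

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: build a width x height RGBA8 texture and attach it to an
 * EXT framebuffer object. Returns the FBO name, or 0 on failure. */
static GLuint create_offscreen_target(GLsizei width, GLsizei height,
                                      GLuint *tex_out)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT) {
        glDeleteFramebuffersEXT(1, &fbo);
        glDeleteTextures(1, &tex);
        return 0;
    }

    *tex_out = tex;
    return fbo;   /* render into it, then glReadPixels() the frame back */
}
```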

Things have moved ahead rapidly in the last few years!

Thanks for the info =)

I’ll have a google around on the FrameBufferObject.

Hi again,

Sorry, it’s been so long since I’ve played with this.

The call
‘glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)’
is returning 0.

I presume the graphics card needs the GL_EXT_framebuffer_object extension to make this work? The PC I’m testing this on has a rubbish Intel 945G.

I used a program called GLView which has a database with it stating that this card doesn’t support it. I presume the only thing to do is buy a decent card?
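Rather than relying on a database, the extension can also be checked at runtime from the string glGetString(GL_EXTENSIONS) returns with a current context. A small sketch; the search is written to avoid matching a substring of a longer extension name:

```c
#include <string.h>

/* Return 1 if `name` appears as a whole, space-delimited token in the
 * extension list `list` (the format glGetString(GL_EXTENSIONS) uses). */
static int has_extension(const char *list, const char *name)
{
    size_t len = strlen(name);
    const char *p = list;

    while ((p = strstr(p, name)) != NULL) {
        int start_ok = (p == list || p[-1] == ' ');
        int end_ok   = (p[len] == '\0' || p[len] == ' ');
        if (start_ok && end_ok)
            return 1;
        p += len;
    }
    return 0;
}

/* Usage, with a current GL context:
 *   has_extension((const char *)glGetString(GL_EXTENSIONS),
 *                 "GL_EXT_framebuffer_object");
 */
```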

What version of OpenGL did this extension appear in? Does the card support this extension in hardware, or would updated drivers help?

Thanks for the advice.
JB.

Your card supports the extension in hardware, but this functionality is not exposed in the Windows drivers. Last time I checked, Intel’s drivers didn’t even support pbuffers, so no luck doing off-screen rendering with this card. Complain to Intel. :slight_smile:

For what it’s worth, I think that FBOs are exposed in the Linux Intel drivers.

Other solutions:

  1. Buy a real graphics card.
  2. Switch to Mesa3d for offscreen rendering (no hardware acceleration this way).

Your presumption is correct; you’ll need support for one of:
OpenGL 3.x
GL_ARB_framebuffer_object
GL_EXT_framebuffer_object

GL_EXT_framebuffer_object never appeared in any core OpenGL version, because it’s an extension, not core functionality. It appeared a few years back with the GeForce 6 cards, when OpenGL 2.0 was current.

To get OpenGL 3.x and GL_ARB_framebuffer_object you’ll need a Radeon 4xxx or 3xxx, or a GeForce 2xx, 9xxx or 8 series; all of these are capable cards with the same underlying hardware feature set. The older generation (GeForce 7/6) is best avoided now, as those are not DX10 / OpenGL 3.x capable cards.

For what it’s worth, I think that FBOs are exposed in the Linux Intel drivers.

Fortunately the Mesa DRI drivers do support the framebuffer_object extension in the latest 7.5 release, though it doesn’t seem stable yet. Should be fixed soon.
It’s quite recent (released on July 17), and I can’t wait to test it as soon as possible.

Thanks everyone for all the advice. Much appreciated.

Would I be right in saying that this isn’t going to work in a multithreaded fashion without a lot of synchronization code? i.e. two threads start, each wanting to render its own data to the FBO; presumably each will need to call wglMakeCurrent( *hDC, *hRC ), and consequently one thread will need to block until the other completes?

As opposed to software rendering, where each thread just goes off and does its own thing, no synchronization code required (assuming the data on each thread is independent).

Thanks again.

Since the data is independent you can create two contexts, one for each thread, and render completely in parallel.

Hi,

Excuse my ignorance on this subject.

Let’s just say I need a window to use OpenGL, forgetting the possibility that maybe I don’t in OGL 3.1.

So I get my DeviceContext from the hWnd. Say I do this

*hDC = GetDC( hWnd )

So say I now spawn 2 threads, each initialise with

*hRC = wglCreateContext( *hDC );

Don’t they both need to call

wglMakeCurrent( *hDC, *hRC );

in order to render? If thread 1 calls this first and thread 2 second, won’t thread 2’s context be the active one? Then I won’t be able to capture the rendered data from thread 1?

Thanks again, much appreciated.
JB

wglMakeCurrent has to be called from the rendering thread, and it sets the RC only for the thread it was called from.
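In other words, the “current context” is per-thread state, so each worker can own its own RC. A hedged sketch of one worker’s lifetime, assuming the shared HDC is passed in; note a single RC can still be current on only one thread at a time.

```c
#include <windows.h>
#include <GL/gl.h>

/* Each thread creates its own RC and makes it current on itself.
 * Two threads with two RCs can each have a current context at once. */
static DWORD WINAPI render_thread(LPVOID param)
{
    HDC   hDC = (HDC)param;
    HGLRC hRC = wglCreateContext(hDC);

    wglMakeCurrent(hDC, hRC);    /* current for THIS thread only */

    /* ... create an FBO, render frames, glReadPixels ... */

    wglMakeCurrent(NULL, NULL);  /* release before deleting */
    wglDeleteContext(hRC);
    return 0;
}
```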

If your rendering threads are not waiting for CPU-generated input, using multiple threads will not gain you any performance.

OpenGL rendering was, and still is, single threaded.

Thanks for that…

OpenGL rendering was, and still is, single threaded

I guess this was the answer to the question I hadn’t yet formed in my head.

The application flow is…

Thread1: GPU_Render -> CPU_Process -> GPU_Render -> CPU_Process -> etc
Thread2: GPU_Render -> CPU_Process -> GPU_Render -> CPU_Process -> etc
Thread3: GPU_Render -> CPU_Process -> GPU_Render -> CPU_Process -> etc

Where all 3 threads may be processing at the same time.

I’m getting a better picture of everything now, for which I owe all contributors to the ‘thread’ (lol) a great thanks.

My only remaining question is what happens when the 3 threads above all call wglMakeCurrent at nearly the same time. Is wglMakeCurrent thread-safe? Will the application intermittently fail?
Performance aside, is this supported? Will a ‘render cycle’ complete and block the other threads until they get the active context? Or will the ‘active’ status be stolen, leaving a thread without the active context mid-render?

If the application won’t fail, I can run performance tests as to whether this ‘blocking’ is still faster than rendering purely in software.

EDIT:
I’ve seen resources on the web stating

If you wish to render more than one window at once, multiple rendering contexts are allowed… Another thing to remember is that OpenGL is thread-safe.

The complexity here is that I don’t have multiple hWnds, as this is all happening in one window. Unless of course I create an invisible window per thread, but that just seems wrong. Again, I’m looking to render to multiple textures in parallel via the FBO, with independent data. If there were multiple windows, I would have multiple hDCs and thus wouldn’t need to worry about calling wglMakeCurrent in parallel.

Am I making sense? :eek:

Thanks Again
JB.

This could work for you.

Thread1: CPU_Process -> GPU_Render -> CPU_Process -> GPU_Render -> etc
Thread2: GPU_Render -> CPU_Process -> GPU_Render -> CPU_Process -> etc

Or, if your CPU processing needs (on average) twice as long as the GPU processing, you could use 3 threads, and so on.
Some timing logic would be needed.