OpenGL + OpenVG

Hi all

Is there a way to render both OpenGL (3D) and OpenVG (vector 2D) content onto the same context?

Since both render through EGL, will the two fight over who gets the context? Or can they coexist without problems?

Here’s one suggestion I have heard:

  1. create EGL context
  2. render OpenVG to a back-buffer
  3. render OpenGL to EGL context created in step 1.
  4. render OpenVG back-buffer on top of EGL context (containing OpenGL renderings)

Will this work? Will the final context contain BOTH the OpenGL & OpenVG content?

Furthermore… can someone point me to how to get my desktop environment set up with both OpenVG and OpenGL?

Your help is much appreciated! Thanks in advance!

A context, by its very nature, is either OpenVG, OpenGL ES [or, most recently, OpenGL], but not a combination of them.
A context is, if you will pardon the OOP lingo, an instance of a single rendering API.

That said, it’s perfectly valid to create multiple contexts - one for each API you wish to render with, and simply use eglMakeCurrent() to switch between them (re-using the same drawing surface). This will get you the effect you seem to be looking for.
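A minimal sketch of the two-context approach described above (assuming the display, config, surface, and both contexts have already been created; names are illustrative and error checking is omitted):

```c
#include <EGL/egl.h>

/* One frame: draw the 3D scene with GL ES, then the vector overlay
   with VG, reusing the same drawing surface for both contexts. */
void render_frame(EGLDisplay dpy, EGLSurface surf,
                  EGLContext gl_ctx, EGLContext vg_ctx)
{
    /* 3D pass first */
    eglBindAPI(EGL_OPENGL_ES_API);
    eglMakeCurrent(dpy, surf, surf, gl_ctx);
    /* ... glClear(), draw 3D scene ... */

    /* 2D vector overlay on top, same surface, different context.
       Note: no vgClear() here, or it would wipe the 3D content. */
    eglBindAPI(EGL_OPENVG_API);
    eglMakeCurrent(dpy, surf, surf, vg_ctx);
    /* ... vgDrawPath() etc. ... */

    eglSwapBuffers(dpy, surf);
}
```

For this to work, the EGLConfig used to create the surface must advertise both `EGL_OPENGL_ES_BIT` and `EGL_OPENVG_BIT` in `EGL_RENDERABLE_TYPE`.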

The only public implementation I am aware of that implements both OpenVG and OpenGL ES is Hybrid’s old Rasteroid implementation (if you fail to find that [their site has been down for a long time now], see if the Mesa implementation supports both at the same time - I don’t know if it does).

Thanks Ivo for your reply!

My application would require OpenVG content to be displayed on top of the OpenGL content. Would this be possible via switching contexts using eglMakeCurrent()?

Thanks in advance for your help :slight_smile:

Yes - if the EGL layer supports both VG and GL ES, then it will.
The problem is that only a few drivers support both.

Hi Ivo, sorry to be a bother, but here’s another question.

After reading through the OpenVG specs, I found nothing on whether or not you can render OpenVG content to a texture.

  • Is this possible by specifications?
  • If so, will the OpenVG texture be compatible/interchangeable with the OpenGL texture?

If you have EGL and want to render to a VG texture - sure.
Use eglCreatePbufferFromClientBuffer() to create your surface.
If intending to draw to a GL texture, you could try eglBindTexImage() and see if it works [I have not used it].
Word of warning: these functions are not implemented in many EGL implementations.
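A sketch of the eglCreatePbufferFromClientBuffer() route, assuming a valid display, config, and a current OpenVG context (names illustrative, error checking omitted):

```c
#include <EGL/egl.h>
#include <VG/openvg.h>

/* Create a VGImage to receive the rendering... */
VGImage img = vgCreateImage(VG_sRGBA_8888, 256, 256,
                            VG_IMAGE_QUALITY_BETTER);

/* ...and wrap it in a pbuffer surface backed by that image. */
EGLint attribs[] = { EGL_NONE };
EGLSurface pbuf = eglCreatePbufferFromClientBuffer(
        dpy, EGL_OPENVG_IMAGE, (EGLClientBuffer)img, config, attribs);

/* Draw into the pbuffer with the VG context; the VGImage then holds
   the result. Note the VGImage itself is unusable in VG calls while
   it stays bound to the pbuffer. */
eglMakeCurrent(dpy, pbuf, pbuf, vg_ctx);
/* ... vgClear(), vgDrawPath(), ... */
```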

If they are not implemented, try looking for an OpenVG/OpenGL ES extension that does something similar. I could have sworn I’ve seen something on the topic while browsing the extension registry.

If all else fails, you’re stuck with vgReadPixels() from a regular pbuffer [but you’ll suffer the performance penalty of copying pixels between GPU memory and main memory, of course].

The set of EGLImage extensions provides this kind of capability, I believe.

You can create an EGLImage from a VGImage with this extension: … _image.txt

You can then create a texture from your EGLImage with: … _image.txt

I think the extensions have become popular enough that your platform/GLES implementation may well support them.
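A sketch of that EGLImage path, assuming the implementation exposes EGL_KHR_image_base, EGL_KHR_vg_parent_image, and GL_OES_EGL_image (function pointers fetched via eglGetProcAddress(), as the extension specs require; error checking omitted):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

PFNEGLCREATEIMAGEKHRPROC eglCreateImageKHR =
    (PFNEGLCREATEIMAGEKHRPROC)
        eglGetProcAddress("eglCreateImageKHR");
PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES =
    (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
        eglGetProcAddress("glEGLImageTargetTexture2DOES");

/* Wrap an existing VGImage in an EGLImage (VG context passed in)... */
EGLImageKHR image = eglCreateImageKHR(
        dpy, vg_ctx, EGL_VG_PARENT_IMAGE_KHR,
        (EGLClientBuffer)vg_image, NULL);

/* ...then, with the GLES context current, alias it as a texture. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
```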

Thanks jpilon, I went over the documentation on those extensions, and they seem like just what I was looking for. Unfortunately, I’m pretty sure our device doesn’t support these extensions. Do you think there’s a way to convert the OpenVG context/surface into a Texture2D using just standard EGL methods?

At this time I’d also like to discuss another possibility: overlaying the OpenVG & OpenGL surfaces onto the same display. Here’s the pseudo-code:

(in creation)
create ovg surface
create ovg context
create ogl surface
create ogl context

(in main-loop)

  1. set ovg as the current context
  2. bindAPI ovg
  3. clear screen using ovg (vgClear)
  4. render ovg stuff
  5. swap buffer (display, ovg)
  6. set ogl as the current context
  7. bindAPI ogl
  8. (DO NOT clear screen, screen is left dirty intentionally)
  9. render ogl stuff
  10. swap buffer (display, ogl)

Would this work to render OpenVG & OpenGL onto the same display? The only red flag I see here is if swapbuffers actually replaces the entire display with the surface. If that’s the case, we’d probably only see the OpenGL content… which is not the desired result.

What are your thoughts?

If all you care about is compositing the image like that, and not storing it in a texture, then the normal way to do that is:

  • create only one surface
  • use it for both ovg and ogl rendering
  • use vgFinish() instead of eglSwapBuffers when you finish the ovg rendering
  • makecurrent the ogl context with the same surface (you may need to makecurrent() with NULL parameters first, depending on implementation)
  • use the eglSwapBuffers() after finishing the ogl rendering.

Normally though, I imagine you’d probably want to render ogl first, and ovg second.
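The single-surface recipe above might look like this in code (a sketch; names illustrative, error checking omitted, and the EGLConfig is assumed to have both EGL_OPENVG_BIT and EGL_OPENGL_ES_BIT in EGL_RENDERABLE_TYPE):

```c
#include <EGL/egl.h>
#include <VG/openvg.h>

/* VG pass (typically you'd do the GL pass first and VG second,
   so the vector content ends up on top). */
eglBindAPI(EGL_OPENVG_API);
eglMakeCurrent(dpy, surf, surf, vg_ctx);
/* ... vgClear(), draw vector content ... */
vgFinish();                       /* flush VG, but do NOT swap yet */

/* Unbind first - works around drivers that reject making current a
   surface still bound to the previous context. */
eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);

/* GL pass on the same surface, no clear, then a single present. */
eglBindAPI(EGL_OPENGL_ES_API);
eglMakeCurrent(dpy, surf, surf, gl_ctx);
/* ... draw 3D content ... */
eglSwapBuffers(dpy, surf);
```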

I was under the impression the surfaces couldn’t be ‘shared’ by both ogl & ovg? Are they actually share-able assuming they have the same config attributes? This would be great!

Ivo, could you explain further what you meant by this:

  • makecurrent the ogl context with the same surface (you may need to makecurrent() with NULL parameters first, depending on implementation)

What’s the reason for using eglMakeCurrent() with the NULL parameters first?


No - surfaces are sharable (think of them as just a handle to a framebuffer). When you create one, you need to make sure the config supports both ovg and ogl rendering, but I don’t see that as much of a problem for any driver which supports both APIs - chances are high that all valid configs will (but it never hurts to check).

The reason for the NULL makecurrent is that some egl implementations might return errors if the surface you are making current is the same surface that’s bound to the context that you are replacing (giving some surface in use error or something). It shouldn’t happen IMHO, but I think I have run into a bug like that once in some driver or other at some point.

I think the EGL specification mandates this behavior, so it would be a bug if you didn’t get an error when trying to make current a surface that’s already bound to another context. So if you want to write portable EGL code, it’s probably safer to make current with NULL first; however, doing this could have a negative impact on performance, as it could trigger some expensive state updates. So I would say the decision depends on whether you’re focused on a particular platform or not.

Below is my sequence of code.

  1. create ovg context
  2. make ovg context current to the thread
  3. create ovg image object, load with image data and bind ovg image to pbuffer using eglCreatePbufferFromClientBuffer
    pbuffer = eglCreatePbufferFromClientBuffer(dsp, EGL_OPENVG_IMAGE,
    (EGLClientBuffer)ovgimage, config, attrib_list);
  4. create shared context
  5. bind ogles
  6. make shared context current to the thread
  7. Bind ovg texture (pbuffer) to ogl primitives using eglBindTexImage
    glBindTexture(GL_TEXTURE_2D, pbuffer);
    eglBindTexImage(display, pbuffer, EGL_BACK_BUFFER);

In the above implementation, eglBindTexImage returns EGL_BAD_MATCH. The surface and display are shared and created only once.

Can anybody help me find the problem?

In 4), when you create the context, the bound API is still OpenVG - so you’ll be creating an OpenVG context.
Thus your makecurrent will bind an OpenVG context.

Other than that, as I said in another post, I have not used eglBindTexImage(), so I’m afraid the EGL spec can help you more than I can there. Did you set/leave the EGL_TEXTURE_FORMAT surface attribute to EGL_NO_TEXTURE?

I just swapped steps 4 and 5; this leads to a crash while creating the shared context. So I doubt I can really share a context across client APIs!

Also, calling eglMakeCurrent with EGL_NO_SURFACE and EGL_NO_CONTEXT after 3) did not help!

The pbuffer is created in the ovg context, hence I do not touch the EGL_TEXTURE_FORMAT attribute. I think EGL_TEXTURE_FORMAT is an ogles attribute.

No, you can’t share a context across different APIs. Internally, a context is an instance of the current API state (everything from what paints are created and bound, to attributes like VG_LINE_WIDTH, to internal state you never see). Contexts by their very nature are API specific (GL state is very different from VG state, after all). So sharing a VG context with an OpenGL ES context is a mistake.

To render to a texture, you should be binding a VGImage (OpenVG) to a pbuffer (EGL). After you do this, the pbuffer remains valid (because it’s EGL) even after the OpenVG context is made not current (though you won’t be able to use the VGImage again until the pbuffer is destroyed). You should then be able to use the pbuffer as a normal rendering surface for any API (if your implementation crashes doing this, it’s a bug).
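Putting the corrections together, the sequence from the earlier post might be reworked like this (a sketch; one context per API, pbuffer shared between them, names illustrative, error checking omitted):

```c
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <VG/openvg.h>

/* Separate VG context; render the VGImage-backed pbuffer with it.
   The EGL_TEXTURE_* attributes make the pbuffer bindable later -
   without them eglBindTexImage() returns EGL_BAD_MATCH. */
eglBindAPI(EGL_OPENVG_API);
EGLContext vg_ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
EGLint attribs[] = { EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
                     EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
                     EGL_NONE };
EGLSurface pbuf = eglCreatePbufferFromClientBuffer(
        dpy, EGL_OPENVG_IMAGE, (EGLClientBuffer)ovgimage, config, attribs);
eglMakeCurrent(dpy, pbuf, pbuf, vg_ctx);
/* ... render VG content ... */
vgFinish();

/* Bind GLES BEFORE creating the second context, and don't pass the
   VG context as the share context - sharing across APIs is invalid. */
eglBindAPI(EGL_OPENGL_ES_API);
EGLContext gl_ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
eglMakeCurrent(dpy, win_surf, win_surf, gl_ctx);

/* Bind a real GL texture name (not the EGLSurface handle),
   then attach the pbuffer's color buffer to it. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
eglBindTexImage(dpy, pbuf, EGL_BACK_BUFFER);
```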