What is the difference between a VGImage and an EGLSurface? Or to ask a bit differently: what IS an EGLSurface? The OpenVG spec defines a VGImage as “a rectangular collection of pixels”, but that is true for surfaces too, isn’t it? The EGL spec just lists some examples of EGLSurfaces, but doesn’t define what one IS.
There are things you can do with images that you can’t do with surfaces. For example, you can use an image as a surface, but not vice versa. Why is that?
An EGLSurface is part of EGL. Because it’s part of EGL, it may or may not be used by other APIs such as OpenGL ES, OpenGL ES 2, or most recently OpenGL (as well as OpenVG). A surface is something that can be rendered into. Depending on how it was created, it may be able to present itself onto the screen (or wherever the output goes) when you call eglSwapBuffers(). As such, surfaces are linked to the display that the driver is designed to run on. Displays often support formats that OpenVG does not, like RGB332, luminance-alpha, or 24-bit RGB, so those will be the formats EGL allows in the EGLConfig.
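To make the surface-side picture concrete, here is a minimal sketch of the usual EGL setup for rendering OpenVG into a window surface and presenting it with eglSwapBuffers(). This assumes an OpenVG-capable EGL implementation and a valid platform native window handle (`nativeWindow` is a placeholder); error checking is omitted for brevity.

```c
#include <EGL/egl.h>
#include <VG/openvg.h>

/* Sketch: set up an OpenVG-capable window surface and present it.
 * `nativeWindow` must be a valid platform window handle. */
void render_to_window(EGLNativeWindowType nativeWindow)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* Ask for a config that supports both window surfaces and OpenVG;
     * the display driver decides which pixel formats are on offer. */
    const EGLint attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENVG_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint numConfigs;
    eglChooseConfig(dpy, attribs, &cfg, 1, &numConfigs);

    eglBindAPI(EGL_OPENVG_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    EGLSurface win = eglCreateWindowSurface(dpy, cfg, nativeWindow, NULL);
    eglMakeCurrent(dpy, win, win, ctx);

    /* ... vgDrawPath() / vgDrawImage() render into the surface here ... */

    /* Only a window surface presents to the display like this. */
    eglSwapBuffers(dpy, win);
}
```

A pbuffer surface created with eglCreatePbufferSurface() would go through the same motions, except eglSwapBuffers() would have no visible effect, since pbuffers are off-screen.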
A VGImage is part of the OpenVG spec (it’s not defined outside OpenVG). The only way you can render into one is if the EGL implementation supports eglCreatePbufferFromClientBuffer(), which lets you bind a surface to it. Otherwise VGImages cannot be rendered into (e.g. via vgDrawPath()) and can only be rendered onto the drawing surface (or used as a paint for rendering onto the drawing surface). They can never be rendered directly to the screen, only onto a surface which is then displayed on the screen.
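The image-as-surface path described above might look like the following sketch. It assumes `dpy`, `cfg`, and `ctx` were already set up as in a normal OpenVG EGL initialization, that the config advertises EGL_PBUFFER_BIT alongside EGL_OPENVG_BIT, and that the implementation actually supports eglCreatePbufferFromClientBuffer(); the 256x256 size is arbitrary.

```c
#include <stdint.h>
#include <EGL/egl.h>
#include <VG/openvg.h>

/* Sketch: wrap a VGImage in a pbuffer surface so it can be rendered
 * into. Returns the surface; the backing image is written to *out_img. */
EGLSurface surface_from_image(EGLDisplay dpy, EGLConfig cfg,
                              EGLContext ctx, VGImage *out_img)
{
    /* Create the image in a format the config can match. */
    VGImage img = vgCreateImage(VG_sRGBA_8888, 256, 256,
                                VG_IMAGE_QUALITY_BETTER);

    /* Bind the image as the color buffer of a pbuffer surface.
     * The VGImage handle is passed as the client buffer. */
    EGLSurface surf = eglCreatePbufferFromClientBuffer(
        dpy, EGL_OPENVG_IMAGE, (EGLClientBuffer)(intptr_t)img, cfg, NULL);

    /* While the image is bound as the current surface, it must not
     * also be used as a source (vgDrawImage or paint) in that context. */
    eglMakeCurrent(dpy, surf, surf, ctx);

    /* ... vgDrawPath() now renders into the image ... */

    *out_img = img;
    return surf;
}
```

Once you release the binding (make a different surface current and destroy the pbuffer), the VGImage can again be drawn with vgDrawImage() or used as pattern paint like any other image.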
Ultimately, at the most basic level, both surfaces and images are just a series of pixels, so some drivers may implement both similarly under the hood, but that’s implementation-specific. Even then there are differences, though: surfaces may need to support things like GL’s mipmapping, while VGImages need to support things like child images, texture sampling, color conversion, and so on.