I guess I should be able to, for example, render to a texture with a well-defined format and use the pixels as vertex attributes without needing to copy them.
I’ll ignore the question of why you would even want to do that these days with transform feedback, image load/store, and SSBOs available. So instead I’ll focus on how that would work.
OpenGL has no concept of a “well-defined format”. It has formats with particular pixel sizes, but those say nothing about the important questions of swizzling, internal storage row alignment, and so forth. So you now want to expand image formats to the point where they can answer and control these questions? How would that work? And what about hardware that can’t implement certain combinations?
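To make that concrete, here is a small sketch, in plain Python rather than OpenGL itself, of two things a bare pixel size doesn’t tell you: the per-row padding governed by GL_PACK_ALIGNMENT/GL_UNPACK_ALIGNMENT, and channel order. The function names are illustrative, not part of any API; the stride formula mirrors the rule OpenGL applies during pixel transfers.

```python
def padded_row_stride(width, bytes_per_pixel, alignment):
    """Bytes per row after padding; alignment may be 1, 2, 4, or 8
    (the values glPixelStorei accepts for pack/unpack alignment)."""
    row = width * bytes_per_pixel
    return (row + alignment - 1) // alignment * alignment

# A 3-pixel-wide RGB row holds 9 bytes of actual data...
assert padded_row_stride(3, 3, 1) == 9    # tightly packed
assert padded_row_stride(3, 3, 4) == 12   # 3 padding bytes per row
assert padded_row_stride(3, 3, 8) == 16   # 7 padding bytes per row

# Swizzling: the same 4 bytes per pixel can be laid out RGBA or BGRA.
def swizzle_rgba_to_bgra(pixel):
    r, g, b, a = pixel
    return (b, g, r, a)

assert swizzle_rgba_to_bgra((0x11, 0x22, 0x33, 0x44)) == (0x33, 0x22, 0x11, 0x44)
```

Vertex attribute pointers, by contrast, are described by an explicit stride and component order you supply; none of that metadata exists on a texture, which is the gap the quoted proposal glosses over.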
But seeing that APIs like OpenCL are widely available, it simply cannot be that these things are impossible to do.
I admit that I’m not exactly up on OpenCL, but I’m pretty sure that OpenCL images can’t do what you’re wanting either. The OpenCL concept of buffers is distinct from its concept of images, just as the OpenGL concept of buffer objects is distinct from textures. You can’t pass an OpenCL image where a buffer is expected, and vice versa.
So I’m not seeing your point.
As a developer using an API, one either wonders about the obvious absence of basic functionality or one does not.
The ability to pretend that an image is a buffer object is not “basic functionality” by any reasonable definition of that term.
Again, the question is whether the API should be designed in a way that ensures optimal performance at the cost of usability.
Usability is in the eye of the beholder. And not being able to use textures as sources of vertex data is hardly limiting in terms of usability. And yes, a well-designed performance-oriented API should be designed in a way that prevents you from doing things that needlessly lower performance.
Also, this conversation is very confusing. We’ve gone from a fairly simple, not-entirely-unreasonable request to be able to read parts of images back to nonsense like binding images as buffer objects and handing OpenGL random pointers that it’s expected to use as buffers and images. These things have nothing to do with one another.