IGLX capabilities versus current OpenGL (Seeking straight dope)

As succinctly as I can put it: who knows exactly what the requirements (limitations) of IGLX are? By IGLX I mean indirect GLX, which is probably not the same thing as AIGLX, but probably also not necessarily unaccelerated.

I feel like I’ve been trying to figure this out my entire life. There are no primary documents online. Unless you’re in the X.org dev group, working with the code, I don’t think anyone really knows.

My impression is that if you use IGLX you are restricted to pretty much the original OpenGL feature set: before shaders, before framebuffer objects… I think. I don’t know if this is true or not. I’m pretty sure it’s so for shaders; framebuffer objects matter too, since they’re important for offscreen rendering with indirect GLX.

I think research facilities must depend on IGLX. I’ve been depending on it more and more ever since I started working primarily on Windows machines, where GLX only works via IGLX. I don’t know if it’s an attractive feature to end users, but for cross-platform work it’s nice to debug code without the inconvenience of changing computers.

Perhaps there are pertinent Linux man pages. I regularly use both IGLX and DRI with different X servers on Windows. When using IGLX, most features fail, and the render code makes do with what works. My impression is that IGLX’s time in the sun is long over. Maybe an upgrade is long overdue.

EDITED: This (https://www.khronos.org/registry/OpenGL/specs/gl/glx1.4.pdf) may be helpful? I found it just now while closing out browser tabs.

P.S. I’m interested in “render-to-texture”, “multi-render-target”, and “instancing” effects, more so than shaders. These are fixed-function effects, and I don’t know whether they are compatible with IGLX or not. It’s pretty inconvenient to make do without them. Maybe different servers (drivers?) have different degrees of support, or maybe from an OpenGL perspective they are just extensions that need to be queried. You can see from my choice of terminology that I’m more familiar with Direct3D than OpenGL. I think maybe IGLX is frozen at a point before these technologies emerged.

It’s unclear, but are you talking about “GLX protocol”, which defines “GLX opcodes”, and which provides very basic support for remote rendering of old GL apps?

If so, I’ve never heard it called what you called it. Search the forums for the terms I mentioned and you’ll find more info on this.

In short, it’s probably not a great idea to depend on this feature as it hasn’t been extended to support modern OpenGL. Consider instead using something like VNC.

The other meaning of GLX that I’m aware of refers to an API / window-system integration layer allowing OpenGL to work within the X window system (on UNIX/Linux typically), just as WGL provides this on MS Windows, and AGL provided this for some Apple platforms. However, this doesn’t seem to fit with what you’re talking about, since the GLX API doesn’t limit the OpenGL feature set available to the user.

This was puzzling to me. What do you mean here?

He’s referring to the GLX protocol, i.e. indirect rendering.

It’s basically dead at this point. The most recent version of the GLX specification (1.4) is dated December 2005, and there’s no indication there will ever be another version. Many of the recent additions would be impossible to fit into a client-server architecture. E.g. ARB_vertex_buffer_object specifies the GLX protocol for a MapBuffer request as returning a pointer to the mapped memory. Clearly that isn’t going to work if client and server are on different systems.
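To make that concrete, here is a minimal sketch (my own illustration, not taken from any spec; the helper name is made up) of the problem:

/* Sketch only: why MapBuffer resists a client/server split.  Assumes a
 * current context and that the GL 1.5 / ARB_vertex_buffer_object entry
 * points are declared and resolved (e.g. via GLEW or GL_GLEXT_PROTOTYPES). */
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

void upload_vertices(GLuint vbo, const float *verts, size_t bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)bytes, NULL, GL_STATIC_DRAW);

    /* The return value is a pointer into memory owned by the GL
     * implementation.  If the server lives on another machine, no pointer
     * the client receives can refer to that memory, so there is nothing
     * sensible for a GLX protocol reply to carry. */
    void *dst = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    if (dst != NULL) {
        memcpy(dst, verts, bytes);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}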

At this point, even X11 itself is on the endangered list. There’s plenty of support for abandoning it in favour of a simpler PC-style graphics architecture.

EDITED: WSL is gaining traction on Windows, and it depends on IGLX. Cygwin’s X server can do DRI for Cygwin apps, but not for WSL apps, since those run purely over the network. I think it’s a useful technology. Some obscure OpenGL features may be incompatible, but most OpenGL features boil down to resources like textures and vertex buffers, and those would marshal over the wire just fine. Transporting a shader over a network is certainly not infeasible.
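For instance (just a sketch of my own, nothing IGLX-specific): a GLSL shader enters the API as a plain string, and a string is trivial to carry in a wire protocol, unlike the mapped-pointer case mentioned above.

/* Sketch: shader source is just text, which a wire protocol could carry
 * easily.  Assumes a GL 2.0 context with the entry points declared and
 * resolved (e.g. via GLEW or GL_GLEXT_PROTOTYPES). */
#include <GL/gl.h>
#include <GL/glext.h>

GLuint compile_fragment_shader(const char *src)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);

    /* The entire payload here is a string: easy to put on the wire. */
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glDeleteShader(sh);
        return 0;
    }
    return sh;
}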

You might also note that Xorg is progressively dying. Most Linux distributions, desktops, and window managers are migrating to Wayland, so you might be interested in looking that way instead of Xorg (I actually don’t know whether Wayland can do indirect rendering, though). There is also an XWayland project that adds a GLX layer, but AFAIK it is not optimal and has some major bugs.
The Wayland transition started about 10 years ago, but it has been accelerating in the last few years. If I’m not wrong, GNOME uses Wayland by default in the latest releases of the major distributions.

I don’t know. In the research center where I worked, we simply did OpenGL on the local machine in order to use everything the hardware could give. But this could depend on the kind of research, of course. What exactly are you trying to do?
Nowadays it costs less to buy several desktops or laptops with decent hardware than brute-force machines with several high-end graphics cards to use as (cluster) servers (we just faced this issue again at my current company). If you need cluster rendering, then having different machines render different portions of the screen and send the results to the final display over the network is certainly the best thing to do at this time.

This article is more than 10 years old.

This is not the same. I don’t see Microsoft spending money to support dying Linux architectures. They haven’t even wanted to support a GL newer than 1.1 for more than 20 years. So to support and revive indirect GLX…

Also note that other protocols can do OpenGL remotely, like RDP or X11 SSH tunneling.

@Silence, much of what you say here seems off-topic to me. As for Wayland, prognosticating and actually using APIs are two different things. There is less information about Wayland on the web than about IGLX. I’m grateful to work primarily on Windows; at least there we have some reference materials. I’ve looked into Wayland, but it’s one of those things that has to be installed, and from what I’ve read there is a layer that translates X to Wayland. I don’t know how it works, but I bet there is no way to use it with Windows.

I work primarily in graphics, so most projects I’m involved with have a graphical element. With IGLX I can at least run software built with GCC to confirm it works. That’s the boat I’m in, and I expect many others are in it too. There’s a lot of work happening on X.org’s collaborative development servers.

This article is more than 10 years old.

That’s not why the link is there, but 10yrs is not a long time. In 10yrs very little has changed.

did OpenGL on the local machine in order to use everything the hardware could give.

EDITED: FWIW this is the principle behind IGLX. There is another thing called “AIGLX” that is, I believe, just a misnomer that really confuses the subject, because the A stands for “accelerated”, which long led me to assume that IGLX was unaccelerated. But they don’t seem to be related. I think AIGLX is a hack (an accidental technique) used for framebuffer effects in Linux compositors; it isn’t actually remote, and is only indirect in the sense that it takes over the indirect path in order to generate an image that a GLX extension can turn into a texture for the compositor to manipulate.

P.S. Believe me, I’ve looked into using OpenGL ES (2) with Windows for a long time. Wayland uses EGL. The OpenGL world is very fractured, and you have Vulkan too. It’s very hard to write code that will work in different environments. What I’ve found is that for cross-platform projects, using a limited subset of OpenGL is easiest. I’ve used ES with WebGL and like it. I wish it were made widely available so there could be some kind of convergence. Everything in OpenGL is couched as an “extension”, which makes the whole thing very hard to navigate. At least with big events like ES 2 a lot of the extension cruft gets wiped away. That’s what I wish Khronos would advocate for, but it never happens like that. It’s all too confusing to deal with.

Your topic isn’t that clear and spreads in many directions, so I might well have been off-topic. I’m only just starting to understand what you are looking for (see below). Plus, you are talking about things that are old and almost no longer used, so what could you expect?

AIGLX was started because some old parts of the XFree86 GLX code were closed source. It was merged into Xorg maybe 10 years ago.

So what you are primarily looking for is a means to test that a program written on Windows also runs on Linux. AFAIK virtual machines can provide acceleration (see VMware or VirtualBox, for example). If, for some reason, that doesn’t work, you can use Mesa in software mode. Dual boot is another cheap option; that one will save you from fighting against time.
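For example, with a Mesa-based libGL, forcing the software rasterizer is just an environment variable (the program name below is only a placeholder):

# Sketch: run a GL program on Mesa's software rasterizer (no GPU needed):
export LIBGL_ALWAYS_SOFTWARE=1
./your_gl_program    # placeholder for the program under test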

Sure. XFree86 is still alive after all.

I believe you have been strongly focused on one thing for all this time. But everything is changing, and in computer graphics more quickly than you think. 10 years ago, no one knew about Vulkan, nor even about GLnext, or that Apple would stop supporting OpenGL. Khronos has managed OpenGL for only 13 years. 10 years ago, no one would have believed a game could run with real-time ray tracing either. Finally, 10 years ago indirect rendering was already an ancient feature.

Originally, OpenGL support on Linux was through Mesa, which was a library which performed 3D rendering in software. The rendered image was displayed in the window using XPutImage. When the first 3D accelerators came out, support was added to Mesa to use the hardware for 3D rendering. But still this didn’t involve the X server beyond arbitrating access to the video hardware and the final blit. And it required that the client was running on the system with the video hardware.

AIGLX was a project initiated by RedHat to move the OpenGL implementation into the X server, allowing rendering to be accelerated even when the client wasn’t running on the same system as the X server (indirect rendering). The AIGLX name largely stopped being used when it stopped being a self-contained project and became part of the core X.org code base.

Prior to AIGLX, X servers didn’t generally support the GLX extension. There was a non-accelerated GLX implementation (Utah GLX), but it wasn’t widely used. If you didn’t need network transparency, the client-side implementation provided by Mesa was sufficient. If you wanted an OpenGL-capable X terminal, you probably used an SGI system.

The question is: what are the capabilities of IGLX? I.e., a list of which OpenGL APIs/features it fulfills, either by rule or by happenstance on different systems. The topic is looking for hard facts from users/developers with close knowledge of the technologies.

This is how I saw it, but lately I’ve done more digging, and either the sources I came across are mistaken (which I can readily sympathize with, given the opaqueness of the materials) or there are common misconceptions, born out of historical accidents, poor choices of project names, or just a general lack of transparency on the part of the authors.

I tend to think that IGLX always did forward OpenGL API calls to the server, except when that was impossible. I find it difficult to imagine that “indirect rendering” as a term of art would just mean transporting a pre-rendered framebuffer.

I don’t know if the Wikipedia articles are thorough or not, but as an end-user I come away from them without a clear cut understanding of the bits and pieces.

Direct rendering is when there’s communication between the client and the driver/hardware which bypasses the connection to X server. Indirect rendering is when the only communication is through GLX protocol over the X connection (i.e. the socket created by XOpenDisplay). If the client and X server are on different systems, only indirect rendering is available. If they’re on the same system, either direct or indirect rendering can be used. Direct rendering is requested by passing True as the last parameter to glXCreateContext.
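A minimal sketch of what that looks like in code (illustration only; error handling and window creation are omitted, and the visual attributes are just an example):

/* Sketch: request direct vs. indirect rendering with GLX. Link with -lGL -lX11. */
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);          /* the X connection        */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* Last argument: True asks for direct rendering (bypasses the X
     * connection); False forces indirect rendering, where every GL call
     * travels as GLX protocol over the XOpenDisplay socket.              */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

    /* The request is only a request; check what we actually got.         */
    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes" : "no (GLX protocol)");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}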

Client-side rendering was just something Mesa did when it was a stand-alone library, before it became a core part of Xorg.

It’s not that simple, obviously. The indirect mode does not have full OpenGL. So I don’t think quoting paragraphs from the GLX manual gets us anywhere here.

EDITED: FWIW, the simplest way to look at it is (maybe) that IGLX is hard-frozen at some version of GLX or OpenGL. But if so, that’s what I’m asking, since I don’t know which version.

On a Linux box with Xorg installed as the X server, look at the glxproto.h header. Here, this file is contained in the xorgproto-devel package.

It defines the opcodes and structures used to pass the various GLX and OpenGL API call requests and replies, as well as the X protocol structures in which those requests and replies are carried (GLXRender, GLXRenderLarge, etc.; see the GLX spec for references).

For GLX protocol specifications, see the “GLX Protocol” section in the various OpenGL extension specifications. For example:

It’s also worth mentioning that some GPU vendors have extensions to the GLX protocol which can be enabled to expand the GL functionality supportable via GLX protocol for X indirect rendering. NVIDIA is one of them. For details, see the “Unofficial GLX” references in the Linux NVIDIA GL driver’s README.txt.

To obtain it, download the Linux NVIDIA GL driver, extract it with sh *.run --extract-only, and then view the README.txt file. Some excerpts:

NVIDIA Accelerated Linux Graphics Driver README and Installation Guide

    NVIDIA Corporation
    Last Updated: Sun Jul 21 04:54:59 CDT 2019
    Most Recent Driver Version: 430.40

...

11H. USING UNOFFICIAL GLX PROTOCOL

By default, the NVIDIA GLX implementation will not expose GLX protocol for GL
commands if the protocol is not considered complete. Protocol could be
considered incomplete for a number of reasons. The implementation could still
be under development and contain known bugs, or the protocol specification
itself could be under development or going through review. If users would like
to test the client-side portion of such protocol when using indirect
rendering, they can set the __GL_ALLOW_UNOFFICIAL_PROTOCOL environment
variable to a non-zero value before starting their GLX application. When an
NVIDIA GLX server is used, the related X Config option
"AllowUnofficialGLXProtocol" will need to be set as well to enable support in
the server.

...

Option "AllowUnofficialGLXProtocol" "boolean"

    By default, the NVIDIA GLX implementation will not expose GLX protocol for
    GL commands if the protocol is not considered complete. Protocol could be
    considered incomplete for a number of reasons. The implementation could
    still be under development and contain known bugs, or the protocol
    specification itself could be under development or going through review.
    If users would like to test the server-side portion of such protocol when
    using indirect rendering, they can enable this option. If any X screen
    enables this option, it will enable protocol on all screens in the server.

    When an NVIDIA GLX client is used, the related environment variable
    "__GL_ALLOW_UNOFFICIAL_PROTOCOL" will need to be set as well to enable
    support in the client.


...Unofficial GLX protocol support exists in NVIDIA's GLX client and GLX server
implementations for the following OpenGL extensions:

   o GL_ARB_geometry_shader4

   o GL_ARB_shader_objects

   o GL_ARB_texture_buffer_object

   o GL_ARB_vertex_buffer_object

   o GL_ARB_vertex_shader

   o GL_EXT_bindable_uniform

   o GL_EXT_compiled_vertex_array

   o GL_EXT_geometry_shader4

   o GL_EXT_gpu_shader4

   o GL_EXT_texture_buffer_object

   o GL_NV_geometry_program4

   o GL_NV_vertex_program

   o GL_NV_parameter_buffer_object

   o GL_NV_vertex_program4

Until the GLX protocol for these OpenGL extensions is finalized, using these
extensions through GLX indirect rendering will require the
AllowUnofficialGLXProtocol X configuration option, and the
__GL_ALLOW_UNOFFICIAL_PROTOCOL environment variable in the environment of the
client application. Unofficial protocol requires the use of NVIDIA GLX
libraries on both the client and the server. Note: GLX protocol is used when
an OpenGL application indirect renders (i.e., runs on one computer, but
submits protocol requests such that the rendering is performed on another
computer). The above OpenGL extensions are fully supported when doing direct
rendering.

GLX visuals and FBConfigs are only available for X screens with depths 16, 24,
or 30.
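Putting the two halves together, the setup would look roughly like this (a sketch based only on the excerpt above; the config section contents and the application name are placeholders):

# Server side: X config (e.g. in the Screen section of xorg.conf):
Section "Screen"
    Identifier "Screen0"
    Option     "AllowUnofficialGLXProtocol" "true"
EndSection

# Client side: set the environment variable before launching the GLX app:
export __GL_ALLOW_UNOFFICIAL_PROTOCOL=1
./my_glx_app        # placeholder for your indirect-rendered application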

Also, apologies for not immediately picking up on your “indirect GLX” reference. I’ve never seen it referred to as “IGLX”; more commonly remote or indirect X rendering, or GLX protocol.

When speaking of GL indirect rendering via GLX Protocol, the former (forwarding the OpenGL calls to the server) is correct. The latter (shipping a pre-rendered framebuffer across the network) is more what a VNC client does (such as RealVNC or TightVNC). For a list of some of these, free and otherwise, see: Comparison of remote desktop software (Wikipedia)

Thanks for the materials, @Dark_Photon; I look forward to them. I don’t believe I’ve ever seen an extension spec with a section like that, which likely means only older extensions (ones taken for granted) have them.

Also, apologies for not immediately picking up on your “indirect GLX” reference. I’ve never seen it referred to as “IGLX”; more commonly remote or indirect X rendering, or GLX protocol.

FWIW, “IGLX” is the name you use on the X server command line to enable indirect GLX: +iglx. And I did write “indirect glx” to be plain :wink: I’m glad you’re familiar with it; it seemed unfamiliar to you at first. I think it is a useful technology. I’ve thought about replacing it by adding similar functionality to code that has back ends for various graphics libraries like OpenGL. I wish it were supported to its fullest, by continuing to enable whatever is possible with new code. Perhaps that’s what NVIDIA has done; NVIDIA has always been good to Linux. Unfortunately, cross-platform code falls apart when things only work on so many systems with so many devices.
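For reference, enabling it looks something like this (a rough sketch; as far as I know, newer X servers ship with indirect GLX disabled by default, and on the client side Mesa’s LIBGL_ALWAYS_INDIRECT variable can force an indirect context):

# Server side: start the X server with indirect GLX enabled
# (the display number is just an example):
Xorg :0 +iglx

# Client side: force indirect rendering and confirm it took effect:
export LIBGL_ALWAYS_INDIRECT=1
glxinfo | grep "direct rendering"    # should report "No"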

When speaking of GL indirect rendering via GLX Protocol, the former (forwarding the OpenGL calls to the server) is correct. The latter (shipping a pre-rendered framebuffer across the network) is more what a VNC client does (such as RealVNC or TightVNC). For a list of some of these, free and otherwise, see: Comparison of remote desktop software (Wikipedia)

EDITED: I think possibly the distinction is that GLX may have been developed for Silicon Graphics workstations (among others), whereas Mesa (on Linux?) implemented a low-budget model on the client side? I don’t know. The world of non-consumer graphics is, perhaps, even more obscure.

https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_framebuffer_object.txt

There is a GLX Protocol section for framebuffer_object. The text file is huge, though. It gives me some hope. If it doesn’t work on Windows, I suppose that’s just down to the X server.

I think this is what enables “render targets”, i.e. render-to-texture.
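For anyone following along: in OpenGL terms, render-to-texture is the framebuffer-object path, something like the sketch below (my own illustration, using the EXT entry points since those match the glxproto.h opcodes that follow; every one of these calls would need GLX protocol defined for it to work indirectly).

/* Sketch (illustrative only): minimal render-to-texture setup using the
 * EXT_framebuffer_object entry points.  Assumes they are declared and
 * resolved (e.g. via GLEW or glXGetProcAddress); cleanup omitted. */
#include <GL/gl.h>
#include <GL/glext.h>

GLuint make_render_target(GLsizei w, GLsizei h, GLuint *out_tex)
{
    GLuint tex, fbo;

    /* Color texture that will receive the rendering. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Framebuffer object with the texture attached as the render target. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* This query corresponds to one of the opcodes listed below
     * (X_GLvop_CheckFramebufferStatusEXT). */
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
            != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;

    *out_tex = tex;
    return fbo;   /* render with this bound, then sample tex in a later pass */
}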

/* 310. GL_EXT_framebuffer_object */
#define X_GLvop_IsRenderbufferEXT                      1422
#define X_GLvop_GenRenderbuffersEXT                    1423
#define X_GLvop_GetRenderbufferParameterivEXT          1424
#define X_GLvop_IsFramebufferEXT                       1425
#define X_GLvop_GenFramebuffersEXT                     1426
#define X_GLvop_CheckFramebufferStatusEXT              1427
#define X_GLvop_GetFramebufferAttachmentParameterivEXT 1428

EDITED: The opcodes for these are near the very bottom of glxproto.h (xorg/proto/glproto, the X.org GLProto protocol headers, mirrored from https://gitlab.freedesktop.org/xorg/proto/glproto)… it looks limited. I think the extension text says these are sent separately from GLXRender or GLXRenderLarge. Maybe the others are defined elsewhere.

Close, but not quite. Those actually appear to be just the vendor-specific opcodes for query-related operations, and even then for EXT_framebuffer_object, not the ARB_framebuffer_object or core versions (though there’s clearly some sharing going on between the ARB and EXT opcodes).

However, the “GLX Protocol” section in the ARB and EXT extension specs lists all the opcodes and packet formats required. It just appears that glxproto.h hasn’t been updated to collect all of these, so it doesn’t reflect the latest.

Excerpt from the ARB extension’s “GLX Protocol” section:

For the record, I found out today that Vulkan drivers don’t exist for my Intel chipset. I don’t know if Vulkan is designed to be written against directly by application code. If so, it’s realistically a decade away from being ripe for use by everyday developers with a desire to satisfy as many users as can be.

EDITED: Sorry to bump. I thought nested replies would not be added to the end of the topic. I’m learning.

I’m not sure what Vulkan would be designed to be written against if not “application code”. Low-level programming isn’t for everyone, and many users will just use an engine that itself uses Vulkan. But why isn’t that engine “application code”?

And to be honest, I would not expect any OpenGL code to work on any hardware not still supported by its IHV. Especially for Intel drivers, whose bugs are legion. So if I were developing “with a desire to satisfy as many users as can be,” I would consider “as can be” to exclude those whose hardware is no longer supported. There’s simply no other way to develop an OpenGL application that you expect to be widely deployed.

OpenGL is not, for practical purposes, a “write once, run anywhere” API.

This is off-topic, but just FWIW, by “application code” I mean an application written directly against the Vulkan APIs. If it uses more portable middleware that hides Vulkan as an implementation detail, then that is NOT writing against Vulkan at the application level.

OpenGL is not, for practical purposes, a “write once, run anywhere” API.

Well, I think if a portable C application can’t reasonably be built with OpenGL, then OpenGL should be relegated to back ends. But many do write OpenGL code that is relatively portable, at least on desktop. It’s really too difficult, though; it should be more straightforward. If it’s too burdensome to use, it won’t be used. I mean that as a hard law of physics standing between it and user adoption and relevance to vendors, which can explain why Intel may neglect it.