You don’t share the context. You give each window a separate context, then share the data (textures, shaders, buffers, etc.) between contexts by setting the [var]shareList[/var] parameter of glXCreateContext() to an existing context (for the first context, which has nothing to share with, pass None).
When a group of contexts share data, objects created by glCreateShader, glCreateProgram, glGenTextures, or glGenBuffers are shared across all contexts in the group. This doesn’t apply to all object types, only those with what OpenGL considers “data” (as opposed to “state”). “Container” objects (e.g. VAOs and FBOs) aren’t themselves shared, even if the contained objects (VBOs and textures/renderbuffers) are. When an object is shared, both its data and state are shared; e.g. for a texture, both the pixel arrays (glTexImage) and the parameters (glTexParameter) are shared.
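As a sketch, assuming the usual GLX boilerplate (display, visual, and windows already set up; error handling elided):

```c
#include <GL/gl.h>
#include <GL/glx.h>

/* Create two contexts that share data: the first passes None as
 * shareList, the second passes the first context. */
GLXContext create_contexts(Display *dpy, XVisualInfo *vis,
                           GLXContext *second)
{
    GLXContext first = glXCreateContext(dpy, vis, None, True);
    *second = glXCreateContext(dpy, vis, first, True);
    return first;
}

/* A texture created in one context is then usable in the other. */
void demo(Display *dpy, GLXDrawable win1, GLXDrawable win2,
          GLXContext ctx1, GLXContext ctx2)
{
    GLuint tex;
    glXMakeCurrent(dpy, win1, ctx1);
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* ... glTexImage2D, glTexParameteri, etc. ... */

    glXMakeCurrent(dpy, win2, ctx2);
    glBindTexture(GL_TEXTURE_2D, tex);  /* same name, same data */
    /* Note: a VAO or FBO would have to be created per context,
     * even though the buffers/textures it references are shared. */
}
```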
However: it’s unspecified whether you can share data between contexts which refer to different screens. If all screens are driven by the same video card, there’s no fundamental reason why that can’t work (if it doesn’t, you may be able to use Xinerama/TwinView/etc to merge multiple monitors into a single X “screen”). If different screens use different video cards, then it probably won’t work.
The most flexible solution is to allow for either case. When creating a context, try to share data with the existing contexts, but allow for the case where you can’t. Windows would be collated into groups whose contexts share data, and each shader, texture, buffer, etc would be uploaded to each group.
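The bookkeeping for that could be sketched like this (the `window` struct and `assign_group` function are hypothetical, not part of any API; the predicate stands in for "creating a context sharing with that group succeeded"):

```c
#include <stddef.h>

#define MAX_GROUPS 8

/* Hypothetical bookkeeping: each window records which share group
 * its context belongs to; each shader/texture/buffer is then
 * uploaded once per group rather than once per window. */
struct window { int group; };

/* Stand-in for "glXCreateContext with shareList succeeded". In real
 * code this is a GLX call that may fail across screens/cards. */
typedef int (*can_share_fn)(int group_screen, int screen);

/* Assign a window on `screen` to a share group, starting a new
 * group when it cannot share with any existing one. Returns the
 * updated group count. */
int assign_group(struct window *w, int screen,
                 int *group_screens, int ngroups,
                 can_share_fn can_share)
{
    for (int g = 0; g < ngroups; g++) {
        if (can_share(group_screens[g], screen)) {
            w->group = g;             /* join an existing group */
            return ngroups;
        }
    }
    group_screens[ngroups] = screen;  /* start a new group */
    w->group = ngroups;
    return ngroups + 1;
}
```

With a predicate like "sharing only works within one screen", three windows on screens 0, 0, 1 end up in two groups, so each texture or shader would be uploaded twice rather than three times.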
All the screens are driven by the same video card, so that shouldn’t be a problem. I forced the system to create the screens separately because I didn’t want to worry about a technician changing the resolution of one screen. (When they are all on a single screen, I have to calculate the [x,y] of the upper-left corner for the create-window function. With separate screens, I just open a window at [0,0] of each screen.)
I integrated the more efficient text shader code into my load, and it works.
It seems I must call glUseProgram every cycle before I can call render_text. Does the shader somehow get lost if I do regular glBegin/glEnd blocks? If I move the glUseProgram up a level (trying to call it only once) and then execute a glBegin/glEnd block, the screen doesn’t draw at all. The documentation says the program will be installed as part of the current rendering state. Does that not span buffer swaps?
The current program shouldn’t change other than by calls to glUseProgram(). But the process of rendering a complete frame typically requires multiple programs, so it’s normal to call that function prior to the rendering operations which use the program. It’s sufficiently uncommon to leave a single program active across multiple frames that if it was getting reverted by e.g. a buffer swap (it shouldn’t be), the chances are that no-one would have noticed.
Roughly speaking, for a program which displays a single “scene”, objects such as programs, textures, buffers, etc are created once at startup, then “activated” (glUseProgram, glBindTexture, etc) at appropriate points during the process of rendering a frame.
Also, it’s quite common to revert such state changes (e.g. glUseProgram(0) etc) before the end of the function which made them, in order to avoid potentially-confusing consequences from “left over” settings. OpenGL has a lot of state, and there’s a tendency to assume that any state is at its default value unless you can actually see a function which changes it.
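Put together, a frame might look like the sketch below (draw_frame and render_text are placeholder names; this assumes a legacy/compatibility context where glBegin/glEnd is available). Note that while a program is bound, glBegin/glEnd geometry goes through that program rather than the fixed-function pipeline, which is one plausible reason the screen stops drawing when the program is left active:

```c
#include <GL/gl.h>

/* text_prog is assumed to have been created once at startup. */
void draw_frame(GLuint text_prog)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Fixed-function drawing: make sure no program is bound. */
    glUseProgram(0);
    /* ... glBegin/glEnd blocks ... */

    /* Shader-based text: activate the program right before use. */
    glUseProgram(text_prog);
    /* ... render_text(...) ... */

    /* Revert before returning, so later code isn't surprised by
     * "left over" state. */
    glUseProgram(0);

    /* glXSwapBuffers(dpy, win) happens in the caller. */
}
```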