remote opengl: windows or linux?

I am not sure where this even belongs – but I (and others) have been trying to get remote OpenGL through the X server working (again). It appears that this used to work (I thought it was working), but that was when Xgl was used to forward OpenGL from a remote X client to my local X server (running on Windows), which used DirectX or OpenGL calls to drive the local computer's accelerated graphics hardware.

It seems that the last time this worked correctly for me was when Xgl was still present in X.org's SW distribution, and that the ability to do this disappeared about the same time as HW-specific DRI drivers were introduced on Linux – drivers that talk to (or expect to talk to) local NVIDIA HW on the Linux computer. I.e., NOT the NVIDIA HW already present on the remote "viewstation": an X server running on Windows using its local 3D-acceleration solution (OpenGL or DirectX talking to the local HW, an NVIDIA GTX 590).

Now, with 'client' programs on Linux, it seems that with Xgl removed from X.org, the AIGLX path on Linux should somehow be forwarding the OpenGL commands to the remote display server – but AIGLX comes with "DRI" drivers for direct support of HW on the Linux machine. The Linux client (where the programs run) expects to forward display commands to a display server.
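
(For reference, my understanding is that indirect GLX – GL commands serialized over the X connection and executed by the server that owns $DISPLAY – is requested client-side with a Mesa environment variable, not with Xgl. A minimal sketch; "win-host:0" is a placeholder for the Windows X server's address, not a real host:)

```shell
# Request indirect GLX from Mesa's libGL: GL calls are encoded into the
# GLX wire protocol and executed by whatever X server owns $DISPLAY,
# rather than being rendered locally through a DRI driver.
export LIBGL_ALWAYS_INDIRECT=1

# $DISPLAY must name the remote display server; "win-host:0" is a
# placeholder for the Windows X server's address.
export DISPLAY="${DISPLAY:-win-host:0}"

# With these set, glxinfo should print "direct rendering: No", and the
# "server glx" strings describe what the remote server advertises, e.g.:
#   glxinfo | grep -E 'direct rendering|server glx vendor'
```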

I have my display server set up with accelerated 3D graphics: a Win7-64 workstation with accelerated 3D supplied by its NVIDIA GTX 590 HW.

I don't see how AIGLX, having replaced Xgl, forwards OpenGL operations to the remote accelerated X+GLX server.

I keep seeing references to using a local Linux DRI driver (which talks to 3D drivers locally on the Linux client), OR, failing that, a local "swrast_dri" that emulates the 3D in SW on the client and ships frame buffers to the display server. That ISN'T accelerated GLX: the 3D is simulated, and full frame buffers are sent to the remote X+OpenGL server for display.

This would seem to entirely circumvent the display server's accelerated 3D HW – and, overall, provide no accelerated 3D graphics.
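
(That software path can also be forced explicitly, which is a way to tell it apart from real indirect GLX – a sketch using Mesa's standard variable:)

```shell
# Force Mesa's software rasterizer -- this is the "swrast" mode described
# above: all 3D is computed in software on the Linux box and the finished
# pixels are shipped to the display server. No GPU on either end is used.
export LIBGL_ALWAYS_SOFTWARE=1

# Under this mode the renderer string reports a software renderer
# ("Software Rasterizer" or "llvmpipe") instead of real hardware:
#   glxinfo | grep 'OpenGL renderer'
```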

When I run an OpenGL program on the Linux client, I see full-window refreshes being sent to my local X server – which slows all other locally displayed X clients to a crawl (updates every few to several seconds – i.e., not so great for a remotely running 'gvim' or 'xosview' or 'xterm' that takes interactive, low-latency X as a given).

Some (on the Xwin list) have suggested that I use LD_LIBRARY_PATH to replace X client libs on the Linux client, so that some form of GLX_ALWAYS_INDIRECT in the client's environment injects 3rd-party rendering libs. Those libs are normally used to forward 3D-accelerated graphics to a 3rd, GPU-only compute resource, which then ships the finished frames to the local X server on Windows for frame-by-frame display – again ignoring the X server's local 3D HW. Others claim that frame-buffer rendering on the X server is the only "accelerated" [sic] solution available.
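
(A sketch of what I understand that library-override suggestion to mean. The path "/opt/alt-gl/lib" is a placeholder, not a real package location – the point is that the dynamic linker finds a replacement libGL there before the stock one, so GL calls are intercepted. VirtualGL uses a similar interposition idea, though via LD_PRELOAD rather than LD_LIBRARY_PATH. Note also that the variable is spelled LIBGL_ALWAYS_INDIRECT in Mesa, not GLX_ALWAYS_INDIRECT:)

```shell
# Put a replacement libGL ahead of the system one on the linker search
# path; "/opt/alt-gl/lib" is a hypothetical location for such a library.
export LD_LIBRARY_PATH="/opt/alt-gl/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Mesa's actual spelling of the "always indirect" switch:
export LIBGL_ALWAYS_INDIRECT=1
```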

On the openSUSE list, I've only seen others who were able to get full-frame-buffer refreshes working, with the 3D operations emulated in SW on the client (but no reports of the local X server using its local 3D-accel HW).

On the Cygwin/X server list, I've been told that remote frame-buffer rendering is "it" and is not a bug, and that if it doesn't work I should submit a specific use case for debugging – which misses the point of using local 3D accel for the 3D-dependent "compositing" (transparency, blending) that many (or most) remote desktops use as part of their normal functionality.

Note that Windows-to-Windows Remote Desktop Protocol (RDP) clients do not support this either, and disable 'Aero' effects. I.e., they disallow remote usage of the local 3D-accel abilities.

Can anyone explain how remote GLX is supposed to work these days?

Is the Linux NVIDIA driver supposed to handle 'remoting' of commands to my local NVIDIA HW?

Running glxinfo on the client displays my local NVIDIA HW via GLX:
Ishtar:law> export LIBGL_DEBUG=true
Ishtar:law> glxinfo |&more
libGL: Can’t open configuration file /home/law/.drirc: No such file or directory.
libGL error: failed to load driver: swrast
libGL error: Try again with LIBGL_DEBUG=verbose for more details.
name of display:
display: screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer, GLX_OML_swap_method,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGI_make_current_read, GLX_SGI_swap_control
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions:
GLX_ARB_create_context, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_framebuffer_sRGB, GLX_ARB_get_proc_address, GLX_ARB_multisample,
GLX_EXT_create_context_es2_profile, GLX_EXT_fbconfig_packed_float,
GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context,
GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer,
GLX_MESA_multithread_makecurrent, GLX_MESA_swap_control,
GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGIS_multisample,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group,
GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_copy_sub_buffer,
GLX_MESA_multithread_makecurrent, GLX_OML_swap_method,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGI_make_current_read, GLX_SGI_swap_control
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 590/PCIe/SSE2
OpenGL version string: 1.4 (4.4.0)
OpenGL extensions:
GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program,
GL_ARB_fragment_program_shadow, GL_ARB_imaging, GL_ARB_multisample,
GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters,
GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_texture_border_clamp,
GL_ARB_texture_compression, GL_ARB_texture_cube_map,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two,
GL_ARB_texture_rectangle, GL_ARB_transpose_matrix, GL_ARB_vertex_program,
GL_ARB_window_pos, GL_ATI_draw_buffers, GL_ATI_texture_mirror_once,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays,
GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal,
GL_EXT_secondary_color, GL_EXT_separate_specular_color,
GL_EXT_shadow_funcs, GL_EXT_stencil_two_side, GL_EXT_stencil_wrap,
GL_EXT_texture3D, GL_EXT_texture_compression_dxt1,
GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_add, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_lod, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp,
GL_EXT_texture_object, GL_EXT_texture_rectangle, GL_EXT_vertex_array,
GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate,
GL_NV_blend_square, GL_NV_depth_clamp, GL_NV_fog_distance,
GL_NV_fragment_program2, GL_NV_fragment_program_option,
GL_NV_light_max_exponent, GL_NV_multisample_filter_hint,
GL_NV_point_sprite, GL_NV_texgen_reflection,
GL_NV_texture_compression_vtc, GL_NV_texture_env_combine4,
GL_NV_texture_rectangle, GL_NV_vertex_program2_option,
GL_NV_vertex_program3, GL_SGIS_generate_mipmap,
GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp,
GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow,
GL_SUN_multi_draw_arrays, GL_SUN_slice_accum

98 GLX Visuals

323 GLXFBConfigs:

I.e. glxinfo thinks it is using:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 590/PCIe/SSE2
OpenGL version string: 1.4 (4.4.0) (OpenGL 4.4?)
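
(For what it's worth, the dump above looks internally consistent with indirect GLX: "direct rendering: No", a failed swrast load, and an NVIDIA renderer string together suggest the OpenGL strings are coming from the remote server rather than from a local driver. A small sketch that classifies a glxinfo dump this way – the function name is my own, not a standard tool:)

```shell
# Classify a glxinfo dump read on stdin. "direct rendering: Yes" means a
# local DRI driver is rendering; otherwise the GL commands go over the GLX
# wire to the X server named by $DISPLAY, so the OpenGL renderer string
# describes the *server's* GL implementation.
classify_glx() {
    if grep -q 'direct rendering: Yes'; then
        echo "direct (local DRI driver)"
    else
        echo "indirect (rendered by the server at DISPLAY)"
    fi
}

# Usage:  glxinfo | classify_glx
```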

Can anyone explain how this should be working now, or even where to post this question? (It really doesn't fall "exactly" under any of the OpenGL forum subjects: it's not really coding, but it is platform non-specific, which is the opposite of how the HW forums are laid out.)

HELP??? :~}