I want to use OpenGL as a renderer, on a PC without X (i.e. a cluster). Anyone know of any toolkits/libraries for this? It would obviously have to render to a “virtual” screen in memory. Normal off-screen rendering is not good enough, because I’ll still need X installed. It might sound crazy, but that’s what I want to do. At the least, I need some documentation on writing a “wrapper” for a window system; then I can write one myself.
That’s interesting. Unfortunately I don’t know of anything for what you want. But maybe, and hopefully, doing OpenGL on the framebuffer would be nice.
I don’t know if current OpenGL releases (Mesa and so on) support that, unless you write yourself a port of GLX that works on the framebuffer instead of on X.
But I would also be interested, so if anyone has good information about such a project, let us know!
Hope that’s interesting.
The problem is: to execute OpenGL calls you need a current GL context, which means you also need an open display and a window.
Before we go on discussing in the wrong direction: why don’t you just install X on your cluster PCs?
Maybe this helps:
On a cluster you can use a distributed OpenGL subsystem. You still need X11 to be installed, though. Just check the Chromium project for more information.
PS: It looks like ASGA (similar to the Chromium project; I was working on it a few years ago) is already dead.
One more: I’m not sure, but it looks like the Sony PlayStation 2 Linux distribution works with accelerated OpenGL from console mode (not from X11).
Thanks, I’ll check those out.
The framebuffer idea got me thinking… I know GTK apps can be made to run on the framebuffer (although I’m not entirely sure how?), and with GtkGLExt (http://gtkglext.sourceforge.net/) you can use OpenGL in your GTK apps - so in theory you could run OpenGL on the framebuffer. Has anybody tried this before?
It’s still not the real answer though; ideally (with a “virtual screen”) you would be able to specify outsize resolutions and produce directly rendered 3-megapixel pics.
One last point - I haven’t actually built a cluster yet - it’s in the plan - so installing X is not definitely out; I’d just like to keep things simple.
gtkglarea and GtkGLExt (if I’m not wrong about the second) won’t allow you to do that. But they do allow GTK programs to have a window which can be used for GL rendering; they can be of help for a modeler, for example.
If you’d like to keep things simple, just use X. Writing a GL framebuffer compatibility layer might be hard work and might never end. I’ve never heard of such a project, though.
OK, I guess I’m going to stick with X (for now at least).
But perhaps someone knows (even with X) how to render to an output larger than your display resolution? Can normal off-screen rendering do this?
I can’t believe no one here has heard of this. OSMesa is exactly what you want; we are using it in one of our imaging applications.
Now, it’s completely software rendered, but it allows you to “do OpenGL without X”. The maximum size of the drawing region is hard-coded, but it is easy to change by modifying header file constants and recompiling. I think they plan on removing the limit in a future release.
About your rendering to a region larger than the display resolution: I don’t think this is possible, even off-screen. The rendering must be tiled, and the result read back from the graphics buffer after each tile is rendered.
But it is possible with OSMesa, and this is what I was trying to get at when I explained that the maximum drawable region is hard-coded but changeable (which requires a recompile).
Thanks a lot! OSMesa is perfect!
I was already planning on using Mesa, as I wanted to use software rendering (i.e. so I don’t need expensive graphics cards).
I checked on www.mesa3d.org and it seems I just need to recompile Mesa with make linux-osmesa32 - easy as that!
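For anyone following along, the classic Mesa source tree of that era exposed the off-screen builds as plain make targets; `render.c` below is just a placeholder for your own program:

```shell
# In the unpacked Mesa source tree:
make linux-osmesa      # 8 bits per channel -> libOSMesa
make linux-osmesa32    # 32 bits per channel -> libOSMesa32 (the target above)

# Link against the off-screen library; no X libraries needed:
gcc render.c -lOSMesa -lm -o render
```

If you build the 32-bit-per-channel variant, link with `-lOSMesa32` instead.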
The current effort for hardware-accelerated OpenGL without X is called Mesa-solo. The intention is to have X on top of OpenGL, so as a side effect you can run any other GL program instead of X.
Maybe, apart from OSMesa, take a look at pbuffers. Google it.
To create a pbuffer you need a rendering context, and to get one he will need Mesa-solo or OSMesa anyway, since he wants to avoid X. AFAIK DirectFBGL is no longer maintained, and it works only on Matrox cards.
I do not think you can do anything like that without a GL context… so something more than a terminal is required.
Also, I do not really understand what you mean by “terminal”! If you want to get really crazy, you could render to a buffer and use the ASCII-art library (aalib) to dump it to a terminal - and I mean a “dumb” terminal. There was an extension to Quake that I ran across a long time ago which enabled you to play Quake on Linux with aalib, but you had to have Quake running on a console to get anywhere. It was nothing really great, more of a mental exercise for the programmer.