Complaints regarding GLSL limitations

I haven’t gotten that deep into GLSL but…

How about the ability to work with multiple vertices (lines, triangles) at one time? I’ve essentially been doing ‘vertex programs’ for a few years while doing space weather vis. It is necessary to render the data using a variety of projections. But it is not enough to merely transform individual points (which GLSL does); it is also necessary to clip primitives (lines, triangles) and render the different parts separately. And this clipping will be different depending upon the projection used. It would be great to move this to the graphics hw and free up the CPU as much as possible for the scientific computing.

The only geometry clipping that occurs at all is clipping by planes, which you can change via state (I’m not even sure it actually clips the geometry; it might just reject outside pixels or something…).

Dealing with multiple vertices at once won’t be here for a while (topology processing, and even then it’s not guaranteed to be GPU side). It is unrealistic to ask for something that no graphics card can do.
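For what it’s worth, there is a fragment-level workaround available in GLSL today: compute a signed “clip value” per vertex for whatever seam your projection has, and discard fragments on the wrong side. This rejects pixels rather than truly clipping the primitive, which matches the behavior mentioned above. A minimal sketch (the varying name and the idea of a per-projection seam distance are illustrative assumptions, not anything standard):

```glsl
// Fragment shader: per-projection "clipping" by pixel rejection.
// Unlike real geometry clipping, this does not re-tessellate the
// primitive; it only throws away fragments past the seam.
varying float ClipDist;   // signed distance to the seam, computed in the vertex shader

void main()
{
    if (ClipDist < 0.0)
        discard;          // reject fragments on the far side of the seam
    gl_FragColor = gl_Color;
}
```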

Sure, you can use another texture (if available), though it’s not as convenient; e.g. for an RGBA texture you use the one texture, not four textures. I also believe working with vertices isn’t flexible enough. E.g. wouldn’t it be great to supply a single vertex (as a base point) and then let the vertex program create a whole bunch of vertices from there? These are a couple of real limitations that I’ve run into over the last month (OK, the hardware at present might not be capable of it, but it’s got to be forward-thinking). The days of using graphics cards as plain old 3D renderers are over.

Originally posted by Adruab:
It is unrealistic to ask for something that no graphics card can do.
Then nothing new would ever happen!

Seriously - and I’m no low-level OGL implementor - how hard would it be to have, on the CPU side, a call like ‘glBegin( GL_VERTEX_PROGRAM )’ followed by all the usual calls to glVertex*, glTex*, etc.? Everything is transformed as usual down to the vertex program. By default, when using GL_VERTEX_PROGRAM, nothing would happen; the geometry would just fall off the end of the Earth. But if there is a vertex program loaded, allow the vertex program to call glBegin/End to define the primitive type. And then you’d need a new call - at the vertex program level - say ‘glShaderVertex*’. This would be kinda sorta like the glVertex* we know and love but would send the geometry ( texcoord yadda-yadda ) directly to the shader itself.

It seems to this uninformed brain that something like this should be doable on modern HW. No?

Originally posted by Foxbat:
[b][quote]Originally posted by Adruab:
It is unrealistic to ask for something that no graphics card can do.[/quote]
Then nothing new would ever happen!

It seems to this uninformed brain that something like this should be doable on modern HW. No?[/b]
Of course we all need to have “visions” about what we would like to have, that’s true.

But on current hardware your demands are impossible to satisfy. The pipeline is strictly “one in - one out”. With fragment programs it can also be “one in - none out”.
But that’s it. We all have to live with it.

Jan.

Originally posted by Jan:
float c = a.xyz * b.xyz;

This will do a dot3, instead of a dot4.

Correct me if I am wrong, but your c will only be a.x * b.x (if this code works at all), not the dot3 of a and b, because vec3 multiplication is componentwise.

[quote]Originally posted by Chris Lux:
[b]Correct me if I am wrong, but your c will only be a.x * b.x (if this code works at all), not the dot3 of a and b, because vec3 multiplication is componentwise.[/b][/QUOTE]
float c = dot(a.xyz, b.xyz);

Well, I thought about operator overloading, but I might have missed something.

Anyway, the point was to show that a dot3 is possible with 4-component vectors, without modifications to glSlang.

Jan.
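To make the distinction under discussion concrete, here is a minimal GLSL sketch of the componentwise multiply versus the two dot products (variable names and values are illustrative only):

```glsl
vec4 a = vec4(1.0, 2.0, 3.0, 4.0);
vec4 b = vec4(5.0, 6.0, 7.0, 8.0);

vec3  m  = a.xyz * b.xyz;      // componentwise multiply: (5.0, 12.0, 21.0)
float d3 = dot(a.xyz, b.xyz);  // dot3: 5.0 + 12.0 + 21.0 = 38.0
float d4 = dot(a, b);          // dot4: 38.0 + 4.0*8.0 = 70.0
```

Note that `float c = a.xyz * b.xyz;` is a type error (vec3 assigned to float), which is why the explicit dot() call is needed.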

Heheh, yes, I have no problem with implementing a level above the GPU on the CPU in order to get more flexible vertex operations. However, the question was focused on offloading the CPU. That would certainly not occur unless the operation was GPU accelerated.

As for GPU accelerating these things… right now the GPU is heavily pipelined and optimized for stream processing. This totally breaks if you start wanting random access to everything in the mesh (I believe it will be a while until topology processing is fully accelerated on GPU hardware). Simple stuff, like generating triangles from points, still fits the bill (along with other similarly simple cases). Anything too expensive will likely break the pipeline’s efficiency.

I might be misinterpreting something but that’s what it sounds like you want to do. Cheer up though :slight_smile: . A topology processor of some sort is planned for the next revision of DirectX, which means it will likely be added to OpenGL too in some form or another. The real question is if it will be completely GPU based, or only partly…

Originally posted by Chris Lux:
correct me if i am wrong, but your c will only be a.x * b.x (if this code does work), but not the dot3 of a and b, because vec3 multiplication is componentwise.

float c = a.xyz * b.xyz;

Shouldn’t even compile.
The Lvalue is a float, the Rvalue is a vec3.
But you are correct that it isn’t a dot. It’s a per component multiply.

The user-defined functions to do a dot3 are:

float dot3( vec4 x, vec4 y ) { return dot( x.xyz, y.xyz ); }
float dot3( vec3 x, vec3 y ) { return dot( x, y ); }

-mr. bill

I think gl_ModelMatrix would be useful (for cubemap lookups in environment mapping for example)…
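Until something like gl_ModelMatrix exists, the usual workaround is to upload the model (object-to-world) matrix yourself as a plain uniform. A sketch for a world-space cubemap reflection lookup; the uniform names here are assumptions, and the matrix is assumed to contain no non-uniform scale (otherwise the normal would need the inverse-transpose):

```glsl
// Vertex shader: world-space reflection vector for a cubemap lookup.
uniform mat4 ModelMatrix;    // set by the app with glUniformMatrix4fv
uniform vec3 WorldEyePos;    // camera position in world space, also app-supplied
varying vec3 ReflectDir;

void main()
{
    vec4 worldPos    = ModelMatrix * gl_Vertex;
    vec3 worldNormal = normalize(vec3(ModelMatrix * vec4(gl_Normal, 0.0)));
    vec3 viewDir     = normalize(worldPos.xyz - WorldEyePos);
    ReflectDir       = reflect(viewDir, worldNormal);
    gl_Position      = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

The fragment shader would then sample a samplerCube with ReflectDir.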

GLSL 1.1 is out! :wink:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/GLSLangSpec.Full.1.10.59.pdf

You beat me to it :slight_smile: . Hooray for extension semantics! :slight_smile:

hi,

Looking at the new version of the GLSL specs, I’m still stuck on section 4.1.9, last line:
“There is no mechanism for initializing arrays at declaration time from within a shader.”

Is there a trick to avoid the ugly array initialization:
array[0] = 1.0; array[1] = 0.5; array[2] = 0.2; …

If not, consider that a complaint :wink:

Collecting several responses here…

  • I am interested in community feedback for the next version of the language. Version 1.1 reflects some of this over version 1.0. Not all features were included in version 1.1 though, so that a quality standard could be delivered on schedule.

  • Arrays should become first class objects. This will bring initializers and other array features.

  • If you want to initialize an array in a shader, consider whether you can do something with a uniform array initialized by the application instead. It may perform better as well.

  • If you want

    uniform mat4 u;
    u.inverse;
    or
    inverse(u);

also consider doing this in the application. If you did it in the shader, you’d better hope the compiler notices it’s the same inverse each time and moves the work to the application side at uniform load time anyway. I’m sure compilers will eventually get that smart.:slight_smile:

  • An extension mechanism “#extension” is available now.

  • Future GL may virtualize texture objects and avoid much of the complexity of manually mapping them to units. If this happens, the API will bind texture objects to samplers instead of units. The language will remain as is.
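As a concrete sketch of the uniform-array suggestion above (the array name, size, and values are illustrative): declare the array in the shader and let the application fill it once, e.g. with glUniform1fv, instead of assigning each element inside the shader:

```glsl
// Fragment shader side: no initializer needed; the app loads the
// values once at uniform-setup time (e.g. {1.0, 0.5, 0.2} via glUniform1fv).
uniform float weights[3];

void main()
{
    vec3  c   = gl_Color.rgb;
    float lum = weights[0]*c.r + weights[1]*c.g + weights[2]*c.b;
    gl_FragColor = vec4(vec3(lum), 1.0);
}
```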

JohnK

There isn’t a new glGetUniformLocation for samplers?

And since, in the API, textures are treated as being bound to units (texture image units), while in the GLSL shader there is no indication of which texture should be bound to which unit, it can be a problem.

In my case, I scan the shader for “sampler1D”, “sampler2D”, “sampler3D”, and “samplerCube”, and assign texture unit numbers in the order in which the sampler declarations are made.

uniform sampler2D This; // unit 0
uniform samplerCube That; // unit 1
uniform sampler3D TheOther; // unit 2

And in case the user wants all three to be bound to the same unit, that’s a problem.

It would be simpler if GL were changed not to allow this. What’s the point of it if one can’t access all three?

Originally posted by V-man:
[b]
uniform sampler2D This; // unit 0
uniform samplerCube That; // unit 1
uniform sampler3D TheOther; // unit 2

And in case the user wants all three to be bound to the same unit, that’s a problem.

It would be simpler if GL were changed not to allow this. What’s the point of it if one can’t access all three?[/b]
The fundamental behavior of how textures are bound to texture targets on a texture image unit has not changed. Before GLSL came along you were not able to access textures bound to different targets on the same texture image unit at the same time. You have to disable one target and enable another. If multiple targets are enabled, the “highest” enabled one wins. In GLSL the texture enable state is ignored. Instead, you use samplers of different types (sampler2D etc) to select the texture target. What hasn’t changed is that multiple targets cannot be active at the same time.

I don’t think this is such a big deal anymore, current hardware supports 32 texture image units. If you need access to textures of different dimensionality, just use another texture image unit.

Barthold

current hardware supports 32 texture image units.
Whose current hardware?

As I was looking through the new glslang spec, I noticed the following line:

The goal of this work is a forward looking hardware independent high-level language that is easy to use and powerful enough to stand the test of time and drastically reduce the need for extensions.
Anyone get the idea that the ARB failed miserably at this? The language isn’t much more forward looking than Cg, and it’s more complicated and less easy to use (partially due to bonehead language decisions that go against established C/C++).

In GLSL the texture enable state is ignored. Instead, you use samplers of different types (sampler2D etc) to select the texture target. What hasn’t changed is that multiple targets cannot be active at the same time.

I don’t think this is such a big deal anymore, current hardware supports 32 texture image units. If you need access to textures of different dimensionality, just use another texture image unit.

Barthold
I think this is GL’s fault (1.0 or 1.1) for allowing multiple texture types to be bound to the same unit. It’s never been useful.

Unless it is useful for accessing textures from a vertex shader. What happens when one tries to access a 2D texture from the VS and a 1D texture from the FS on the same unit? Is it possible on the fancy NV40?

But having a function that returns a list of sampler names and locations - that would be useful.
