GLIntercept 0.4 - Shader edit / free camera + more

The output really looks pretty cool. But I don’t develop on Windows…

Is anybody working on a Linux port? I see it on the list, but pretty far down. :wink:

LOL, I thought this thread was new!

Anyway, very cool program. I ran it and found out that GL_VERTEX_WEIGHT_ARRAY_EXT is an invalid enum… I don’t know why; it worked back when I used to use it.

The FreeCam doesn’t work, however. I place the opengl.dll and freecam.ini (with the appropriate names, of course) into my exe folder and run my program, but CTRL+SHIFT+C doesn’t activate the cam. The shader editor works with CTRL+SHIFT+S, so I don’t know why the freecam won’t go.

Aaron

I edited the freecam.ini and found that the freecam.dll entry was remmed out. I un-remmed it and then it worked fine.
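
For anyone hitting the same thing, the change was something like this (illustrative only - I’m writing the key name from memory, so check your own freecam.ini for the exact syntax):

; Before: the plugin line was remmed out, so the camera never loaded
; (the "PluginDLL" key name here is illustrative)
;PluginDLL = "freecam.dll"

; After: un-remmed, and CTRL+SHIFT+C activates the cam
PluginDLL = "freecam.dll"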

Very cool program. I will use it a lot :slight_smile: .

Aaron

harsman:

  1. Does the shader say it compiled correctly? (By default the 3Dlabs validator is run first and then the compiler, so be sure to scroll the output window.)

  2. What happens if you try to “revert” a recompiled shader? (select the shader and hit the revert button)

  3. Is the program fragment+vertex, fragment only, or vertex only?

  4. Are there any errors in gliLog.txt?

  5. Most importantly: what card/driver are you using? (There were some driver bugs on ATI with GLSL last time I tried - Nvidia seems OK, and I have not tested 3Dlabs or others.)

weatx:

GL_VERTEX_WEIGHT_ARRAY_EXT is probably reported as invalid because the vertex weight extension has been dropped from later drivers.
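
If you still want to use it, guard it with a check against the extension string so the enum is only touched when the driver actually exports it - a minimal sketch:

/* Only enable the vertex weight array if EXT_vertex_weighting is present.
   (needs <string.h> for strstr) */
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
if (ext && strstr(ext, "GL_EXT_vertex_weighting"))
  glEnableClientState(GL_VERTEX_WEIGHT_ARRAY_EXT);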

dirk:
I am committed to a Linux port in the next version. The hardest part will probably be that I have to learn GTK for the shader editor modifications. Send me an email if you would be interested in testing it before I release it. (I am a Linux n00b.)

  1. Yes, it compiles correctly and runs in hardware.

  2. Then everything goes back to normal and I get the working, unmodified shader.

  3. fragment + vertex

  4. Diagnostic: Unknown function glMultiDrawArraysEXT being logged.
    Diagnostic: Unknown function glMultiDrawElementsEXT being logged.
    Diagnostic: Unknown function glPointParameterfEXT being logged.
    Diagnostic: Unknown function glPointParameterfvEXT being logged.
    Diagnostic: Unknown function glAreTexturesResidentEXT being logged.
    Diagnostic: Unknown function glBindTextureEXT being logged.
    Diagnostic: Unknown function glDeleteTexturesEXT being logged.
    Diagnostic: Unknown function glGenTexturesEXT being logged.
    Diagnostic: Unknown function glIsTextureEXT being logged.
    Diagnostic: Unknown function glPrioritizeTexturesEXT being logged.
    Diagnostic: Unknown function glArrayElementEXT being logged.
    Diagnostic: Unknown function glColorPointerEXT being logged.
    Diagnostic: Unknown function glDrawArraysEXT being logged.
    Diagnostic: Unknown function glEdgeFlagPointerEXT being logged.
    Diagnostic: Unknown function glGetPointervEXT being logged.
    Diagnostic: Unknown function glIndexPointerEXT being logged.
    Diagnostic: Unknown function glNormalPointerEXT being logged.
    Diagnostic: Unknown function glTexCoordPointerEXT being logged.
    Diagnostic: Unknown function glVertexPointerEXT being logged.
    Diagnostic: Unknown function glAddSwapHintRectWIN being logged.

I don’t actually use those; they show up because I’m using GLEW. Then it warns that I don’t delete the shader objects, but that’s just my test program being lazy.

  5. ATI Radeon 9500 Pro with the latest drivers. I’ll whip up code to reload the shaders myself today and see if that works; if it doesn’t, we know ATI is to blame :slight_smile:

OK, since you are on ATI it is probably the “unable to get uniform sampler values” bug explained in the readme.txt. When I “re-compile” I create a new program, load in the new source and set the attribute locations. I then bind the new program and copy across the uniform values (wherever possible) whenever the old program is used. Since ATI has a bug where you cannot get the sampler values back from the original GLSL program, all the samplers in the new program will sample from texture stage 0.
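
Roughly, the copy step looks like this (a simplified sketch of the idea, not the actual GLIntercept code - only one uniform type is shown):

/* Copy uniform values from the old GLSL program into the recompiled one,
   using the ARB_shader_objects entry points of this era. */
void CopyUniforms(GLhandleARB oldProg, GLhandleARB newProg)
{
  GLint count = 0;
  glGetObjectParameterivARB(oldProg, GL_OBJECT_ACTIVE_UNIFORMS_ARB, &count);

  for (GLint i = 0; i < count; ++i)
  {
    char name[256];
    GLsizei len = 0; GLint size = 0; GLenum type = 0;
    glGetActiveUniformARB(oldProg, i, sizeof(name), &len, &size, &type, name);

    GLint oldLoc = glGetUniformLocationARB(oldProg, name);
    GLint newLoc = glGetUniformLocationARB(newProg, name);
    if (oldLoc < 0 || newLoc < 0)
      continue;

    if (type == GL_FLOAT_VEC4_ARB)  /* other types handled similarly */
    {
      GLfloat v[4];
      glGetUniformfvARB(oldProg, oldLoc, v); /* fails for samplers on the buggy ATI drivers */
      glUseProgramObjectARB(newProg);
      glUniform4fvARB(newLoc, 1, v);
    }
  }
}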

Does editing the shader to output a flat color (i.e. gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);) work?

That’s what the fragment shader does!

Here is the fragment shader:

void main(void){
  gl_FragColor=vec4(1.0, 0.0, 0.0, 1.0);
}

and the vertex shader:

void main(void){
  gl_Position=gl_ModelViewProjectionMatrix*gl_Vertex;
}

I’ve been out on errands, but I’m going to try reloading the shaders manually now, and if that works it’s probably something relating to samplers or uniforms.

Well, reloading manually in my app works, but I don’t touch any uniforms at all then (the shader doesn’t have any!). Maybe GLIntercept is using an invalid program object? But the compile messages say it’s valid… It’s a great tool anyway; I’ll just rely on reloading my shaders manually and use your editor :slight_smile:
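
For reference, my manual reload is basically this (a sketch with the file loading and error reporting stripped out; ‘src’ holds the new shader text):

/* Compile the new source; keep the old program if compilation fails. */
GLhandleARB ReloadShader(GLhandleARB prog, GLenum type, const char *src)
{
  GLhandleARB sh = glCreateShaderObjectARB(type);
  glShaderSourceARB(sh, 1, &src, NULL);
  glCompileShaderARB(sh);

  GLint ok = 0;
  glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
  if (!ok)
  {
    glDeleteObjectARB(sh);
    return prog;  /* keep using the old program */
  }

  GLhandleARB newProg = glCreateProgramObjectARB();
  glAttachObjectARB(newProg, sh);
  glLinkProgramARB(newProg);
  return newProg;
}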

It would be nice if we could put a marker in, so that it would be easier to compare our source code with what we see in the XML.

For example:

void RenderScene()
{
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();

  glMarker(1);
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();

  glMarker(2);

  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();
  glxxxxxxxxxxxxxx();

  glMarker(3);

  SwapBuffers();
} 

I could use the marker numbers to find things quicker.

Are you calling glGetTexImage when I call glBindTexture?
For render to texture, it just shows a transparent square.
Maybe it’s the alpha that is zero?

I need to see the textures, no matter what the alpha is.

For ATI cards, glGetTexImage on cubemaps still doesn’t work, so maybe you can write a workaround for this: just render a fullscreen quad for each face and glReadPixels it.

****, has this bug existed since Catalyst 1.0 or what?
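
Something along these lines would do it (a rough sketch - only the +X face is shown, and the corner vectors for each face need care to get the orientation right):

/* Read one cubemap face back by drawing it on a fullscreen quad and
   using glReadPixels. Assumes identity matrices and a face-sized viewport. */
void ReadCubeFacePosX(GLuint tex, int size, unsigned char *rgba)
{
  glViewport(0, 0, size, size);
  glEnable(GL_TEXTURE_CUBE_MAP);
  glBindTexture(GL_TEXTURE_CUBE_MAP, tex);

  glBegin(GL_QUADS);
    glTexCoord3f(1.0f, -1.0f, -1.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord3f(1.0f, -1.0f,  1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord3f(1.0f,  1.0f,  1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord3f(1.0f,  1.0f, -1.0f); glVertex2f(-1.0f,  1.0f);
  glEnd();

  glReadPixels(0, 0, size, size, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
  glDisable(GL_TEXTURE_CUBE_MAP);
}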

First, just a small update for the upcoming 0.42:
-With some help from harsman I have figured out what was going wrong with ATI cards in GLSL compiling (their uniform bug still exists, though).

-I have added a lot of “legacy” function definitions and updated the legacy interfaces to work with games like Half-Life.

Originally posted by V-man:
It would be nice if we could put a marker so that it would be easier to compare our source code with what we see in the xml

Yeah, I have thought about this before. I might add this to 0.5 (no promises).

Originally posted by V-man:
Are you calling glGetTexImage when I call glBindTexture?
For render to texture, it just shows a transparent square.
Maybe it’s the alpha that is zero?

I need to see the textures, no matter what the alpha is.

I call glGetTexImage only if the texture has not been saved previously or has been dirtied since the last save.
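
In pseudo-form the policy is something like this (just a sketch of the idea, not the actual GLIntercept code):

/* Save a texture image only when it is new or has changed since the last save. */
typedef struct { GLuint id; int saved; int dirty; } TexRecord;

void SaveTextureIfNeeded(TexRecord *t)
{
  if (t->saved && !t->dirty)
    return;  /* already on disk and unchanged - skip glGetTexImage */

  glBindTexture(GL_TEXTURE_2D, t->id);
  /* glGetTexImage(...) and write the image out here */
  t->saved = 1;
  t->dirty = 0;
}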

If you want to ignore the alpha, change from saving the textures as PNG to JPG.

There is also an ATI bug in p-buffer render to texture (not sure if it still exists) where they seem to generate a new texture ID when a p-buffer is bound to a texture (calling glGet to get the currently bound texture returns an unknown ID).
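
You can see it with a simple check like this (sketch):

/* After binding a p-buffer to the current texture (wglBindTexImageARB),
   query the binding back and compare against the ID the app bound. */
GLint boundID = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundID);
/* On the buggy drivers, boundID no longer matches the expected texture ID. */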

Originally posted by V-man:
For ATI cards, glGetTexImage on cubemaps still doesn’t work, so maybe you can write a workaround for this: just render a fullscreen quad for each face and glReadPixels it.
****, has this bug existed since Catalyst 1.0 or what?

I don’t know how long this bug has existed. I am reluctant to alter my code to work around it (especially in that manner - it would be easy to corrupt the GL state, and I try not to do anything that would alter the flow of the program).

I may start actively reporting these bugs as I find them to ATI and hope for a fix.

Originally posted by sqrt[-1]:

There is also an ATI bug in p-buffer render to texture (not sure if it still exists) where they seem to generate a new texture ID when a p-buffer is bound to a texture (calling glGet to get the currently bound texture returns an unknown ID).

I don’t know, because I don’t do glGet calls normally.
You obviously do a lot in GLIntercept, so it shows all the problem areas.

Never mind putting a workaround for the cubemaps.

I think I have one texture that is not showing in the XML.
If I have a line like

glBindTexture(GL_TEXTURE_2D, 2);

there is a small square showing the texture next to the number 2, but in one case I don’t see it.

Your tool is really nice, but I work on GNU/Linux. Is there any info about a GNU/Linux version?

He uses Windows messages for IPC, so I doubt it’s a small job to convert to *nix.
Anyway, GLIntercept is mainly useful for catching logical errors (incorrect enums, inefficient culling, etc.), so just test your application on Windows using GLIntercept to ensure you’re not making any logical errors in your GL code; then, if you get problems running on Linux, you know it’s a driver/environment issue rather than your application.

Originally posted by knackered:
… just test your application on Windows using GLIntercept to ensure you’re not making any logical errors in your GL code; then, if you get problems running on Linux, you know it’s a driver/environment issue rather than your application.
That assumes that you’re able and willing to compile and run your app under Windows, which is not always the case. :wink:

He mentioned that he wants to do the Linux port for the next version. I’m willing to help to actually make it happen, but I agree with you that it’s not going to be trivial. More help welcome…

See about 10 posts above for the Linux port info.
I will have at least the main logging/error-checking capability on Linux for the next version. The runtime shader editor may take a bit longer.

I use glLoadName(some-large-num) calls as markers. Great program!
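
For anyone copying the trick, it is roughly this (a sketch - the Draw* calls are placeholders; note that glLoadName needs a non-empty name stack, even outside selection mode):

glPushName(0);  /* once at startup, so glLoadName below is legal */

/* per frame: distinctive values show up in the GLIntercept log */
glLoadName(1000001);  /* marker: shadow pass starts here */
DrawShadowPass();
glLoadName(1000002);  /* marker: lighting pass */
DrawLightingPass();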

Originally posted by V-man:
It would be nice if we could put a marker so that it would be easier to compare our source code with what we see in the xml

I like the idea, but I would prefer the marker to be a string (makes it easier to actually understand what’s going on).

There are some functions that could be hacked to do it (use glListBase(1000000); to mark the next glCallLists() as a marker, or something like that), but the clean solution would be an extension. You could use your existing extension handling framework for initializing it, and you’d only have to call the functions when you’re actually running in the debugger.

It would probably be pretty simple to define, as it doesn’t have to do much, but it would be pretty useful for debugging things. This could also be used for profilers or stuff like that, to better differentiate between different phases/parts of the rendering.
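
For illustration, the application side could look something like this (all the names here are hypothetical, not the actual definition):

/* Load the hypothetical marker entry point. It would only be exported when
   running under the debugger, so a NULL check makes it zero-cost otherwise.
   (APIENTRY and wglGetProcAddress come from the Windows GL headers.) */
typedef void (APIENTRY *PFNGLSTRINGMARKERPROC)(const char *marker);
PFNGLSTRINGMARKERPROC glStringMarkerXXX =
    (PFNGLSTRINGMARKERPROC)wglGetProcAddress("glStringMarkerXXX");

if (glStringMarkerXXX)
  glStringMarkerXXX("begin shadow pass");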

Originally posted by dirk:
It would probably be pretty simple to define, as it doesn’t have to do much, but it would be pretty useful for debugging things. This could also be used for profilers or stuff like that, to better differentiate between different phases/parts of the rendering.
Just for the heck of it I wrote a definition of the extension. It’s here. For now this is just something quickly thrown together to have something to talk about, and it’s by no means close to being done.

But in the long run it would be nice if this or something like this was supported by all the different debuggers, and maybe even by the debugging versions of the drivers.

OK, strings would work for me, but I think there is no need for a GL specification because it’s not a graphics feature.

All that is needed is a slightly modified GL header (maybe call it gl_debug.h), plus a matching lib and DLL.

>I use glLoadName(some-large-num) as markers. Great program!

I want the string version more for obvious reasons.

V-Man: The latest ATI drivers (Cat 5.1) seem to fix the cubemap issue.