floatBitsToUint question: does the GPU have any way to have a real 0, not a floating-point 0, at runtime?

Maybe this is not a bug and just my misunderstanding, I don't know…

Shader with the bug: Shadertoy link

Correct result: white screen

Actual result, on Nvidia:
in OpenGL mode: left black, right white
in ANGLE (DX11): black output; removing +val_0 makes floatBitsToUint work
in Vulkan: same as in OpenGL

Shader code, copied from Shadertoy:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    int tval=min(0,iFrame); //always 0; edit it to int tval=0; to see what this bug is about
    int idx=0;
    vec2 uv = fragCoord/iResolution.xy;
    uint val=0u;
    //not equal: white if they are equal, black if they are not
    vec3 col = vec3(float(val&0xffu)/256.);
    if((idx!=1)){ //idx==1 at this point would be incorrect (or red color)
        if((0.*float(tval<-1?1:0+0*(idx++)))!=(1.*float(tval<-1?1:0+0*(idx++)))){ //they are equal (or green color)
            fragColor = vec4(col,1.0);
        }
    }
}

1.*float(tval<-1?1:0+0*(idx++)) returns "not 0" and makes floatBitsToUint generate a broken result,
while 0.*float(tval<-1?1:0+0*(idx++)) returns 0 (in OpenGL and Vulkan) and makes floatBitsToUint return the correct result.

It’s not clear what this code is actually supposed to mean or do. In particular, what is tval<-1?1:0+0*(idx++) supposed to accomplish?

OK, let’s try that with a bit more detail: why do you believe that this expression should only result in that? What do you believe the value of this expression ought to be, and why do you believe this? Break the expression down into its component parts and explain why you think that’s what will happen.

I’m not saying you’re wrong, but when dealing with deliberately obfuscated code, you need to show your work.

I don't understand.
You can just open the Shadertoy link and see for yourself.

The shader code is obvious, I think.
The first line,
int tval=min(0,iFrame); //0 always
means tval is always 0 (iFrame is never negative),
so if you edit it to int tval=0; the result must be the same.

With 0, the visual result is a fully white screen;
with min(0,iFrame); it is not white.

The rest of the shader code exists to display that the result is not the same when tval is set to 0 at runtime.

Okay, I think I understand:

The 0.* operation is removed by the compiler/optimizer, and all that is left is the idx++, which executes correctly.

1.*float(tval<-1?1:0+0*(idx++)) is calculated at runtime and produces a float 0., and that has an impact on the float bits.

And so the comparison is false: the two sides are equal,

because the compiler/optimizer replaces 0.*… with a literal float 0. at this point.

So there is no solution to "have a real 0, not a float 0" for the floatBitsToUint function; a float 0. ruins the float bits.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.