Shifting one bit to the left is the same as multiplying by two.
But I thought that only works if you are treating the values as raw numbers – unsigned integers. The GPU assumes they are floating-point values. Does it still work the same?

what’s the deal with this long, drawn out discourse in remedial math? you know the answer to this one, pal. im sure of it. after all, you’ve been given some code. if you don’t know the difference between integer and floating point math, you’re in trouble

if you don’t know the difference between integer and floating point math, you’re in trouble
I'm an artist with a background in computer science. I only got a B in Computer Engineering 111 and I never took a course above that. If the doom3 engine weren't underdeveloped, I wouldn't even have to ask these questions, but as it is, I have no choice but to teach myself pixel shading. The questions are valid. I was simply questioning whether that bit-shifting trick for multiplying really works on a GPU, because I was under the impression you could only do it with CPU instructions, when you tell the CPU to treat the numbers as non-floating-point (integer) values. Being handed code without much explanation is not the same as understanding it. So I questioned the details.

Also, I'm not interested in this only for the purpose of reproducing ADD_SIGNED2X in assembly. Doom3 meshes only support one set of texture coordinates. I have been entertaining the idea of getting more coordinates into the engine by storing two 16-bit floats for U and V in an RGBA bitmap and then reading that back in the fragment shader. But I would need to do some bit shifting to combine the 8-bit R with the 8-bit G to reconstruct each 16-bit coordinate.

I implemented ADDSIGNED2X and MODULATE4X in ARB. They both worked correctly. But I've come to realize it's not the "bit shifting" that's important – it's the basic arithmetic. And for that matter, I think the document that describes ADDSIGNED2X as shifting one bit left is incorrect. Unless someone cares to point out that bit shifting works the same with floats on the GPU as with raw integers, I don't think the multiply-by-2 operation on the GPU is shifting bits at all!