Hi,
I’m experimenting a bit with Go and OpenGL, trying to send 2 integers as input to my vertex shader so I can perform some bit operations on them there. I can’t get it to work, and I’m pretty sure it has to do with the number of bits that actually arrive in the shader. The shader compiles, but I can’t retrieve my information, as shown below.
I’m using a VBO with two int32 attributes per vertex, i.e. an 8-byte stride with one int at each 4-byte offset:
gl.GenVertexArrays(1, &w.vao)
gl.GenBuffers(1, &w.vbo)
gl.BindVertexArray(w.vao)
gl.BindBuffer(gl.ARRAY_BUFFER, w.vbo)
gl.BufferData(gl.ARRAY_BUFFER, len(w.vertices)*4, gl.Ptr(w.vertices), gl.STREAM_DRAW)
gl.VertexAttribPointer(0, 1, gl.INT, false, 2*4, gl.PtrOffset(0))
gl.EnableVertexAttribArray(0)
gl.VertexAttribPointer(1, 1, gl.INT, false, 2*4, gl.PtrOffset(1*4))
gl.EnableVertexAttribArray(1)
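// Resulting layout per vertex: 8 bytes in total, with attribute 0 (v1)
// at byte offset 0 and attribute 1 (v2) at byte offset 4.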
I then upload an []int32 slice to the VBO:
gl.BindBuffer(gl.ARRAY_BUFFER, w.vbo)
gl.BufferData(gl.ARRAY_BUFFER, len(w.vertices)*4, gl.Ptr(w.vertices), gl.STREAM_DRAW)
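For completeness, the data is packed roughly like this; packPoint is just an illustrative helper (not my exact code), but the channel positions match the shifts in the shader below:

func packPoint(r, g, b uint8) (v1, v2 int32) {
	// R goes in bits 31..24 and G in bits 23..16 of the first int,
	// B in bits 31..24 of the second. The uint32 intermediate avoids
	// signed-shift surprises before the final conversion.
	v1 = int32(uint32(r)<<24 | uint32(g)<<16)
	v2 = int32(uint32(b) << 24)
	return
}

// For each point:
// v1, v2 := packPoint(0xAB, 0xCD, 0xEF)
// w.vertices = append(w.vertices, v1, v2)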
And then I draw it using:
gl.BindVertexArray(w.vao)
gl.DrawArrays(gl.POINTS, 0, int32(len(w.vertices)/2)) // 2 ints per point, so half the slice length
The vertex shader looks like this:
#version 330 core
precision highp int;
precision highp float;
layout (location = 0) in int v1;
layout (location = 1) in int v2;
uniform mat4 model, view, projection;
out vec3 fragmentColor;
void main()
{
    // Get colors
    int r = (v1 >> 24) & 0xFF;
    int g = (v1 >> 16) & 0xFF;
    int b = (v2 >> 24) & 0xFF;
    fragmentColor = vec3(float(r), float(g), float(b));
    // ...
    // position etc...
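For reference, the fragment shader side is a simple pass-through along these lines (a sketch, not necessarily my exact shader; the division by 255.0 assumes the channels arrive as 0..255):

#version 330 core
precision highp float;
in vec3 fragmentColor;
out vec4 color;
void main()
{
    // Scale the 0..255 channel values down to 0..1 for output.
    color = vec4(fragmentColor / 255.0, 1.0);
}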
The problem is that I don’t get my R, G, B colors out of my ints in the shader. Unpacking the same 2 integers outside of the shader (in the Go program), these operations work fine.
So my guess is that even though I specify that I want high-precision integers, I don’t actually get them, and when I try to read the higher bits they are not set, due to 16-bit integer truncation.
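To illustrate, this standalone Go check (with hard-coded test values, not my real data) recovers the channels as expected, and also shows what a 16-bit round-trip would do to them:

package main

import "fmt"

func main() {
	var r8, g8 uint8 = 0xAB, 0xCD
	v1 := int32(uint32(r8)<<24 | uint32(g8)<<16)

	// Same bit operations as in the shader: both channels come back intact.
	fmt.Printf("r=%#x g=%#x\n", (v1>>24)&0xFF, (v1>>16)&0xFF) // r=0xab g=0xcd

	// If the value were truncated to 16 bits somewhere on the way to the
	// GPU, the high bytes would read back as zero.
	trunc := int32(int16(v1))
	fmt.Printf("truncated r=%#x\n", (trunc>>24)&0xFF) // truncated r=0
}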
Any idea what can be wrong here?
(I’ve tried passing float32, uint32, etc. Same problem…)