integer textures

hi,

I’d like to know how to build integer textures correctly and which format/type combinations are possible.

This is how I build my integer texture. GL doesn’t throw an error, so I assume that’s correct …

	glCreateTextures(GL_TEXTURE_2D, 1, &texture);
	glTextureStorage2D(texture, 1, GL_R32I, width, height);
	glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTextureParameteri(texture, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTextureParameteri(texture, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTextureParameteri(texture, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

The problem is that I can’t fill the texture with values without getting GL errors:

static vector<GLint> texture_data(width * height, -1);
glTextureSubImage2D(texture, 0, 0, 0, width, height, GL_R32I, GL_INT, texture_data.data());

I’m getting GL_INVALID_OPERATION; the debug output says:

OpenGL Debug Message:

ID: 1282
Source: API
Type: Error
Severity: High
Message: GL_INVALID_OPERATION error generated. Texture type and format combination is not valid.

So my questions:

  1. when do I have to use GL_RED_INTEGER?
  2. which pairs of GL parameters work for creating integer textures (say, a 1-component 32-bit int and a 2-component 32-bit unsigned int, just to get the idea)?

my assumption:
–> the “type” must be GL_R32I or GL_RG32UI

… thanks in advance!!

When you’re populating the red channel of a (signed or unsigned) integer texture with data.

The internalformat parameter to glTextureStorage* or glTexImage* should be GL_R32I or GL_RG32UI, respectively. The format parameter to glTexImage*, glTexSubImage*, etc. should be GL_RED_INTEGER or GL_RG_INTEGER, respectively. The type parameter should match the data (GL_INT or GL_UNSIGNED_INT for 32-bit signed or unsigned integers).

That’s the internal format, which needs to be supplied when you initialise the texture (glTexImage* or glTextureStorage*). When you’re uploading data, you need to specify the format of the data you’re supplying; for an integer texture, that must be one of the GL_*_INTEGER formats. The number of channels doesn’t need to match the internal format; excess channels are ignored, missing channels default to (0,0,0,1). The type parameter must match the supplied data in order to get meaningful behaviour, but isn’t otherwise constrained except that “packed” types must have the correct number of components for the format parameter.

thanks again!! :slight_smile:

Currently I want to implement order-independent transparency, so I need a “start index” integer texture that tells me which fragment to start the depth sorting from …

struct Fragment {
    int NextIndex;
    uint RGBA;
    float Depth;
    int Unused; // unused for now
};

My fragment shader gives me the following error message:

0(75) : error C1317: qualified actual parameter #1 cannot be converted to less qualified parameter (“im”)

The line is this one, marked // ERROR in the full shader below:

#version 450 core

//layout (early_fragment_tests) in;


struct Material {
    vec3 Kd; // diffuse
    vec3 Ks; // specular
    float Ns; // shininess
};

struct DirectionalLight {
    vec3 Intensity;
    vec3 Direction;
};


in VS_FS_INTERFACE {
    vec3 position;
    vec3 normal;
} vertex;

// example material and other lighting properties
uniform Material material = Material ( vec3(0, 0, 0), vec3(0, 0, 0), 0 );
uniform vec3 AmbientIntensity = vec3(0, 0, 0);
uniform vec3 CameraPosition = vec3(0, 0, 0);


struct Fragment {
	int Next;
	uint Color;
	float Depth;
	int unused;
};

layout (std430, binding = 1) buffer FragmentBufferBlock { Fragment Fragments[]; };

layout (binding = 1, r32i) uniform readonly iimage2D StartIndices;

layout (binding = 1) uniform atomic_uint FragmentCounter;



void main ()
{
    // material
    vec3 Kd = material.Kd;
    vec3 Ks = material.Ks;
    float Ns = material.Ns;

    // light parts
    vec3 Ia = AmbientIntensity;
    vec3 Id = vec3(0, 0, 0);
    vec3 Is = vec3(0, 0, 0);

    // process light sources here ...
    DirectionalLight light = DirectionalLight( vec3(1, 1, 1), vec3(2, -1, -5) );
    vec3 N = normalize(vertex.normal);
    vec3 L = normalize(-light.Direction);
    vec3 R = normalize(reflect(light.Direction, N));
    vec3 V = normalize(CameraPosition - vertex.position);

    // diffuse Intensity
    Id = Id + light.Intensity * max(0, dot(N, L));

    // specular intensity
    Is = Is + light.Intensity * max(0, pow(max(0, dot(R, V)), max(1, Ns)));

    // final fragment color
    vec3 color = Kd * (Ia + Id) + Ks * Is;



	// OIT
	uint a = imageAtomicExchange(StartIndices, ivec2(0, 0), 0); // ERROR

	// request fragment index
	//uint fragment_index_me = atomicCounterIncrement(FragmentCounter);
	//uint fragment_index_stored = 0;
	//
	//Fragment fragment;
	//fragment.Next = int(fragment_index_stored);
	//fragment.Color = packUnorm4x8(vec4(color, 1));
	//fragment.Depth = gl_FragDepth;
	//fragment.unused = ...;

	// if index < buffersize !!!
	//Fragments[fragment_index_me] = fragment;
}

What’s wrong with that??

I’d like to bind an image with GL_READ_WRITE access and use it in two programs consecutively:

  1. write “fragment start index” in that image
  2. read that index again to do OIT somehow

[var]StartIndices[/var] is declared [var]readonly[/var] but you’re calling imageAtomicExchange() on it. If you just want to read the value, use imageLoad(). If you want to store values, remove the [var]readonly[/var] qualifier from the declaration.

… thanks again!

There still remains another error; I just can’t get it to work :doh:

// 2. exchange start index with fragment index
uint next_index = imageAtomicExchange(StartIndices, ivec2(gl_FragCoord.xy), 20); // works!
//uint next_index = imageAtomicExchange(StartIndices, ivec2(gl_FragCoord.xy), fragment_index); // ERROR

Somehow this atomic exchange won’t compile with a variable, but with a constant it does (???)

0(81) : error C1115: unable to find compatible overloaded function “imageAtomicExchange(struct iimage2D1x32_bindless, ivec2, uint)”

#version 450 core

//layout (early_fragment_tests) in;


struct Material {
    vec3 Kd; // diffuse
    vec3 Ks; // specular
    float Ns; // shininess
};

struct DirectionalLight {
    vec3 Intensity;
    vec3 Direction;
};


in VS_FS_INTERFACE {
    vec3 position;
    vec3 normal;
} vertex;

// example material and other lighting properties
uniform Material material = Material ( vec3(0, 0, 0), vec3(0, 0, 0), 0 );
uniform vec3 AmbientIntensity = vec3(0, 0, 0);
uniform vec3 CameraPosition = vec3(0, 0, 0);


struct Fragment {
	int Next;
	uint Color;
	float Depth;
	int unused;
};

layout (std430, binding = 1) buffer FragmentBufferBlock { Fragment Fragments[]; };

layout (binding = 1, r32i) uniform iimage2D StartIndices;

layout (binding = 1) uniform atomic_uint FragmentCounter;



void main ()
{
    // material
    vec3 Kd = material.Kd;
    vec3 Ks = material.Ks;
    float Ns = material.Ns;

    // light parts
    vec3 Ia = AmbientIntensity;
    vec3 Id = vec3(0, 0, 0);
    vec3 Is = vec3(0, 0, 0);

    // process light sources here ...
    DirectionalLight light = DirectionalLight( vec3(1, 1, 1), vec3(2, -1, -5) );
    vec3 N = normalize(vertex.normal);
    vec3 L = normalize(-light.Direction);
    vec3 R = normalize(reflect(light.Direction, N));
    vec3 V = normalize(CameraPosition - vertex.position);

    // diffuse Intensity
    Id = Id + light.Intensity * max(0, dot(N, L));

    // specular intensity
    Is = Is + light.Intensity * max(0, pow(max(0, dot(R, V)), max(1, Ns)));

    // final fragment color
    vec3 color = Kd * (Ia + Id) + Ks * Is;



	// OIT
	/************************************************************************/
	// 1. get fragment index
	uint fragment_index = atomicCounterIncrement(FragmentCounter);

	// 2. exchange start index with fragment index
	uint next_index = imageAtomicExchange(StartIndices, ivec2(gl_FragCoord.xy), 20); // works!
	//uint next_index = imageAtomicExchange(StartIndices, ivec2(gl_FragCoord.xy), fragment_index); // ERROR

	// 3. set next index to exchanged value
	Fragments[fragment_index].Next = int(next_index);
	Fragments[fragment_index].Color = packUnorm4x8(vec4(color, 1));
	Fragments[fragment_index].Depth = gl_FragCoord.z; // gl_FragDepth is write-only unless assigned; read the depth from gl_FragCoord
	//Fragments[fragment_index].unused = ...;
	/************************************************************************/
}


The type of the third argument (data) must match the type of the image. Use a (signed) [var]int[/var] variable with [var]iimage*[/var] or a [var]uint[/var] variable with [var]uimage*[/var]. The return value will be [var]int[/var] or [var]uint[/var] respectively.

The format qualifier must also match (i.e. [var]r32i[/var] for [var]int[/var], [var]r32ui[/var] for [var]uint[/var]). The format qualifier must match the format of the image unit (i.e. the format parameter passed to glBindImageTexture()). The format of the image unit doesn’t have to match the texture’s internal format so long as it has the same total number of bits per pixel (so you can e.g. bind a GL_R32I texture to an image unit using the GL_R32UI format then use uimage2D/r32ui/uint in the shader, or vice versa).

Changing the uint to an int worked!!

Again, I can’t thank you enough … :slight_smile: