2D Texture coords as GL_UNSIGNED_SHORT

Hello. I’m using OpenGL core profile 3.3 to make an SDL2 game in C++.
I’ve always used texture coordinates in float format and never had an issue.

Now I want to use a smaller version of my Vertex2D struct, so I’d like to send texture coords to OpenGL as two unsigned shorts and have it normalize them to floats in [0, 1].

The problem is that the rendered sprite is entirely black.
What am I doing wrong?

My vertex struct:

using t_uvCoords = std::uint16_t;

struct Vertex2D_PosTexColor
{
    Vertex2D_PosTexColor(Vec2 pos, t_uvCoords U, t_uvCoords V, const ColorRGBA& col);

    Vec2 position; // -> 2 floats
    t_uvCoords u, v;
    ColorRGBA color; // -> 4 unsigned chars
};

static_assert(sizeof(Vertex2D_PosTexColor) == 16, "Error: sizeof(Vertex2D_PosTexColor) MUST be exactly 16 bytes!");

My VAO/VBO code:

    glGenVertexArrays(1, &m_VAO);
    glBindVertexArray(m_VAO);

    glGenBuffers(m_bufferObjects.size(), m_bufferObjects.data());
    glBindBuffer(GL_ARRAY_BUFFER, m_bufferObjects[0]);

    const auto stride = static_cast<int>(sizeof(Vertex2D_PosTexColor));

    // Position
    std::size_t offset = 0U;
    glVertexAttribPointer(K_VERTEX2D_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, stride, offsetToVoidPtr(offset));
    glEnableVertexAttribArray(K_VERTEX2D_ATTRIB_POSITION);
    // Texture coords
    offset += sizeof(float) * 2U;
    glVertexAttribPointer(K_VERTEX2D_ATTRIB_TEXCOORDS, 2, GL_UNSIGNED_SHORT, GL_TRUE, stride, offsetToVoidPtr(offset));
    glEnableVertexAttribArray(K_VERTEX2D_ATTRIB_TEXCOORDS);
    // Color
    offset += sizeof(t_uvCoords) * 2U;
    glVertexAttribPointer(K_VERTEX2D_ATTRIB_COLOR, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, offsetToVoidPtr(offset));

Rendering code:

    glBindVertexArray(m_VAO);
    glUseProgram(m_material.shader->getProgramName());

    // Camera matrix
    glUniformMatrix4fv(this->m_projMatrixLoc, 1, GL_FALSE, m_graphics.getCamera2D().getMatrix());

    // Texture unit
    glUniform1i(this->m_sampler2DLoc, m_material.texture->textureUnit);

    // Render everything
    glDrawElements(GL_TRIANGLES, m_elementCount, GL_UNSIGNED_SHORT, offsetToVoidPtr(0U));

Vertex shader:

#version 330 core

// Input from the VBO
layout(location = 0) in vec2 a_vertexPos;
layout(location = 1) in vec2 a_vertexTexCoords;
layout(location = 2) in vec4 a_vertexColor;

// Input from uniforms
uniform mat4 u_projection;

// Output to the fragment shader
smooth out vec2 fsTexCoords;
smooth out vec4 fsColor;

void main()
{
    vec4 pos = vec4(a_vertexPos.xy, 0.0f, 1.0f);
    gl_Position = u_projection * pos;

    fsTexCoords = a_vertexTexCoords;
    fsColor = a_vertexColor;
}

Fragment shader:

#version 330 core

// Input from the vertex shader
smooth in vec2 fsTexCoords;
smooth in vec4 fsColor;

// Input from uniforms
uniform sampler2D u_texUnit;

// Final fragment color
out vec4 finalColor;

void main()
{
    vec4 sampled = texture(u_texUnit, fsTexCoords);

    finalColor = fsColor * sampled;
}

Have you used a debugger to check the data being copied to the buffer, and the values passed to glVertexAttribPointer?

Do you have

    glEnableVertexAttribArray(K_VERTEX2D_ATTRIB_COLOR);

?

The code is cut off at the point I would expect to see that call.
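
As an aside, one way to sanity-check the values passed to glVertexAttribPointer is to derive the offsets from the struct itself. A minimal sketch, assuming Vec2 really is two tightly packed floats (as your static_assert suggests):

    #include <cstddef> // offsetof

    // These must match the offsets passed to glVertexAttribPointer.
    static_assert(offsetof(Vertex2D_PosTexColor, position) == 0,  "position offset mismatch");
    static_assert(offsetof(Vertex2D_PosTexColor, u)        == 8,  "texcoord offset mismatch");
    static_assert(offsetof(Vertex2D_PosTexColor, color)    == 12, "color offset mismatch");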

Sorry, I forgot to paste that part of my code.

// Set fixed vertex attribute locations for shaders
// layout(location = 0) in vec2 a_vertexPos;
const uint32_t K_VERTEX2D_ATTRIB_POSITION  = 0U;
// layout(location = 1) in vec2 a_vertexTexCoords;
const uint32_t K_VERTEX2D_ATTRIB_TEXCOORDS = 1U;
// layout(location = 2) in vec4 a_vertexColor;
const uint32_t K_VERTEX2D_ATTRIB_COLOR     = 2U;

I tried to use RenderDoc 1.15, but when I launch my app, my code throws an exception:
[C++] Window::queryContextCapabilities: OpenGL extension not supported:
GL_ARB_vertex_shader

This is the code for that function:

void Window::queryContextCapabilities()
{
    // Check for the required extensions
    std::vector<std::string> requiredExt =
    {
        "GL_ARB_vertex_array_object",       // VAO
        "GL_ARB_vertex_buffer_object",      // VBO
        "GL_ARB_vertex_shader",             // shaders
        "GL_ARB_fragment_shader",
        "GL_ARB_explicit_attrib_location"
    };

    for (const std::string& ext : requiredExt)
    {
        if (SDL_GL_ExtensionSupported(ext.c_str()) == SDL_FALSE)
            throw std::runtime_error(std::string("[C++] Window::queryContextCapabilities: OpenGL extension not supported:\n") + ext);
    }
}

Using Manjaro KDE with the NVIDIA proprietary drivers, kernel 5.10 LTS, everything up to date.

Indeed, I forgot to enable the color attribute. Now it works as expected. Thank you very much.
I still don’t know what’s wrong with RenderDoc or my app, though.

GL_ARB_vertex_shader and GL_ARB_fragment_shader belong to the outdated, pre-GL 2.0 GL_ARB_shader_objects family of extensions, which an OpenGL implementation is not required to support.

RenderDoc just doesn’t support them:

RenderDoc only supports the core profile of OpenGL - from 3.2 up to 4.6 inclusive. This means any compatibility profile functionality will generally not be supported.

Checking for them is certainly not going to accomplish what I assume you’re trying to do, namely verify that shaders are supported. (As an aside, you’ve also made a common mistake here by not telling us your actual graphics hardware; that’s the main factor in determining what’s supported, not the OS.)

The core OpenGL 2.0 shader functionality was never published as an extension with the same entry points, so checking for extensions is precisely the wrong thing to do here. Check your GL_VERSION instead: if it’s 2.0 or higher, you have shaders, guaranteed.
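
A minimal sketch of that version check (illustrative only; the integer queries themselves need a 3.0+ context, which you’re already requesting — on older contexts you’d parse glGetString(GL_VERSION) instead):

    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);

    // Shaders are core functionality from OpenGL 2.0 onwards.
    if (major < 2)
        throw std::runtime_error("[C++] OpenGL 2.0 or higher is required");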

Sorry about that. Here it is:

$ inxi -G
Graphics:  Device-1: NVIDIA GP104 [GeForce GTX 1070] driver: nvidia v: 470.63.01 
           Display: x11 server: X.Org 1.20.13 driver: loaded: nvidia resolution: 1920x1080~60Hz 
           OpenGL: renderer: NVIDIA GeForce GTX 1070/PCIe/SSE2 v: 4.6.0 NVIDIA 470.63.01

Yes, that’s what I’m trying to do. So I should remove GL_ARB_vertex_shader and GL_ARB_fragment_shader and keep the other ones, right?

What I’m actually doing is creating the GL context, then reading values back from it.
I’m requesting a 3.3 core profile context and getting exactly that.
Is that good practice?

    // Create OpenGL context
    m_context = SDL_GL_CreateContext(m_window);

    if (!m_context)
        throw std::runtime_error("[C++] Window::create: Cannot create OpenGL graphics context!");

    // Save context version number in a handy format
    // (May be different from the version number previously requested)
    // https://wiki.libsdl.org/SDL_GL_GetAttribute
    SDL_GL_GetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, &m_glSettings.major);
    SDL_GL_GetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, &m_glSettings.minor);

    m_context.version  = static_cast<double>(m_glSettings.major);
    m_context.version += static_cast<double>(m_glSettings.minor) / 10.0;

Your shaders are using

#version 330 core

If you have OpenGL 3.3, you don’t need any of those extensions as they’re all in core. ARB_vertex_buffer_object has been in core since 1.5, and if you’re using an older version than that then you probably have bigger issues than the lack of VBOs.

Alright, I will remove that entire extension-checking function. Thanks.

One more question.
Let’s say I have a 512x512 texture atlas. How can I get the correct texture coords for a sprite as unsigned short numbers?
When I was working with floats, I did something like:

float u = sprite.u / texture.width;
float v = sprite.v / texture.height;

Get the texture coordinates as a float in [0, 1], then multiply by 65535. If you want to convert integer texel coordinates, multiply by 65535 and divide by 512.

Corollary: you can’t have texture coordinates which correspond exactly to texel edges/corners, as 65535 isn’t a multiple of 512. The error will be up to 1/128th of a texel, which is unlikely to be noticeable.
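
For reference, a minimal sketch of both conversions (the helper names are just illustrative):

    #include <cmath>
    #include <cstdint>

    // Float coordinate in [0, 1] -> normalized unsigned short.
    std::uint16_t uvFromFloat(float coord01)
    {
        return static_cast<std::uint16_t>(std::lround(coord01 * 65535.0f));
    }

    // Integer texel coordinate -> normalized unsigned short, for a 512-texel atlas.
    std::uint16_t uvFromTexel(int texel)
    {
        return static_cast<std::uint16_t>((texel * 65535) / 512);
    }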

Mmm… so maybe using unsigned shorts for texture coords is not a great idea.
Thanks for your help.

Unsigned shorts or even unsigned bytes are fine for texture coordinates. If you actually need a specific grid, you can obtain that by transforming the coordinates in the vertex shader (or with the fixed-function pipeline, using the texture matrix).

If you want texture coordinates of the form x/2ⁿ to be transferred exactly, multiply by 32768 (rather than 65535) when converting to unsigned short, then multiply by 65535.0/32768.0 in the vertex shader.

My idea is this:
I have a 64x64 pixel sprite that resides at position (0, 0) of a 512x512 texture atlas.
How can I send these coords to OpenGL? Top-left vertex: (0, 0). Bottom-right vertex: (64, 64).

Would something like this work?

#version 330 core

// Input from the VBO
layout(location = 0) in vec2 a_vertexPos;
layout(location = 1) in vec2 a_vertexTexCoords;
layout(location = 2) in vec4 a_vertexColor;

// Input from uniforms
uniform mat4 u_projection;

// Output to the fragment shader
smooth out vec2 fsTexCoords;
smooth out vec4 fsColor;

void main()
{
    vec4 pos = vec4(a_vertexPos.xy, 0.0f, 1.0f);
    gl_Position = u_projection * pos;

    fsTexCoords = a_vertexTexCoords * 65535.0 / 32768.0;
    fsColor = a_vertexColor;
}

Yes. The unsigned shorts in the buffer should be in the range 0…32768 inclusive. Multiply pixel coordinates by 64 (left shift by 6). So the bottom-left 64x64 tile would be (0,0), (4096,0), (0,4096), (4096,4096).
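
In code, that might look something like this (a sketch; the array just shows the four corner values):

    // 32768 / 512 = 64 buffer units per texel, i.e. texel coordinates shifted left by 6.
    constexpr std::uint16_t unitsPerTexel = 32768 / 512;

    // Corners of the 64x64 tile at the origin of the atlas.
    const std::uint16_t tileUVs[4][2] = {
        {  0 * unitsPerTexel,  0 * unitsPerTexel },  // (0, 0)
        { 64 * unitsPerTexel,  0 * unitsPerTexel },  // (4096, 0)
        {  0 * unitsPerTexel, 64 * unitsPerTexel },  // (0, 4096)
        { 64 * unitsPerTexel, 64 * unitsPerTexel },  // (4096, 4096)
    };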

Is there any way to store regular, unnormalized numbers in the VBO? I mean the numbers you see in image editors (0…texture width, 0…texture height). How would I convert them entirely in the vertex shader?

Multiply by 65535 (to undo the attribute normalization) and divide by textureSize(...).

A VBO has no concept of data types; you can store anything you like, in any format you like, in one.

The question is how to access it meaningfully in your shader.

In this case you would use glVertexAttribIPointer (note the “I” before “Pointer” here, it’s easily missed), with, say, GL_UNSIGNED_SHORT for the type parameter.

Note that if you do this, the GLSL variables need to be ivec2/uvec2. If you use glVertexAttribIPointer but declare the attribute as a floating-point type, you’ll (probably) get the integer bits reinterpreted as a float (as if by intBitsToFloat).

If you want the value as a vec2, you can just use glVertexAttribPointer with the normalized parameter set to GL_FALSE.

Either way, you’ll need to divide by the texture size to get values in [0,1] (unless you’re using texelFetch, which just does a lookup of a single texel from a specified level without any filtering or wrapping).
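
To illustrate, a sketch of either option, reusing the stride/offset/offsetToVoidPtr values from your earlier setup code (use one or the other, not both):

    // (a) Integer attribute: declare it as uvec2 in the vertex shader.
    glVertexAttribIPointer(K_VERTEX2D_ATTRIB_TEXCOORDS, 2, GL_UNSIGNED_SHORT,
                           stride, offsetToVoidPtr(offset));

    // (b) Unnormalized float attribute: keep vec2 in the vertex shader.
    glVertexAttribPointer(K_VERTEX2D_ATTRIB_TEXCOORDS, 2, GL_UNSIGNED_SHORT, GL_FALSE,
                          stride, offsetToVoidPtr(offset));

    // Either way, divide by the texture size in the vertex shader
    // (e.g. textureSize(u_texUnit, 0)) to get coordinates in [0, 1].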

ivec2 textureSize(gsampler2D sampler, int lod);

You mean this function? So I need to move the sampler uniform to the vertex shader?
But if I do that, I get a fragment shader compile error:

0(16) : error C1503: undefined variable "u_texUnit"

The Khronos wiki says that uniforms are global constants and can be used in the entire shader program.
(It’s the wiki’s Uniform page; I don’t know why, but I can’t include links in my post, sorry.)

Why can’t my fragment shader find the uniform?
Here is my new shader code:

Vertex shader:

#version 330 core

// Input from the VBO
layout(location = 0) in vec2 a_vertexPos;
layout(location = 1) in vec2 a_vertexTexCoords;
layout(location = 2) in vec4 a_vertexColor;

// Input from uniforms
uniform mat4 u_projection;
uniform sampler2D u_texUnit;

// Output to the fragment shader
smooth out vec2 fsTexCoords;
smooth out vec4 fsColor;

void main()
{
    vec4 pos = vec4(a_vertexPos.xy, 0.0f, 1.0f);
    gl_Position = u_projection * pos;

    // Convert ushort coords to normalized float
    fsTexCoords = a_vertexTexCoords * 65535.0 / textureSize(u_texUnit, 0);
    // Copy color value to the fragment shader
    fsColor = a_vertexColor;
}

Fragment shader:

#version 330 core

// Input from the vertex shader
smooth in vec2 fsTexCoords;
smooth in vec4 fsColor;

// Input from uniforms
//uniform sampler2D u_texUnit;

// Final fragment color
out vec4 finalColor;

void main()
{
    vec4 sampled = texture(u_texUnit, fsTexCoords);

    finalColor = fsColor * sampled;
}

The sampler uniform needs to be declared in both shaders, because both of them use it. It’s still only one variable, so it needs to have the same type and qualifiers in both declarations.