render a quad - blank screen


I believe this is something every OpenGL programmer encounters initially. I am trying to render a quad as follows:

void loadGeometry()
{
   //load the vertices for the floor

   //make and bind the vertex array object
   glGenVertexArrays(1, &gVAO);
   glBindVertexArray(gVAO);

   //make and bind the vertex buffer object
   glGenBuffers(1, &gVBO);
   glBindBuffer(GL_ARRAY_BUFFER, gVBO);

   //put the quad vertices and texture coordinates into the VBO
   GLfloat vertexData[] = {-20.0f,0.0f,-20.0f,  0.0f,0.0f,
			    20.0f,0.0f,-20.0f, 20.0f,0.0f,
			    20.0f,0.0f, 20.0f, 20.0f,20.0f,
			   -20.0f,0.0f, 20.0f,  0.0f,20.0f};
   glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);

   //connect the xyz of the vertexData to the "position" attribute of the vertex shader
   glEnableVertexAttribArray(gProgram->attrib("position"));
   glVertexAttribPointer(gProgram->attrib("position"), 3, GL_FLOAT, GL_FALSE,
			 5 * sizeof(GLfloat), NULL);

   //connect the uv coordinates to the "posTexCoord" attribute of the vertex shader
   glEnableVertexAttribArray(gProgram->attrib("posTexCoord"));
   glVertexAttribPointer(gProgram->attrib("posTexCoord"), 2, GL_FLOAT, GL_TRUE,
			 5 * sizeof(GLfloat), (const GLvoid*)(3 * sizeof(GLfloat)));

   //unbind the vertex array object
   glBindVertexArray(0);
}

And then I am rendering with the following snippet:

void render(double time)
{
   static const GLfloat one = 1.0f;
   glClearBufferfv(GL_DEPTH, 0, &one);

   //use the floor program
   gProgram->use();

   //the floor texture is already bound, no need to bind it again
   gProgram->setUniform("floorTex", floorTex);

   //send the camera matrix to the shader
   gProgram->setUniform("camera", camera->matrix());

   //bind the VAO - the quad
   glBindVertexArray(gVAO);

   //draw the VAO - the quad
   glDrawArrays(GL_TRIANGLES, 0, 6);

   //unbind the VAO
   glBindVertexArray(0);
}


I am getting a blank screen, and now I am clueless. I am not sure about the parameters of glDrawArrays(…) though. A quad can be rendered as one face made of 2 triangles, each triangle with 3 vertices, so in total we are dealing with 3*2 = 6 elements to be rendered. Is that right?

The camera is initialized as follows:

   camera = new tdogl::Camera();

I have changed the camera position as well, but there is no sight of the quad.

Any hint to find the quad is appreciated!


Does your vertex shader apply any transformation? I notice that your vertices all have 0.0 for the y coordinate. While I’m not familiar with the camera class you’re using, I suspect that it points in the negative z direction by default. Your quad lies in the xz plane, so the camera sees it edge-on and it projects to a degenerate shape, which means that no pixels are drawn. If this is your main problem, you either have to change your vertex coordinates to extend in the x and y directions (using the same value for all z coordinates), or change the direction the camera points.

I see a few other issues in your code:
[li]Your texture coordinates are in the range 0.0 to 20.0. Unless you intentionally plan to repeat your texture 20 times in each direction, you’ll probably want to use 0.0 to 1.0 as the range of your texture coordinates.
[/li][li]The value of a uniform variable for a texture sampler is the texture unit, not the texture id. Since you bind the texture to unit 0, you will want to pass 0 instead of floorTex to that setUniform() call.
[/li][li]If you want to draw the quad with the 4 vertices you defined, you need to pass 4 as the 3rd argument to glDrawArrays(), and use GL_QUADS (deprecated) or GL_TRIANGLE_STRIP (with the 3rd and 4th vertices swapped). If you want to use GL_TRIANGLES, you will need 6 vertices in your vertex array.
[/li]

Yes, the vertex shader is applying the transformation as follows:

#version 430 core

uniform mat4 camera;
//uniform mat4 modelview;

in vec3 position;
in vec2 posTexCoord;
out vec2 fragTexCoord;

void main()
{
	// apply all the matrix multiplication to the vertex shader
	gl_Position = camera * modelview * vec4(position,1);

	//pass the tex coordinate straight through to the fragment shader
	fragTexCoord = posTexCoord;
}

And here goes all the details of the camera class:

#include <cmath>
#include "Camera.h"
#include <glm/gtc/matrix_transform.hpp>

using namespace tdogl;

static const float MaxVerticalAngle = 85.0f; //must be less than 90 to avoid gimbal lock

static inline float RadiansToDegrees(float radians) {
    return radians * 180.0f / (float)M_PI;
}

Camera::Camera() :
    _position(0.0f, 0.0f, 1.0f),

const glm::vec3& Camera::position() const {
    return _position;
}

void Camera::setPosition(const glm::vec3& position) {
    _position = position;
}

void Camera::offsetPosition(const glm::vec3& offset) {
    _position += offset;
}

float Camera::fieldOfView() const {
    return _fieldOfView;
}

void Camera::setFieldOfView(float fieldOfView) {
    assert(fieldOfView > 0.0f && fieldOfView < 180.0f);
    _fieldOfView = fieldOfView;
}

float Camera::nearPlane() const {
    return _nearPlane;
}

float Camera::farPlane() const {
    return _farPlane;
}

void Camera::setNearAndFarPlanes(float nearPlane, float farPlane) {
    assert(nearPlane > 0.0f);
    assert(farPlane > nearPlane);
    _nearPlane = nearPlane;
    _farPlane = farPlane;
}

glm::mat4 Camera::orientation() const {
    glm::mat4 orientation;
    orientation = glm::rotate(orientation, _verticalAngle, glm::vec3(1,0,0));
    orientation = glm::rotate(orientation, _horizontalAngle, glm::vec3(0,1,0));
    return orientation;
}

void Camera::offsetOrientation(float upAngle, float rightAngle) {
    _horizontalAngle += rightAngle;
    _verticalAngle += upAngle;
}

void Camera::lookAt(glm::vec3 position) {
    assert(position != _position);
    glm::vec3 direction = glm::normalize(position - _position);
    _verticalAngle = RadiansToDegrees(asinf(-direction.y));
    _horizontalAngle = -RadiansToDegrees(atan2f(-direction.x, -direction.z));
}

float Camera::viewportAspectRatio() const {
    return _viewportAspectRatio;
}

void Camera::setViewportAspectRatio(float viewportAspectRatio) {
    assert(viewportAspectRatio > 0.0);
    _viewportAspectRatio = viewportAspectRatio;
}

glm::vec3 Camera::forward() const {
    glm::vec4 forward = glm::inverse(orientation()) * glm::vec4(0,0,-1,1);
    return glm::vec3(forward);
}

glm::vec3 Camera::right() const {
    glm::vec4 right = glm::inverse(orientation()) * glm::vec4(1,0,0,1);
    return glm::vec3(right);
}

glm::vec3 Camera::up() const {
    glm::vec4 up = glm::inverse(orientation()) * glm::vec4(0,1,0,1);
    return glm::vec3(up);
}

glm::mat4 Camera::matrix() const {
    return projection() * view();
}

glm::mat4 Camera::projection() const {
    return glm::perspective(_fieldOfView, _viewportAspectRatio, _nearPlane, _farPlane);
}

glm::mat4 Camera::view() const {
    return orientation() * glm::translate(glm::mat4(), -_position);
}

void Camera::normalizeAngles() {
    _horizontalAngle = fmodf(_horizontalAngle, 360.0f);
    //fmodf can return negative values, but this will make them all positive
    if(_horizontalAngle < 0.0f)
        _horizontalAngle += 360.0f;

    if(_verticalAngle > MaxVerticalAngle)
        _verticalAngle = MaxVerticalAngle;
    else if(_verticalAngle < -MaxVerticalAngle)
        _verticalAngle = -MaxVerticalAngle;
}

I am trying to port an old OpenGL application to modern OpenGL. The existing application creates the quad plane as follows:

   // view transform
   glRotatef(cameraRotLag[0], 1.0, 0.0, 0.0);
   glRotatef(cameraRotLag[1], 0.0, 1.0, 0.0);
   glTranslatef(cameraPosLag[0], cameraPosLag[1], cameraPosLag[2]);
   //pull out the model-view matrix
   glGetFloatv(GL_MODELVIEW_MATRIX, modelView);

        float s = 20.f;
        float rep = 20.f;
        glBegin(GL_QUADS);
        glTexCoord2f(0.f, 0.f); glVertex3f(-s, 0, -s);
        glTexCoord2f(rep, 0.f); glVertex3f(s, 0, -s);
        glTexCoord2f(rep, rep); glVertex3f(s, 0, s);
        glTexCoord2f(0.f, rep); glVertex3f(-s, 0, s);
        glEnd();

What do I have to rethink here?

And to answer some of the issues that you have raised:

  1. Yes, I do want to repeat the texture, as you may have noticed in the existing OpenGL code.
  2. Thanks for pointing it out! I made the necessary changes, but it did not change the output.
  3. I am using GL_TRIANGLES.

Since you now have the snippet of the old OpenGL code, any suggestions to fit it within the new paradigm are appreciated.


Yes, I am doing the transformation in the vertex shader.

In that case, shouldn’t I see the floor bottom when the camera is looking down the negative z-direction? The camera up vector is (0,1,0). I am creating a perspective camera, as you may have noticed in my previous post. I did not get why I need to change the direction the camera points.


Hmm, ok. I still suspect that your camera is not pointing at the quad. In the old code, there are a couple of glRotate*() calls and a glTranslate*() call that go into the MODELVIEW matrix. In the vertex shader you copied, you apply a modelview matrix as well. But does your new code set a value for it? In fact, in the vertex shader you copied, the variable declaration for modelview is commented out, so I don’t think the shader would compile in this form. You may want to verify that the shaders compile successfully.

Beyond that, it looks like your camera points in the negative z direction, and your quad coordinates are in the xz plane. If that’s the case, you will either need to make sure that a rotation is applied to the quad, or put the coordinates in the xy plane instead.

To get the proper perspective, you’ll also need to set up a projection matrix and apply it in your vertex shader. It should not be critical for at least seeing something show up, though.

You’ll also need to match up your vertex definitions and your draw call. With your original draw call, you’re trying to use 6 vertices, but only 4 are defined. Either add 2 more vertices so that you have the 6 vertices needed for two independent triangles. Or as I suggested in my previous post, use a different primitive type that only needs 4 vertices for the quad, and change the primitive type and number of vertices in the draw call accordingly.

The camera is pretty close as well, being only 9 units away from a 40x40 quad. You should still see something if everything else is correct, even if it’s only a small part of the quad.

The texture setup could be another potential problem. To debug these types of issues, I often just set a constant color in the fragment shader. This allows you to see if the geometry is placed properly, excluding other issues like texture setup.
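For example, a throwaway fragment shader along these lines (the output variable name is just a placeholder) draws the quad in solid red, taking texturing out of the equation entirely:

```glsl
#version 430 core

out vec4 finalColor;

void main()
{
    // constant color: if the quad still doesn't appear, the problem is in
    // the geometry/matrix setup, not in the texture setup
    finalColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```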

[QUOTE=reto.koradi;1258812]In the vertex shader you copied, the variable declaration for modelview is commented out, so I don’t think the shader would compile in this form. You may want to verify that the shaders compile successfully.[/QUOTE]

Sorry for posting the buggy code. I am actually not doing any model-view transformation at all.

I fixed the issue as follows:

   static const GLfloat vertexData[] = {
	 -20.0f,-1.0f,-20.0f,   0.0f,0.0f,   // 0
	 20.0f, -1.0f,-20.0f,   20.0f,0.0f,  // 1
	 20.0f, -1.0f,20.0f,    20.0f,20.0f, // 2
	 -20.0f,-1.0f,20.0f,    0.0f,20.0f   // 3
   };

   static const GLushort vertex_indices[] =

I am setting the perspective via the camera class that I attached in my previous post over the same issue; the camera is a perspective one by default.

Thanks for the correction!

Check the URL - I got something new now!

As you can see, the window is expanded, but the viewport is not adjusted. I have the callback defined as follows:

   glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);

void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
   float ratio = 1.0f;

   if(height > 0)
      ratio = (float)width / (float)height;

   //setup the viewport
}

What is it I am missing now?

Thanks for your effort over this issue!