Giving floating values to the Viewport

Hey all,

I am having a problem in my application, where I am trying to draw some 2D objects on the screen. Each object’s pixel length is determined by a formula, exactly like a TrueType font’s pixel length is determined.

Quoting chapter 1 of the TrueType Font specification:

Values in the em square are converted to values in the pixel coordinate system by multiplying them by a scale. This scale is:
pointSize * resolution / (72 points per inch * units_per_em)
where pointSize is the size at which the glyph is to be displayed, and resolution is the resolution of the output device. The 72 in the denominator reflects the number of points per inch.
For example, assume that a glyph feature is 550 FUnits in length on a 72 dpi screen at 18 point. There are 2048 units per em. The following calculation reveals that the feature is 4.83 pixels long.
(550 * 18 * 72) / (72 * 2048) = 4.83

As you can see, this formula gives floating-point values, and this is exactly what happens in my case too.

Before using the formula, when drawing the objects I was using glViewport to set the part of the screen where each object would be drawn, and then drew it with gluOrtho2D set to the bounding box of the element so that I could draw using local coordinates.

That all worked fine, but now I have to use floating-point values for the part of the screen where each object will be drawn, and I see that glViewport accepts only integers.

I saw that in OpenGL 4.1 there is a glViewportIndexedf function which accepts floating-point values, but this does not exist in OpenGL ES 2.0, and my application needs to be able to work with both.

So what approach would you recommend? Thanks in advance!

Edit: Of course I have thought of rounding to the nearest integer value and trying it like that, but I am not sure whether that would ruin the look of the elements and their proportions. Maybe I should just try it first and see.

I’m not sure I understand what you want to do. In my opinion, there’s absolutely no need to use floating-point values with glViewport. The viewport is just intended to represent what portion of the screen, in screen coordinates, will be visible. The common use of the viewport is to set it with 0,0 as the origin and the window width and height as lengths.
Then to move the view, simply change the parameters of your orthographic view or of your frustum.
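As a sketch of that idea (fixed-function pipeline assumed; `window_width`, `window_height`, `panX`, and `panY` are illustrative names), the viewport is set once and the projection does the moving:

```c
/* Viewport covers the whole window, set once. */
glViewport(0, 0, window_width, window_height);

/* To pan the view, shift the orthographic bounds instead of the viewport;
   panX/panY can be fractional, since they are plain doubles. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(panX, panX + window_width, panY, panY + window_height);
```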

Hm then I might not be understanding its use correctly.

Let me explain what I want to do.

I have an object. Coordinates are in the object’s local space. I want each object to take a certain amount of pixels on the screen, defined by the user.

That is why I thought the way to do it would be to set the viewport for each object to the number of pixels it should take, and then draw the object in its local space; scaling is then done automatically. Then for the next object just do the whole thing again, and repeat until you are done with all the objects.

I know it is a completely wrong way to do things and that I should just be moving objects with glTranslate() or sending the proper transformation to my shader’s modelview matrix. But how can I control exactly which pixels each object will be drawn in?

To make it more clear:

    //render every object
    for (int ii = 0; ii < objectsN; ii++)
    {
        //get the ii-th object
        g = MakotoCore->getObject(ii);
        //determine the width and height (length) in pixels
        float nPixelLength = (float)g->objectResolution * (pSize / upem);

        //set the viewport according to the position and dimensions of the element

        //make sure the projection matrix is multiplied by the identity matrix

        //go one layer up in the modelview matrix

        //set the bounding box of the object

        //just testing without shaders for now

        glDrawElements(GL_TRIANGLES, g->indicesN, GL_UNSIGNED_SHORT, BUFFER_OFFSET(g->iIndex));

        //go back to the previous modelview layer
        //go back to the previous projection layer
        //make sure we leave with the modelview matrix selected
    } //end of rendering each object


Edit: The above does render them correctly, BUT I don’t know how to get sub-pixel accuracy. Say … I want an object to be 6.78 pixels long.

Moreover, as you pointed out, this whole method I use is not nice. How would I be able to do the same thing, setting how many pixels each object takes, without using glViewport, and in a more effective way?

I want an object to be 6.78 pixels long.

Antialiasing with GL_MULTISAMPLE ?
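A hedged sketch of that suggestion (the context-creation call depends on your windowing API; GLUT is assumed here purely for illustration):

```c
/* Ask for a multisampled framebuffer when the context is created... */
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_MULTISAMPLE);

/* ...then enable multisampling while rendering (on most desktop
   implementations it is already enabled by default). */
glEnable(GL_MULTISAMPLE);
```

Note that OpenGL ES 2.0 has no GL_MULTISAMPLE enable; there you request a multisampled surface from EGL (or your platform equivalent) and it is always applied.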

Moreover, as you pointed out, this whole method I use is not nice. How would I be able to do the same thing, setting how many pixels each object takes, without using glViewport, and in a more effective way?

All glViewport does is set up what area of the available window (or FBO if rendering to one) you want to render to. This has nothing to do with positioning objects within the viewport (assuming you set up your matrices correctly).
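One way to get per-object pixel sizing without touching the viewport per object: keep one full-window viewport, set the projection in pixel units, and scale each object’s modelview so its bounding box maps to the desired (possibly fractional) pixel length. A fixed-function sketch with hypothetical names (`winW`, `winH`, `posX`, `posY`, `bbMinX`/`bbMaxX`, `nPixelLength` — none of these come from the original code):

```c
/* One viewport and one pixel-space projection for the whole window. */
glViewport(0, 0, winW, winH);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, winW, 0.0, winH);           /* one unit == one pixel */

/* Per object: translate to its pixel position, then scale its local
   bounding box to the desired pixel length; both may be fractional. */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(posX, posY, 0.0f);
float s = nPixelLength / (bbMaxX - bbMinX); /* local units -> pixels */
glScalef(s, s, 1.0f);
glTranslatef(-bbMinX, -bbMinY, 0.0f);       /* put bbox corner at origin */
/* ... draw the object in its local coordinates ... */
glPopMatrix();
```

Because the scale factor is a float, a 6.78-pixel target length is no problem; whether the fractional edge looks good is then a rasterization/antialiasing question, not a viewport one.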

Alfonse, yes I know that but I could think of no other way to determine how many pixels each object would take.

I will read more on it, especially on anti-aliasing as ZBuffer suggested, and will come back here to re-post in this topic. I will also go back to doing it with normal positioning of objects, like glTranslate … but I still have to figure out how to make sure each object takes up a certain number of pixels.

Then, if you do something like this:

glViewport (0, 0, window_width, window_height);
glOrtho (0, window_width, 0, window_height, -1, 1);

it will be easy for you to know how many pixels wide, high, and deep an object is.
And with a little mathematics, you can even change your viewport and ortho to whatever you want and still know how many pixels you draw.

Yes, I think you are right, arts. I got a little confused and need to get back to the code. As soon as I have it working, and if I have any more questions, I will get back here.

Thanks all for the tips