The difference in OpenGL on Linux and Windows

zen: I also tried your idea. It helps, but there seems to be something wrong with the model. I only worked on it for a while since (heh, no offence) marcus' idea seemed to be more useful and should do all the stuff I would have to figure out.

None taken. I just proposed this as a quick hack, in case you couldn't wait, to see if it would fix your problem. Going with something like marcus' idea in the long run is what I suggested as well.

I was just playing with the code and found some errors I had made and fixed them, so here's the new version.

void Resize(float* scale, float* x, float* y, float* z)
{
float* av;
float minx;
float maxx;
float miny;
float maxy;
float minz;
float maxz;
float centerx;
float centery;
float centerz;
float scalex;
float scaley;
float scalez;
int i;

for(int a=0;i<header->numbodyparts;a++)
{
  SetupModel(a);

  for(int j=0;j<model->nummesh;j++)
  {
    //set up the vars to get the info
    Mesh* mesh = (Mesh*)((BYTE*)header + model->meshindex) + j;
    short* tricmds = (short*)((BYTE*)header + mesh->triindex);

    i = *tricmds;

    av = formverts[tricmds[0]];
    minx = maxx = av[0];
    miny = maxy = av[1];
    minz = maxz = av[2];
    
    //start going through the model
    while(i = *(tricmds++))
    {
      //go through each vertex
      for(;i>0;i--,tricmds+=4)
      {
        av = formverts[tricmds[0]];

        minx = min(minx, av[0]);
        maxx = max(maxx, av[0]);
        miny = min(miny, av[1]);
        maxy = max(maxy, av[1]);
        minz = min(minz, av[2]);
        maxz = max(maxz, av[2]);
      }
    }
  }
}

//get the center
centerx = (maxx + minx) / 2;
centery = (maxy + miny) / 2;
centerz = (maxz + minz) / 2;

//get the scale
scalex = 2 / (maxx - minx);
scaley = 2 / (maxy - miny);
scalez = 2 / (maxz - minz);

//set up the scale var
scale = new float;

//get the min of all of them
float tmp1 = min(scalex, scaley);
float tmp2 = min(scalex, scalez);
float tmp3 = min(scaley, scalez);

//see which one is the smallest and send it to scale
if(tmp1 < tmp2)
{
  *scale = min(tmp2, tmp3);
}else{
  *scale = min(tmp1, tmp3);
}

//set up the x y z vars and give them their values
x = y = z = new float;
*x = (-centerx);
*y = (-centery);
*z = (-centerz);

}

One thing does bother me though. I had some error-checking code in, and this is what the output was:

scalex = -inf
scaley = 4.92367e+33
scalez = 78.5178
tmp1 = -inf
tmp2 = -inf
tmp3 = 78.5178

It was the same each time. What troubles me is the -inf part. Each time the function returns the vars, this is what is sent out:
scale = 2.33772
x = 4.05294e-34
y = 7.17465e-43
z = 4.06732e-34

The x, y, z stuff seems OK, but usually when I use scale it's not over 0.9. Could that be why, when I use this code, it looks so close up?

I'll keep playing with it.

Thanks for all your help so far!

Nuke

Originally posted by nukem:
[b]One thing does bother me though. I had some error-checking code in, and this is what the output was:

scalex = -inf
scaley = 4.92367e+33
scalez = 78.5178
tmp1 = -inf
tmp2 = -inf
tmp3 = 78.5178[/b]

Are your pointers correct? I'm having trouble following the code since I don't know your global variables, but it seems a bit messy regarding the pointers.

Originally posted by nukem:
The x, y, z stuff seems OK, but usually when I use scale it's not over 0.9. Could that be why, when I use this code, it looks so close up?

If the model looks close up, either move the camera further away (remember to change the znear and zfar values too), or change the field of view of the camera (increase the angle). I usually use a camera FOV of 50-90 degrees. The larger the angle, the “smaller” the objects become, but the image also gets more “distorted” (which can be cool sometimes, but perhaps not for a modelviewer).

Remember, the OpenGL camera can be thought of as a real camera. If you can’t fit your “target” into the view of the camera, you usually move away from the object, or “zoom out” (i.e. increase the field of view). People are also often confused with how the OpenGL “basic unit” works (e.g. “How big is 1.0 unit in OpenGL?”). The answer is that you make the decision. I usually think of one OpenGL unit as one meter. That way it’s easy to verify distances and sizes intuitively (e.g. placing the camera 3.0 units above the floor would not make the view realistic if it’s supposed to be the view of a human being - 1.8 gives a better impression). Of course, it requires that all objects etc. use the same measurement system.

In your case, you can think of the model (which can be any size, really - everything from an ant to a cathedral) as being “boxed up” in a virtual 2x2x2 cube (just as if you were aiming your camera at a TV). When you place your camera, try to get this cube to fill the screen/window, without exceeding it.

Uhm, hope I’m not overdoing this… I just felt like typing, that’s all

[This message has been edited by marcus256 (edited 02-10-2003).]

One more thing: you never answered if you use 16 or 24 color bits under Linux. Have you tried changing your Windows video mode to 16 BPP (65536 colors / “high color”), and seen what happens?
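
By the way, if you want to check from inside the program, something like this (just a quick sketch - call it while your GL context is active) will tell you what bit depths you actually got:

GLint redBits, greenBits, blueBits;
glGetIntegerv(GL_RED_BITS,   &redBits);
glGetIntegerv(GL_GREEN_BITS, &greenBits);
glGetIntegerv(GL_BLUE_BITS,  &blueBits);
cout << "color bits: R" << redBits << " G" << greenBits << " B" << blueBits << endl;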

OK, I found all the mistakes in that function and corrected them. Now all the values are good; it returns this:
scale = 2.33772
x = 4.05282e-34
y = 7.17465e-43
z = 4.0672e-34

One thing is wrong: when I pass variables that are uninitialized to it, the function works fine. If they are initialized, then it won't work and they will keep that value.

It also is all the way zoomed in; I'm looking into how to fix that with OpenGL. It does however look smoother, but I won't know if it really worked until I get it zoomed out.

Here is the Resize function.

void Resize(float* scale, float* x, float* y, float* z)
{
float* av;
float minx = 0.0;
float maxx = 0.0;
float miny = 0.0;
float maxy = 0.0;
float minz = 0.0;
float maxz = 0.0;
float centerx;
float centery;
float centerz;
float scalex;
float scaley;
float scalez;
int i;
BYTE* vertbone;
vect3* studioverts;

SetUpBones();

for(int a=0;a<header->numbodyparts;a++)
{
  SetupModel(a);

  vertbone = ((BYTE*)header + model->vertinfoindex);
  studioverts = (vect3*)((BYTE*)header + model->vertindex);
  
  for(i=0;i<model->numverts;i++)
  {
    VectorTransform(studioverts[i], bonetransform[vertbone[i]], formverts[i]);
  }

  for(int j=0;j<model->nummesh;j++)
  {
    //set up the vars to get the info
    Mesh* mesh = (Mesh*)((BYTE*)header + model->meshindex) + j;
    short* tricmds = (short*)((BYTE*)header + mesh->triindex);

    //start going through the model
    while(i = *(tricmds++))
    {
      if(i < 0)
      {
        i = -i;
      }
      
      //go through each vertex
      for(;i>0;i--,tricmds+=4)
      {
        av = formverts[tricmds[0]];

        if((minx == 0) && (maxx == 0) && (miny == 0) && (maxy == 0) && (minz == 0) && (maxz == 0))
        {
          minx = maxx = av[0];
          miny = maxy = av[1];
          minz = maxz = av[2];
        }else{
          minx = min(minx, av[0]);
          maxx = max(maxx, av[0]);
          miny = min(miny, av[1]);
          maxy = max(maxy, av[1]);
          minz = min(minz, av[2]);
          maxz = max(maxz, av[2]);
        }
      }
    }
  }
}

//get the center
centerx = (maxx + minx) / 2;
centery = (maxy + miny) / 2;
centerz = (maxz + minz) / 2;

//get the scale
scalex = 2 / (maxx - minx);
scaley = 2 / (maxy - miny);
scalez = 2 / (maxz - minz);

//set up the scale var
scale = new float;

//get the min of all of them
float tmp1 = min(scalex, scaley);
float tmp2 = min(scalex, scalez);
float tmp3 = min(scaley, scalez);

//see which one is the smallest and send it to scale
if(tmp1 < tmp2)
{
  *scale = min(tmp2, tmp3);                    
}else{
  *scale = min(tmp1, tmp3);
}

//set up the x y z vars and give them their values
x = y = z = new float;
*x = (-centerx);
*y = (-centery);
*z = (-centerz);

}

To get the camera right I'm looking into using gluLookAt, but this might take a while to get. If you have any suggestions on the camera thing, tell me.

Here is the display code.

void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

float scale;
float x;
float y;
float z;

mdl.Resize(&scale, &x, &y, &z);

/*
cout << "scale = " << scale << endl;
cout << "x = " << x << endl;
cout << "y = " << y << endl;
cout << "z = " << z << endl;
*/

glPushMatrix();

  glTranslatef(x, y, z);
  //glTranslatef(transx, transy, transz);
  glRotatef(rotx, 0, 1, 0);
  glRotatef(roty, 1, 0, 0);

  glScalef(scale, scale, scale);
  //glScalef(0.01, 0.01, 0.01);
  glCullFace(GL_FRONT);
  glEnable(GL_DEPTH_TEST);

  DisplayMDL();

glPopMatrix();

glutSwapBuffers();

glutPostRedisplay();
}

Thanks for all your help!

Nuke

[This message has been edited by nukem (edited 02-12-2003).]

I was playing with gluPerspective, and the best I can do with that is to get the waist. I set znear to 1 and zfar to 1000; anything bigger did nothing.

HAHA,
I know what the problem is

//you must reset mask values, or
//the zbuffer won't clear
glColorMask(1,1,1,1);
glDepthMask(1);
glClearColor(Red,Grn,Blu,Alp);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

OK, now that I fixed your code, help me make an X11 OpenGL sample. I can't compile the NeHe ones.

I tried that but it didn't work :(. I'd be happy to help you; I thought I saw you make a post, so I'll tell you there. If you didn't make one, you should really consider registering.

Hmm, you should move the viewpoint to see if your problem is the zbuffer not clearing. That is a different problem.

It appears to me, however, that your zbuffering is not enabled.

Check to see that you have:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);


I finally got the Linux + NeHe samples to compile. I had to install XFree86, which installed the /GL directories!

I'm a noob!

OK, I looked at your code, and you're missing:

glClearDepth(1.0f);    // depth clear value
glDepthMask(GL_TRUE);  // depth buffer writes on

If that doesn't work, I guess I can't help.

Tried that - put those two lines as the first two in display() - but nothing. I think marcus256 is right; I just gotta get the camera to zoom out. If you need any more help, post on this forum, I check it a lot.

This looks wrong:

//set up the x y z var and give them there values
x = y = z = new float;
*x = (-centerx);
*y = (-centery);
*z = (-centerz);

I may be mistaken (not a C++ wizard), but in C, x, y and z are declared in the calling routine (you already do that!), so “new” should not be used! Either I’m not understanding how C++ works, or you are confused with how pointers work.

The same thing goes for ‘scale’, by the way.

By the way, I would change “2 /” and “/ 2” (from my pseudo code) to “2.0f /” and “/ 2.0f”, respectively (it’s better coding practice - should not matter much if your compiler isn’t buggy).
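
To illustrate (just a sketch, reusing the min/max values your function already computes), the end of Resize could simply write through the pointers that display() passes in, without any new:

//get the center
centerx = (maxx + minx) / 2.0f;
centery = (maxy + miny) / 2.0f;
centerz = (maxz + minz) / 2.0f;

//get the scale
scalex = 2.0f / (maxx - minx);
scaley = 2.0f / (maxy - miny);
scalez = 2.0f / (maxz - minz);

//write straight into the caller's variables - no "new" needed
*scale = min(scalex, min(scaley, scalez));
*x = -centerx;
*y = -centery;
*z = -centerz;

That also collapses the three tmp variables into one nested min.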

I think you should use gluPerspective and gluLookAt. gluLookAt is really easy to use:

gluLookAt( eyeX,    eyeY,    eyeZ,
           centerX, centerY, centerZ,
           upX,     upY,     upZ );

eye is where your camera is.
center is where you are looking at.
up is your “up” vector (“pointing to the sky”).

If Z is your up vector, you can use:

// Set up camera
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
gluPerspective( field_of_view, // “zoom”
                aspect,        // Window width/height
                5.0,           // Znear
                7.0 );         // Zfar
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
gluLookAt( 0.0, -6.0, 0.0,
           0.0,  0.0, 0.0,
           0.0,  0.0, 1.0 );

// Rotate model
glRotatef(rotx, 0.0f, 1.0f, 0.0f);
glRotatef(roty, 1.0f, 0.0f, 0.0f);

// Confine model to predefined box
glScalef(scale, scale, scale);
glTranslatef(x, y, z);

// Enable depth test and face culling
glCullFace(GL_FRONT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

// Draw model
DisplayMDL();

NOTE: The order in which you perform the model transformations is important! Think of it as doing them in the reverse order compared to how they are written in the code. I.e. in the code above, the model is first translated to (0,0,0), then scaled, then rotated about the y axis, and finally rotated about the x axis.

If Y is your up axis, swap the y & z arguments of gluLookAt. You may also want to experiment with different distances and “zoom” values. Note that znear & zfar here assume that the model never exceeds your -1/+1 box. If you start rotating your model, you should extend the range to +/-sqrt(2) (+/- 1.5 is a good approximation ).

You may want to check out http://hem.passagen.se/opengl/tutorials/ - the “camera” section.

HelloKitty,
Your suggestions are actually the OpenGL defaults, so you do not have to set those explicitly (unless you have changed them in the past). I would also suggest glDepthFunc( GL_LEQUAL ) rather than GL_LESS.

[This message has been edited by marcus256 (edited 02-17-2003).]

[This message has been edited by marcus256 (edited 02-17-2003).]

I've been shoveling all day - huge storm here. I just changed the settings with gluLookAt to
gluLookAt(0.0, -200.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0) and it gave me this. I'll play with the other function some more.

[This message has been edited by nukem (edited 02-17-2003).]

-200 sounds a bit much. What are your gluPerspective parameters now? Is your field of view > 60 degrees (not likely)?

Here is the whole display function.

void display()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

float scale;
float x;
float y;
float z;

mdl.Resize(&scale, &x, &y, &z);

glPushMatrix();
glPushMatrix();
gluPerspective(200, (512/512), z, y);
gluLookAt(0.0, -6.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0);

    glTranslatef(x, y, z);
    //glTranslatef(transx, transy, transz);
    glRotatef(rotx, 0, 1, 0);
    glRotatef(roty, 1, 0, 0);

    glScalef(scale, scale, scale);
    //glScalef(0.01, 0.01, 0.01);
    glCullFace(GL_FRONT);
    glEnable(GL_DEPTH_TEST);

    DisplayMDL();

  glPopMatrix();

glPopMatrix();

glutSwapBuffers();

glutPostRedisplay();
}

It looks all stretched out. Anything below 200, it's just a close-up of an area.

The problem is obviously your zbuffer precision.

The code that creates the visual on Linux uses a GLX visual attributes array; on Windows it's a WGL pixel format descriptor structure.

It seems likely that there is either a difference in the depth buffer bits requested between platforms, or the drivers and heuristics of the attribute selection differ enough to choose different visuals on the two platforms. This may be a result of your code or some middleware you are using for windowing etc. It may also be a difference in available visuals, which could vary with the drivers or the OS even with the same hardware.

Look at the code that creates the visual, try requesting 1 (one) bit of depth buffer, and you should get the maximum available. Make sure you request the other attributes at the same depth on each platform, since visuals tend to link depth precision with certain stencil attributes and even color depth. They all interact in less than intuitive ways to give you the final result.

I’m pretty sure this is your problem and it explains the difference between platforms. It’s not a bug per se. You can query the depth buffer bits from your OpenGL context to actually determine what you wind up with, and the code will be portable.

int bitcount;

glGetIntegerv(GL_DEPTH_BITS, &bitcount);

should give you the depth buffer precision on all platforms, make sure your context is active when making this call.

There’s a bit of an art to selecting the right visual. Normally code enumerates visuals and chooses one, or has a declining sequence of preferences that get explicitly selected until a chosen attribute combination is granted. Beyond this, platform-specific knowledge based on experience with different cards can be used to select desirable visuals based on the hardware and vendor strings discovered. It is not a straightforward thing, especially when you consider the performance implications of different visuals.
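
Just to illustrate the declining-preferences idea (the attribute lists here are only an example, adjust them to what you actually want):

// dpy is your X Display*; these particular attribute lists are just examples
static int attribs24[]  = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8,
                            GLX_BLUE_SIZE, 8, GLX_DEPTH_SIZE, 24, None };
static int attribs16[]  = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_RED_SIZE, 5, GLX_GREEN_SIZE, 5,
                            GLX_BLUE_SIZE, 5, GLX_DEPTH_SIZE, 16, None };
static int attribsMin[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 1, None };

int* preferences[] = { attribs24, attribs16, attribsMin };
XVisualInfo* vi = NULL;

// walk down the preference list until glXChooseVisual gives us something
for(int p = 0; p < 3 && vi == NULL; p++)
{
  vi = glXChooseVisual(dpy, DefaultScreen(dpy), preferences[p]);
}
// if vi is still NULL here, no usable visual was found at all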

[This message has been edited by dorbie (edited 02-18-2003).]

dorbie: how would I go about requesting 1 bit of depth buffer? I'm not sure how to do this. When I used your code, the glGetIntegerv thing, it gave me 0; according to you I'm thinking that's a problem. I ran it through winex and it displayed fine - I thought that would just send it to GLX on Linux, am I wrong?

Thanks for the help
Nuke

As I said, make sure the context is active before the get call. If you’re unclear, make the get call somewhere during your drawing operations (just as a test).

The value, remember, is placed in the “bitcount” variable; it’s not the return value of the function. Sorry if this is too obvious.

You should not get zero on either platform, unless perhaps you have no depth buffer requested at all and winex accidentally gives you one.

The difference is most likely created in the winex layer if you send the same attributes to it.

For X you’ll see something like this in your code:

int attribList[]={GLX_RGBA, GLX_RED_SIZE, 1, GLX_GREEN_SIZE, 1, GLX_BLUE_SIZE, 1, GLX_DEPTH_SIZE, 24, None};

glXChooseVisual(dpy,DefaultScreen(dpy),attribList);

Here you just edit the number following the GLX_DEPTH_SIZE token in the array. That’s the number of depth bits you are requesting.

This assumes of course that you’re using low level GLX and not some other windowing layer like GLUT to create the graphics context.
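
If it turns out you're creating the context through GLUT (your display() calls glutSwapBuffers, so it looks that way), the equivalent is just to ask for a depth buffer in the display mode when you create the window - a quick sketch (the window size and title are only examples):

glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);  // RGB, double-buffered, with depth
glutInitWindowSize(512, 512);
glutCreateWindow("mdl viewer");

GLUT picks the actual bit depths itself, so query GL_DEPTH_BITS afterwards to see what you ended up with.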

Hmm… I see that winex actually translates the other way, so you’re looking for the PIXELFORMATDESCRIPTOR structure.

From:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/opengl/ntopnglr_73jm.asp

typedef struct tagPIXELFORMATDESCRIPTOR { // pfd
    WORD  nSize;
    WORD  nVersion;
    DWORD dwFlags;
    BYTE  iPixelType;
    BYTE  cColorBits;
    BYTE  cRedBits;
    BYTE  cRedShift;
    BYTE  cGreenBits;
    BYTE  cGreenShift;
    BYTE  cBlueBits;
    BYTE  cBlueShift;
    BYTE  cAlphaBits;
    BYTE  cAlphaShift;
    BYTE  cAccumBits;
    BYTE  cAccumRedBits;
    BYTE  cAccumGreenBits;
    BYTE  cAccumBlueBits;
    BYTE  cAccumAlphaBits;
    BYTE  cDepthBits;
    BYTE  cStencilBits;
    BYTE  cAuxBuffers;
    BYTE  iLayerType;
    BYTE  bReserved;
    DWORD dwLayerMask;
    DWORD dwVisibleMask;
    DWORD dwDamageMask;
} PIXELFORMATDESCRIPTOR;

int ChoosePixelFormat(
    HDC hdc,                          // device context to search for a best pixel format match
    CONST PIXELFORMATDESCRIPTOR *ppfd // pixel format for which a best match is sought
);

The cDepthBits field is the depth buffer bit count you are requesting. Again, this assumes you're using the low level system interface for that platform and not some other windowing middleware.
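
For example, a bare-bones sketch of the Windows side (the other field values are just typical choices; hdc is assumed to be your window's device context):

PIXELFORMATDESCRIPTOR pfd;
memset(&pfd, 0, sizeof(pfd));
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
pfd.cDepthBits = 24;   // the depth buffer bits you are asking for

int format = ChoosePixelFormat(hdc, &pfd);   // returns 0 on failure
if(format != 0)
{
  SetPixelFormat(hdc, format, &pfd);
}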

[This message has been edited by dorbie (edited 02-19-2003).]

Hi Nukem,

As I said, transformation order matters - yours will not work correctly! (the model will not end up in the center of the universe)

field of view = 200 is too much! (it means 200 degrees of viewing angle)

I did an experiment with a simple +/- 1 cube, and I came up with this code, which results in a good window fit for the cube (even when the cube is rotating):

    // Select and setup the projection matrix
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    gluPerspective( 55.0,
                    (GLdouble)width/(GLdouble)height,
                    2.0,
                    6.0 );

    // Select and setup the modelview matrix
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
    gluLookAt( 0.0, -4.0, 0.0,  // Eye-position
               0.0, 0.0, 0.0,   // View-point
               0.0, 0.0, 1.0 ); // Up-vector

To get the window size (width & height) I use the following:

    // Get window size (may be different than the requested size)
    glfwGetWindowSize( &width, &height );
    height = height > 0 ? height : 1;

Since you’re probably not using GLFW, you’ll be using some other method (glutReshapeFunc?).
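
For example (only a sketch, reusing the 55-degree FOV and the 2.0/6.0 near/far values from my cube test above), a GLUT reshape callback could look like this:

void reshape(int width, int height)
{
  height = height > 0 ? height : 1;   // avoid a zero height

  glViewport(0, 0, width, height);

  // Select and setup the projection matrix
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(55.0, (GLdouble)width/(GLdouble)height, 2.0, 6.0);

  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}

// ...and in main(): glutReshapeFunc(reshape);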

Hope this helps…