Do not deprecate GL_QUADS please

I just read the spec of the EXT_geometry_shader4 extension.
It seems that we can use geometry shaders to emulate quads.

For example, set the input primitive type of the GS to LINES_ADJACENCY_EXT, and it will expect 4 vertices per input primitive. Then the GS can output a TRIANGLE_STRIP with 2 triangles, and we are done.
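
To make that concrete, here is a rough, untested sketch of such a geometry shader (written with core GLSL 1.50 layout syntax rather than the older EXT_geometry_shader4 style; the variable name is just an illustration), as the C string you would hand to glShaderSource:


/* Hypothetical sketch only: turn each GL_LINES_ADJACENCY primitive
   (4 vertices) into a quad emitted as a strip of 2 triangles. */
static const char *quad_emulation_gs =
    "#version 150\n"
    "layout(lines_adjacency) in;\n"
    "layout(triangle_strip, max_vertices = 4) out;\n"
    "void main()\n"
    "{\n"
    "    /* quad corners 0,1,2,3 are emitted in strip order 0,1,3,2 */\n"
    "    gl_Position = gl_in[0].gl_Position; EmitVertex();\n"
    "    gl_Position = gl_in[1].gl_Position; EmitVertex();\n"
    "    gl_Position = gl_in[3].gl_Position; EmitVertex();\n"
    "    gl_Position = gl_in[2].gl_Position; EmitVertex();\n"
    "    EndPrimitive();\n"
    "}\n";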

Actually, a GS can do an even better job: you can do “point sprites” which are not screen-space, not squares, and not axis-aligned. Set the input primitive type of the GS to POINTS and the output type to TRIANGLE_STRIP (with 2 triangles), and in the GS you can compute the output (4) vertices from the input one in any way you want.
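
As a rough illustration of the idea (untested sketch; the uniform name is made up, and only an axis-aligned expansion in clip space is shown for brevity, but the 4 corners could be computed from the input vertex in any way at all):


/* Hypothetical sketch only: expand each input point into a quad of
   half-size `size`, emitted as a strip of 2 triangles. */
static const char *point_expansion_gs =
    "#version 150\n"
    "layout(points) in;\n"
    "layout(triangle_strip, max_vertices = 4) out;\n"
    "uniform float size;\n"
    "void main()\n"
    "{\n"
    "    vec4 c = gl_in[0].gl_Position;  /* the input point, in clip space */\n"
    "    gl_Position = c + vec4(-size, -size, 0.0, 0.0); EmitVertex();\n"
    "    gl_Position = c + vec4( size, -size, 0.0, 0.0); EmitVertex();\n"
    "    gl_Position = c + vec4(-size,  size, 0.0, 0.0); EmitVertex();\n"
    "    gl_Position = c + vec4( size,  size, 0.0, 0.0); EmitVertex();\n"
    "    EndPrimitive();\n"
    "}\n";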

Then again, using a GS probably has a certain performance cost, whereas QUADS are free.

If we aren’t going to get our quads back could we at least update the wiki/FAQ/whatever with the best replacement for things like text rendering or the other common uses for them?

I think the most common method is the repeating index buffer 0,1,2,0,2,3,4,5,6,4,6,7… but I have no idea if that’s the best method.
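
For what it’s worth, here is a small, untested sketch of that pattern in C (the function name is made up for illustration); the resulting buffer would be drawn with glDrawElements(GL_TRIANGLES, quad_count * 6, GL_UNSIGNED_SHORT, 0):


#include <stdlib.h>

/* Sketch only: for each quad of 4 consecutive vertices, emit the 6 triangle
   indices 0,1,2 and 0,2,3, offset by 4 per quad. */
unsigned short *build_quad_indices(int quad_count)
{
    unsigned short *idx = malloc(quad_count * 6 * sizeof *idx);
    int q;
    for (q = 0; q < quad_count; ++q)
    {
        unsigned short base = (unsigned short)(q * 4);
        idx[q * 6 + 0] = base + 0;
        idx[q * 6 + 1] = base + 1;
        idx[q * 6 + 2] = base + 2;
        idx[q * 6 + 3] = base + 0;
        idx[q * 6 + 4] = base + 2;
        idx[q * 6 + 5] = base + 3;
    }
    return idx;
}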

Since a quad is two triangles, a polygon is also “only” triangles
=> so why not deprecate GL_POLYGON too?

A sphere or a cube is also displayed using triangles/quads
=> so why not deprecate them from the GLU API too?

After all, a screen is “only” a 2D array of pixels
=> why not deprecate glReadPixels and replace it with a simpler glReadOnlyOnePixel that we have to call millions of times per frame? :slight_smile:

I think that the future of OpenGL is more about adding new primitives than about deprecating primitives that have been used extensively for a long time …
(but this is only my point of view, of course, and perhaps not the point of view of the majority)

“One step forward, two steps back” seems to be the new approach of this second decade of this second millennium :slight_smile:

Since a quad is two triangles, a polygon is also “only” triangles
=> so why not deprecate GL_POLYGON too?

A sphere or a cube is also displayed using triangles/quads
=> so why not deprecate them from the GLU API too?

After all, a screen is “only” a 2D array of pixels
=> why not deprecate glReadPixels and replace it with a simpler glReadOnlyOnePixel that we have to call millions of times per frame?

I’m not sure what you’re arguing for here. Your first two points are exactly the reasons those things were deprecated. It sounds like you’re agreeing with the deprecation.

Your last one is an obvious strawman, which suggests that you don’t like getting rid of GL_POLYGON and the GLU stuff.

I think that the future of OpenGL is more about adding new primitives than about deprecating primitives that have been used extensively for a long time …

How did you come to this conclusion? Between GL_PATCHES, tessellation, and geometry shaders, I can almost guarantee that you won’t see a new primitive type coming out of OpenGL anytime soon. There simply isn’t any point, since users can now generate whatever primitives they may want.

GL_POLYGON is in fact deprecated, like any other non-native primitive type. GPUs are designed to work natively only with the primitive types left in core GL. Anyway, you can still produce any of the deprecated primitive types using a geometry shader with adjacency primitive types and/or patches (as Alfonse said).

GLU is a utility library; you can do all those things directly using OpenGL, and it has nothing to do with the deprecation mechanism of the GL spec. In fact, the whole of GLU is effectively deprecated, as it uses deprecated functionality internally :slight_smile:

This argument is just nonsense, but in order to break it, consider this:
We need glReadPixels because even though glReadOnlyOnePixel would be enough to do the same thing, it would not be a good choice performance-wise, and GL is meant to be a high-performance API. In the case of primitive types, having GL_QUADS does not provide any additional performance over using triangles, as internally they will be decomposed into triangles anyway. The only hardware I know of that supported quads natively was some Matrox GPUs, which used quads instead of triangles internally.

Again, as Alfonse said, I don’t think we will need any new primitive types considering that we have geometry shaders, adjacency primitives and patches.

OK, for example, the basic old method to draw a simple quad in 2D:


   glBegin(GL_QUADS);
      glVertex2fv(p1);
      glVertex2fv(p2);
      glVertex2fv(p3);
      glVertex2fv(p4);
   glEnd();
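
For comparison, here is a minimal sketch (untested, just to illustrate) of the same quad without GL_QUADS, assuming a GL 3.x context where a trivial shader program and a VAO are already set up (which is already more setup than the old method needs):


/* Sketch only: the same quad drawn as a GL_TRIANGLE_STRIP from a buffer
   object.  p1..p4 are the same corners as above; note that they have to be
   reordered (1,2,4,3) for the strip. */
GLfloat corners[8] = {
    p1[0], p1[1],
    p2[0], p2[1],
    p4[0], p4[1],
    p3[0], p3[1],
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(corners), corners, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);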

Can the new method that uses vertex/fragment/geometry shaders, adjacency primitives and other vertex buffers ever be as simple as this old method for people who are just starting to study OpenGL ???

For information, I explained this a few days ago (cf. how to draw a simple quad) to a 10-year-old child and he found it easier than the “turtle language” :slight_smile:
(the most difficult part was explaining what the x and y coordinates are, but with a sheet of small tiles to show what the *p1, *p2, *p3 and *p4 coordinates are, he understood very quickly …)

But OK, I haven’t tried to explain all the other OpenGL stuff necessary for the init/refresh/handling of keys and mouse motions/clicks, etc. … because I find that he’s really too young a child to understand all of that :slight_smile:

OK, OpenGL is of course normally a (very, very) fast API (if hardware accelerated), but if it could stay as easy to use as before, that would be a good thing too …

Can the new method that uses vertex/fragment/geometry shaders, adjacency primitives and other vertex buffers ever be as simple as this old method for people who are just starting to study OpenGL ???

No, but what do they care about drawing quads? Why would you teach them quad drawing to begin with?

I explained this a few days ago (cf. how to draw a simple quad) to a 10-year-old child and he found it easier than the “turtle language”

… and? I could similarly write a layer on top of core OpenGL that made quad drawing even simpler to explain to a child. That doesn’t mean that it should be part of OpenGL.

OpenGL doesn’t exist to be easy to explain to children. You can always wrap complex APIs in higher-level code if you want to explain something simply.

No, but what do they care about drawing quads? Why would you teach them quad drawing to begin with?

Because after the point, the line and the triangle, the quad is the next simplest 2D primitive to teach :slight_smile:

You can always wrap complex APIs in higher-level code if you want to explain something simply.

Why do you want to force people to reinvent the wheel ???

OpenGL was really simple from the beginning and for a large number of years. It is only in the last few years that this API has begun to become very complex to use :frowning:
(note that this is exactly the inverse of the DirectX evolution => Microsoft certainly didn’t make this simplification for nothing …)

But there is something that I cannot understand :frowning:
=> current GPUs can directly handle in hardware things like Bézier curves/patches, NURBS and other curved PN triangles, but not quads ???

Why do you want to force people to reinvent the wheel ???

Because this particular wheel shouldn’t have been invented in the first place, to use your analogy. What does it matter to your 10 year old if he’s learning OpenGL or some API you wrote for him?

Also, one question mark, not three.

OpenGL was really simple from the beginning and for a large number of years.

Yes. The number of years when people stopped using OpenGL. However “really simple” it was, it wasn’t useful to the people who matter.

note that this is exactly the inverse of the DirectX evolution => Microsoft certainly didn’t make this simplification for nothing …

Except that it isn’t. D3D v3 was Godawful; it was barely a rendering API. Microsoft has been making their API better, because it started off terrible; they could only really go up from there.

D3D 10+ doesn’t have quads either. Or immediate mode. Or any of the things that you suggest make OpenGL easier to learn.

current GPUs can directly handle in hardware things like Bézier curves/patches, NURBS and other curved PN triangles, but not quads ???

It can only do those things by writing specialized shaders to compute them.

Note that while this is not really a problem for me (because I have been using vertex arrays and fragment/vertex shaders for some years now, and geometry shaders for some time), it is really a problem for a large number of other people … :frowning:

Is it really so complex to add a limited number of transistors to handle quads, when current GPUs add millions of transistors (certainly billions in the near future, if that isn’t already the case) to handle much more complex shapes like patches ???

And I agree with you, the first versions of DirectX/Direct3D were really, really awful … but they have really made a lot of progress on this level :slight_smile:
(personally, I have numerous computers with Linux, Mac OS X or Android operating systems but no computer with a Windows OS, so it isn’t because I especially appreciate Microsoft …)

But I find that OpenGL has begun to go down the opposite slope :frowning:

It is perhaps not for nothing that a lot of people/companies have begun to switch from OpenGL to the DirectX API …
=> I have read an article at http://www.tomshardware.com/news/john-Carmack-DirectX-OpenGL-API-Doom,12372.html where John Carmack seems to say about the same thing …

Is it really so complex to add a limited number of transistors to handle quads, when current GPUs add millions of transistors (certainly billions in the near future, if that isn’t already the case) to handle much more complex shapes like patches ???

Why bother? Patches include quads, given appropriate shaders. So why spend hardware on non-shader-based quads? Indeed, geometry shaders include quads as well, so again, why spend the hardware on non-shader-based quads?
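
To be concrete about the “appropriate shaders” part, here is a rough, untested sketch (GLSL 4.00; the variable name is made up) of a tessellation evaluation shader that turns a 4-vertex GL_PATCHES primitive into a flat quad, assuming the tessellation levels are simply set to 1:


/* Hypothetical sketch only: bilinearly interpolate the 4 patch corners
   over the abstract quad domain. */
static const char *quad_patch_tes =
    "#version 400\n"
    "layout(quads, equal_spacing, ccw) in;\n"
    "void main()\n"
    "{\n"
    "    vec4 a = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x);\n"
    "    vec4 b = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x);\n"
    "    gl_Position = mix(a, b, gl_TessCoord.y);\n"
    "}\n";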

And I agree with you, the first versions of DirectX/Direct3D were really, really awful … but they have really made a lot of progress on this level

No. They progressed in making their API better. They did not progress in the direction you’re talking about. D3D 5 had APIs not entirely unlike immediate mode. APIs that Microsoft removed in D3D 7. By your logic, this made the API harder to use, not easier.

D3D has gotten easier to use in the sense that it isn’t terrible. But it hasn’t gotten easier to use in the sense that it’s simple for a 10-year-old. You still need to fill buffers with arrays of data to render anything. You need to use shaders. There are no quads. Etc.

Your argument along this point simply doesn’t make sense. There are many different kinds of “easier to use”, and D3D has not gone down the path you’re talking about. They made the API easier to use for actual graphics programmers, not for someone who doesn’t really know what they’re doing.

But I find that OpenGL has begun to go down the opposite slope

That’s because you see “ease of use” as only one dimension.

No, I see “easy to use” in 4 dimensions (cf. x, y, z, t) :slight_smile:

For example, the “good old method” to draw a rotated, 3D-textured cube:



// This code was created by Jeff Molofee '99 (ported to Linux/GLUT by Richard Campbell '99)
//
// If you've found this code useful, please let me know.
//
// Visit me at www.demonews.com/hosted/nehe 
// (email Richard Campbell at [email]ulmont@bellsouth.net[/email])
//
// YLP 31/05/2011 : modified to only display a cube 
#include <GL/glut.h>    // Header File For The GLUT Library 
#include <GL/gl.h>	// Header File For The OpenGL32 Library
#include <GL/glu.h>	// Header File For The GLu32 Library
#include <unistd.h>     // needed to sleep
#include <stdio.h>

/* ASCII code for the escape key. */
#define ESCAPE 27

/* The number of our GLUT window */
int window; 

/* rotation angle for the cube. */
float rcube = 0.0f;
float rcube_step = 0.1f;

/* Texture */
GLuint texid;
int texwidth=16;
int texheight=16;
int texdepth=16;

/*
GLubyte texData[16] =
{
	0xFF,0x00,0x00,0xFF, // red 
	0x00,0xFF,0x00,0xFF, // green
	0x00,0x00,0xFF,0xFF, // blue
	0xFF,0xFF,0xFF,0xFF  // white 
};
*/


GLubyte *texData;

void DrawCube()
{
   glColor3f(1.0f,1.0f,1.0f); 

   // draw a cube (6 quadrilaterals)
  glBegin(GL_QUADS);				// start drawing the cube.
  
	// Front Face
	glTexCoord3f(0.0f, 0.0f,1.0f); glVertex3f(-1.0f, -1.0f,  1.0f);	// Bottom Left Of The Texture and Quad
	glTexCoord3f(1.0f, 0.0f,1.0f); glVertex3f( 1.0f, -1.0f,  1.0f);	// Bottom Right Of The Texture and Quad
	glTexCoord3f(1.0f, 1.0f,1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);	// Top Right Of The Texture and Quad
	glTexCoord3f(0.0f, 1.0f,1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);	// Top Left Of The Texture and Quad

	// Back Face
	glTexCoord3f(0.0f, 0.0f,0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);	// Bottom Right Of The Texture and Quad
	glTexCoord3f(0.0f, 1.0f,0.0f); glVertex3f(-1.0f,  1.0f, -1.0f);	// Top Right Of The Texture and Quad
	glTexCoord3f(1.0f, 1.0f,0.0f); glVertex3f( 1.0f,  1.0f, -1.0f);	// Top Left Of The Texture and Quad
	glTexCoord3f(1.0f, 0.0f,0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);	// Bottom Left Of The Texture and Quad

	// Top Face
	glTexCoord3f(0.0f, 1.0f,0.0f); glVertex3f(-1.0f,  1.0f, -1.0f);	// Top Left Of The Texture and Quad
	glTexCoord3f(0.0f, 1.0f,1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);	// Bottom Left Of The Texture and Quad
	glTexCoord3f(1.0f, 1.0f,1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);	// Bottom Right Of The Texture and Quad
	glTexCoord3f(1.0f, 1.0f,0.0f); glVertex3f( 1.0f,  1.0f, -1.0f);	// Top Right Of The Texture and Quad

	// Bottom Face
	glTexCoord3f(0.0f, 0.0f,0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);	// Top Right Of The Texture and Quad
	glTexCoord3f(1.0f, 0.0f,0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);	// Top Left Of The Texture and Quad
	glTexCoord3f(1.0f, 0.0f,1.0f); glVertex3f( 1.0f, -1.0f,  1.0f);	// Bottom Left Of The Texture and Quad
	glTexCoord3f(0.0f, 0.0f,1.0f); glVertex3f(-1.0f, -1.0f,  1.0f);	// Bottom Right Of The Texture and Quad

	// Right face
	glTexCoord3f(1.0f, 0.0f,0.0f); glVertex3f( 1.0f, -1.0f, -1.0f);	// Bottom Right Of The Texture and Quad
	glTexCoord3f(1.0f, 1.0f,0.0f); glVertex3f( 1.0f,  1.0f, -1.0f);	// Top Right Of The Texture and Quad
	glTexCoord3f(1.0f, 1.0f,1.0f); glVertex3f( 1.0f,  1.0f,  1.0f);	// Top Left Of The Texture and Quad
	glTexCoord3f(1.0f, 0.0f,1.0f); glVertex3f( 1.0f, -1.0f,  1.0f);	// Bottom Left Of The Texture and Quad

	// Left Face
	glTexCoord3f(0.0f, 0.0f,0.0f); glVertex3f(-1.0f, -1.0f, -1.0f);	// Bottom Left Of The Texture and Quad
	glTexCoord3f(0.0f, 0.0f,1.0f); glVertex3f(-1.0f, -1.0f,  1.0f);	// Bottom Right Of The Texture and Quad
	glTexCoord3f(0.0f, 1.0f,1.0f); glVertex3f(-1.0f,  1.0f,  1.0f);	// Top Right Of The Texture and Quad
	glTexCoord3f(0.0f, 1.0f,0.0f); glVertex3f(-1.0f,  1.0f, -1.0f);	// Top Left Of The Texture and Quad

   glEnd();					
   // Done Drawing The Cube
}

void InitTexture()
{
   int s,t,p;
   GLubyte *ptr;


   // Create the 3D texture
   ptr=(GLubyte *) malloc(texwidth*texheight*texdepth*4);
   texData=ptr;
   printf("malloc = %p, texData = %p, size=%d\n", (void *)ptr, (void *)texData, texwidth*texheight*texdepth*4);

   for(p=0;p<texdepth;p++)
   {
        // printf("\ndepth=%d\n", p);
	for(t=0;t<texheight;t++)
	{
		// printf("\nheight=%d ", t);
		for(s=0;s<texwidth;s++)
		{
			// printf(".");
			ptr[0]=s*16;    // red
			ptr[1]=t*16;    // green
			ptr[2]=p*16;    // blue
			ptr[3]=0xFF; // alpha
			ptr+=4;
		}
	}
   }

   glEnable(GL_TEXTURE_3D);
   glGenTextures(1,&texid);
   glBindTexture(GL_TEXTURE_3D,texid); 	// Select this texture ID
   glTexImage3D (
	GL_TEXTURE_3D, 	// Target: 3D texture
	0, 	// Mipmap level: 0 (base level, no mipmaps)
	GL_RGBA, 	// Internal format (red,green,blue,alpha)
	texwidth, 	// Width
	texheight, 	// Height
 	texdepth,	// Depth
	0, 	// Border width: 0
	GL_RGBA, 	// Format of the pixel data: RGBA
	GL_UNSIGNED_BYTE, 	// Type of the color components
	texData 	// Address of the image data
   ); 
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}


/* A general OpenGL initialization function.  Sets all of the initial parameters. */
void InitGL(int Width, int Height)	        // We call this right after our OpenGL window is created.
{
  glClearColor(0.0f, 0.0f, 0.0f, 0.0f);		// This Will Clear The Background Color To Black
  glClearDepth(1.0);				// Enables Clearing Of The Depth Buffer
  glDepthFunc(GL_LESS);			        // The Type Of Depth Test To Do
  glEnable(GL_DEPTH_TEST);		        // Enables Depth Testing
  glShadeModel(GL_SMOOTH);			// Enables Smooth Color Shading

  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();				// Reset The Projection Matrix

  gluPerspective(45.0f,(GLfloat)Width/(GLfloat)Height,0.1f,100.0f);	// Calculate The Aspect Ratio Of The Window

  glMatrixMode(GL_MODELVIEW);

  InitTexture();
  
}

/* The function called when our window is resized (which shouldn't happen, because we're fullscreen) */
void ReSizeGLScene(int Width, int Height)
{
  if (Height==0)				// Prevent A Divide By Zero If The Window Is Too Small
    Height=1;

  glViewport(0, 0, Width, Height);		// Reset The Current Viewport And Perspective Transformation

  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();

  gluPerspective(45.0f,(GLfloat)Width/(GLfloat)Height,0.1f,100.0f);
  glMatrixMode(GL_MODELVIEW);
}

/* The main drawing function. */
void DrawGLScene()
{
  glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);	// Clear The Screen And The Depth Buffer

  glLoadIdentity();				// Reset the transformation matrix.
  glTranslatef(0.0f,0.0f,-7.0f);		// Move back into the screen 7 units
	
  glRotatef(rcube,1.0f,1.0f,1.0f);		// Rotate The Cube On X, Y, and Z

  DrawCube();

  rcube += rcube_step;					// Increase The Rotation Variable For The Cube

  // swap the buffers to display, since double buffering is used.
  glutSwapBuffers();
}

/* The function called whenever a key is pressed. */
void keyPressed(unsigned char key, int x, int y) 
{
    /* avoid thrashing this call */
    usleep(100);

    /* If escape is pressed, kill everything. */
    if (key == ESCAPE) 
    { 
      /* shut down our window */
      glutDestroyWindow(window); 
      
      /* exit the program...normal termination. */
      exit(0);                   
    }
}

int main(int argc, char **argv) 
{  
  /* Initialize GLUT state - glut will take any command line arguments that pertain to it or 
     X Windows - look at its documentation at http://reality.sgi.com/mjk/spec3/spec3.html */  
  glutInit(&argc, argv);  

  /* Select type of Display mode:   
     Double buffer 
     RGBA color
     Alpha components supported 
     Depth buffered for automatic clipping */  
  glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH);  

  /* get a 640 x 480 window */
  glutInitWindowSize(640, 480);  

  /* the window starts at the upper left corner of the screen */
  glutInitWindowPosition(0, 0);  

  /* Open a window */  
  window = glutCreateWindow("Jeff Molofee's GL Code Tutorial ... NeHe '99");  

  /* Register the function to do all our OpenGL drawing. */
  glutDisplayFunc(&DrawGLScene);  

  /* Go fullscreen as soon as possible (disabled here). */
  //glutFullScreen();

  /* Even if there are no events, redraw our gl scene. */
  glutIdleFunc(&DrawGLScene);

  /* Register the function called when our window is resized. */
  glutReshapeFunc(&ReSizeGLScene);

  /* Register the function called when the keyboard is pressed. */
  glutKeyboardFunc(&keyPressed);

  /* Initialize our window. */
  InitGL(640, 480);
  
  /* Start Event Processing Engine */  
  glutMainLoop();  

  return 0; // bug in my code corrected (was return 1 before)
}

And the command to compile it is:


gcc cube.cpp -o cube -lGL -lGLU -lglut

=> I think that this is a relatively simple example that can be easily studied/modified/improved by anybody who has minimal OpenGL API experience
(for example, just change GL_NEAREST to GL_LINEAR in the two glTexParameteri() calls to use texture interpolation; this is very simple, but the quality of the display is much better with this very little change)

=> do you think that the equivalent using the new OpenGL API methods can be as simple as this to understand for a person who isn’t particularly experimented with OpenGL ?
(if yes, please post the code so that a true comparison can be made … it’s perhaps simpler with the new method, but I seriously have some doubts about this …)

PS: do you see a lot of things that are triangular around you?

PS2: personally, I see a lot of rectangular objects around me but not a single triangular object :slight_smile:

Sigh.

Quads are not removed … in the GL 1.x that you’re targeting. They’re there to stay forever, internally split by drivers into tris. So stick with them and the FF pipeline.

This is really too easy to respond to :slight_smile:

=> show us the same thing but using GL3.x or GL4.x code …

And without GL_QUADS of course, because that’s the subject of this thread :slight_smile:

I think that this is a relatively simple example that can be easily studied/modified/improved by anybody who has minimal OpenGL API experience

I think this is a common example of a terrible way to teach OpenGL and graphics programming in general.

It introduces way too many concepts all at once. It introduces:

  • Perspective projection
  • Vertex transformation
  • Matrix-based transforms
  • Textures
  • Texture filtering
  • Per-fragment operations (the default texture environment is GL_MODULATE, so it’s multiplying the texture color by the per-vertex color)

Oh sure, a user can poke at this example and make things happen. Maybe even exciting things, like making the cube spin in a different direction, removing a cube face, or whatever.

But do they actually learn anything? Absolutely not. They understand nothing about how matrices work, what matrices even are (even though they’re using them), how perspective projection works, how textures work, etc.

Which means that if they poke at this program and something goes wrong, they have no idea why. All they can do is play with the example and see what happens: trial and error. I’d say a good 80% of people asking questions on these forums come from learning like this. Reading some example code online, poking at it, and not understanding what they did.

They have absolutely no knowledge of OpenGL or rendering. They have no idea how to conceive of a graphical effect in their mind, and then tell OpenGL to achieve that effect. Without that foundation of knowledge, they have no imperative, no ability to really use OpenGL.

They can make toy examples, but nothing more than that. Your example code is actually damaging to users, because it makes them think they know something. They think they know how gluPerspective works. They think they know how to use glRotatef. And so forth. But they don’t. And then they come on this forum and ask why their code is behaving oddly.

If they had learned correctly the first time, then they would be fine. But they didn’t. They “learned” based on copy-and-paste coding of toy examples that have no bearing on what happens in the real world.

Teaching is not something that is an API’s responsibility. The responsibility of an API is to best serve users of the API. People who want to accomplish something. Teaching is the responsibility of external material.

do you think that the equivalent using the new OpenGL API methods can be as simple as this to understand for a person who isn’t particularly experimented with OpenGL ?

No, but I consider the question irrelevant. You don’t learn an API by experimenting with it (did you mean “experienced” by chance?); you learn it by reading proper instructional materials.

Indeed, I would say that the fact that modern OpenGL and D3D code is harder to make simple examples for is a good thing. In modern OpenGL, everything is spelled out explicitly. If you use matrices, you have to actually use them. If you want to do rotation, you need to actually do it. It’s not a magic function in some library; you must know the computer graphics theory behind it to use it. There is a lot less hidden state; shaders make it obvious exactly what is happening.

Yes, you can make a page on the Wiki. It is for the people, by the people.

What do you mean by the best method?

No, it is just pointless. The examples you’ve mentioned (Bézier curves, patches, etc.), as well as quads, can be done using shaders. Having hundreds of entry points for doing the very same things that can be done using shaders does not make the API easier; in fact it makes it more bloated and complex.

D3D 10 and 11 already have a purely shader-based pipeline; they dropped all the fixed-function stuff. How is OpenGL’s case different (except that, unfortunately, OpenGL still keeps all that old stuff in the compatibility profile)?

Carmack said this when he saw that OpenGL 3 did not rewrite the API but kept a lot of the old things. He said that D3D was better at that time (and he meant D3D10) simply because it provided a much simpler and more powerful API that is completely based on shaders and dropped all the FF stuff, not the other way around. Actually, what still makes OpenGL a somewhat inferior API compared to D3D is that we still have the old stuff in the compatibility profile and did not get a complete rewrite of the API, which makes driver developers’ lives a nightmare and makes it more difficult to add new, modern functionality to the spec.

So, to sum up, your argument is pointless. First you should study what direction D3D chose several years ago (with D3D10) and then reformulate your arguments, because saying that D3D did it the “right way” actually means that you should agree with deprecating quads, immediate mode and all the legacy stuff.

Not true. Take a look at any of the AMD open GPU register specs and you’ll see that they list all the OpenGL primitive types, including the deprecated ones, and some more that are not in any API I know of. I am not sure why the ARB deprecated quads and polygons, but it’s certainly not because of a lack of hardware support. BTW, using a geometry shader to emulate quads seems like overkill to me; why not just use what the hardware has dedicated circuits for?

The fact that the GPU’s primitive-type register accepts other primitive types than those left in core GL doesn’t mean that there is native support for those primitives: the GPU still has to split quads and polygons into triangles in order to rasterize them, because the rasterizer only supports triangles. So by using quads and polygons you are always using a suboptimal rendering path.

The fact that the GPU’s primitive-type register accepts other primitive types than those left in core GL doesn’t mean that there is native support for those primitives: the GPU still has to split quads and polygons into triangles in order to rasterize them, because the rasterizer only supports triangles. So by using quads and polygons you are always using a suboptimal rendering path.

Your logic doesn’t make sense. The mechanism that converts quads to triangles is the same as the one that converts triangle strips to triangles. So according to your logic, triangle strips are also suboptimal and non-native. That’s bullcrap.