feature request: indexing per attribute

My last post was corrected :slight_smile:

But the reality unfortunately remains the same … :frowning:

For example, where can we find good, complete tutorials that work on the vast majority of OSes and computers, that really explain to complete beginners how to use OpenGL 2.x, and that come with lots of examples covering basic operations such as the rotating cube, simple texturing, and things like stippling ?

@+
Yannoo

Thanks to Dorbie for this wiki :slight_smile:

[http://www.opengl.org/wiki/Tutorials](http://www.opengl.org/wiki/Tutorials)

It’s clear that all of this is now really OS independent, and that OpenGL 3.x is just my own invention :slight_smile:

@+
Yannoo

Just because nobody likes your idea, and everyone thinks it’s trivial to do on your own, that doesn’t give you the right to throw a temper tantrum on other people’s ideas.

OK, my apologies for that involuntary temper tantrum :frowning:

But, PLEASE, note that I already proposed this type of “indexing by attribute” a very, very long time ago … and got the same type of response: “not possible, do it yourself” … and I did do it myself :slight_smile:

If OpenGL is used by a lot of people today, it’s because in the past it was relatively simple to use and really OS independent => this seems to have been lost with the latest versions of OpenGL :frowning:

If we have to do everything ourselves, I don’t see the point of an API (which in theory exists to simplify things and to keep us from always reinventing the wheel … certainly not the contrary).
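
For those asking, here is a minimal sketch of the kind of thing I mean by “indexing by attribute” done by hand (illustrative names, not my actual code): it collapses OBJ-style separate position/normal indices into the single shared index list that glDrawElements expects.

#include <string.h>

typedef struct { float pos[3]; float nrm[3]; } Vertex;

/* For each corner i, pos_idx[i] selects a position and nrm_idx[i] a normal.
   Emits a deduplicated vertex array plus a single index list; returns the
   number of unique vertices written to out_verts. */
unsigned expand_indices(const float (*positions)[3], const float (*normals)[3],
                        const unsigned *pos_idx, const unsigned *nrm_idx,
                        unsigned count, Vertex *out_verts, unsigned *out_idx)
{
    unsigned unique = 0;
    for (unsigned i = 0; i < count; ++i) {
        Vertex v;
        memcpy(v.pos, positions[pos_idx[i]], sizeof v.pos);
        memcpy(v.nrm, normals[nrm_idx[i]], sizeof v.nrm);

        /* linear search keeps the sketch short (memcmp is OK here since the
           struct is all floats with no padding); a hash over the
           (pos_idx, nrm_idx) pair is the usual fix for large meshes */
        unsigned j = 0;
        while (j < unique && memcmp(&out_verts[j], &v, sizeof v) != 0)
            ++j;
        if (j == unique)
            out_verts[unique++] = v;
        out_idx[i] = j;
    }
    return unique;
}

The resulting out_verts/out_idx pair can then go straight into a vertex buffer and an element buffer.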

@+
Yannoo

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

static void redraw(void);

int main(int argc, char **argv)
{
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
	glutCreateWindow("My first GLUT program");

	glutDisplayFunc(redraw);

	glMatrixMode(GL_PROJECTION);	// set up the projection
	gluPerspective(45.0,	// vertical field of view
	               1.0,	// aspect ratio
	               10.0,	// near clip
	               200.0);	// far clip
	glMatrixMode(GL_MODELVIEW);

	glutMainLoop();

	return 0;
}

static void redraw(void)
{
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glPushMatrix();
		glTranslatef(0, 0, -100);	// move the triangle into the view volume
		glBegin(GL_TRIANGLES);
			glColor3f(1, 0, 0);
			glVertex3f(-30, -30, 0);

			glColor3f(0, 1, 0);
			glVertex3f(30, -30, 0);

			glColor3f(0, 0, 1);
			glVertex3f(-30, 30, 0);
		glEnd();
	glPopMatrix();

	glutSwapBuffers();
}

and the build command on Linux:

gcc -o myProg myProg.c -lglut -lGL -lGLU -lX11 -lm

Normally, this is the basic way to display a single triangle with OpenGL.
It is extremely simple and can be understood very easily by everybody.

Where can I find something equivalent that uses the latest OpenGL and OpenGL ES APIs, and that will still work in more than one or two years ???

@+
Yannoo

Where can I find something equivalent that uses the latest OpenGL and OpenGL ES APIs, and that will still work in more than one or two years ???

You won’t. What you wrote above was “extremely simple”, but it was also a pretty good example of how the hardware actually worked back in those days.

Nowadays, that code is nothing like how hardware works. The effort that driver developers have to spend to make that code work on a modern GPU is massive.

The modern equivalent of that code, with manually building shaders, buffer objects, VAOs, etc., may look more complicated, but it’s a lot simpler for the driver to implement. It’s closer to how the hardware sees the world, and it is much easier to make functional and fast.
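
To make the comparison concrete, here is a minimal sketch of the same colored triangle in the modern style. It assumes a 3.2-capable context and an extension loader such as GLEW are already set up; init_triangle and draw_triangle are just illustrative names, to be called from whatever windowing code you use.

#include <GL/glew.h>   /* or any other extension loader, initialized elsewhere */

static const char *vs_src =
    "#version 150\n"
    "in vec3 position;\n"
    "in vec3 color;\n"
    "out vec3 vColor;\n"
    "void main() {\n"
    "    vColor = color;\n"
    "    gl_Position = vec4(position, 1.0);\n"
    "}\n";

static const char *fs_src =
    "#version 150\n"
    "in vec3 vColor;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = vec4(vColor, 1.0); }\n";

/* one interleaved vertex buffer: x, y, z, r, g, b per vertex */
static const float verts[] = {
    -0.5f, -0.5f, 0.0f,  1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f,  0.0f, 1.0f, 0.0f,
    -0.5f,  0.5f, 0.0f,  0.0f, 0.0f, 1.0f,
};

static GLuint vao, vbo, prog;

void init_triangle(void)
{
    /* the buffer object holds the vertex data on the GPU */
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    /* compile and link the two shader stages */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);                 /* check GL_COMPILE_STATUS in real code */
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);
    prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glBindAttribLocation(prog, 0, "position");
    glBindAttribLocation(prog, 1, "color");
    glLinkProgram(prog);                 /* check GL_LINK_STATUS in real code */

    /* the VAO records how the buffer maps onto the shader inputs */
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void *)0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                          (void *)(3 * sizeof(float)));
}

void draw_triangle(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(prog);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

More lines, yes, but every one of them maps onto something the GPU actually does.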

Making driver developers jump through hoops to implement things that are perfectly doable by us isn’t helping make OpenGL more stable. They have plenty to do, what with exposing new functionality in new hardware, fixing driver bugs, etc. Why should they work on some minor convenience feature for us when they’re already busy making the next generation of hardware work?

If you want something simple to use for high-level features, try a high-level engine or scene graph that hides the GL details; even cast shadows are easy:
http://www.openscenegraph.org/projects/osg

Thanks, that’s exactly what I wanted to hear :slight_smile:

This is now much more difficult from the beginner’s point of view, because the OpenGL API is now designed to simplify driver development (the world turned upside down …).

Please, why was OpenGL invented: to simplify the life of the developers of OpenGL implementations, or “only” to simplify the life of the final user, who gets a very simple API and doesn’t have to know how all the very varied hardware on very varied OSes really works, hardware that is really difficult to drive directly in assembly (i.e., hiding the complexity of the hardware) ???

So, for the next versions of OpenGL, I suggest an API that works exclusively in hexadecimal and assembler (so without C/C++ code support, of course), since anything higher-level is apparently too hard to implement on the new hardware :slight_smile:

And I note that this has unfortunately already begun with the use of shaders :slight_smile:

@+
Yannoo

Please, why was OpenGL invented: to simplify the life of the developers of OpenGL implementations, or “only” to simplify the life of the final user, who gets a very simple API and doesn’t have to know how all the very varied hardware on very varied OSes really works, hardware that is really difficult to drive directly in assembly (i.e., hiding the complexity of the hardware) ???

Neither. It was made to be a hardware abstraction API that allows user code to work without changes on multiple hardware platforms.

For example, take Intel’s current line of embedded graphics cards. Their OpenGL implementations are terrible. Awful. Their Direct3D implementations are decent enough to run games on. But you can’t write any decent modern OpenGL code and expect it to run on Intel’s embedded graphics cards.

What good then is a cross-platform API if you can’t use it to write code for one of the major platforms?

So do you agree with me that a very old OpenGL program (for example, some very basic OpenGL 1.0 code) must necessarily work on all new OpenGL implementations, yes ?

If not, this new OpenGL implementation isn’t a true OpenGL implementation (because it can’t pass the OpenGL tests), true ?

I have real trouble understanding why a decent (perhaps software-only) OpenGL implementation can’t be done on the PocketPC platform, for example, when a Pocket PC is certainly faster than the old 386 on which I was already able to do a lot of basic things with OpenGL …

And I don’t understand why we must necessarily align with the worst standard (one that isn’t a standard for any hardware/OS other than its own …).

Is it because it’s the major platform that it has all the rights ???
And are you really certain that it will remain the major platform a few years from now ?

So why doesn’t the major platform do as all the others do, and really help to use and upgrade a standardized API that has existed for a very long time and works on almost all the newest hardware and OSes born since the first OpenGL implementation from SGI in 1992, an API based on IRIS GL ???

In a few words, I don’t really understand (that’s false; I think I do understand, but I find it very stupid, so I prefer to tell myself that I’m stupid and understand nothing …) why a “micro” OS for personal computers can now claim superiority and “destroy” the hard work done by many companies to build a professional API that works on professional computers, just to replace it with something originally made only for video games, that’s all …

@+
Yannoo

So do you agree with me that a very old OpenGL program (for example, some very basic OpenGL 1.0 code) must necessarily work on all new OpenGL implementations, yes ?

I was a supporter of Longs Peak. So I would have been very happy with a clean, backwards incompatible break between old OpenGL and new OpenGL. So no, I’m not OK with drivers having to support 15+ year old software.

If not, this new OpenGL implementation isn’t a true OpenGL implementation (because it can’t pass the OpenGL tests), true ?

The OpenGL 3.2 specification does not require the support of old software. It has fully removed most of the ancient API cruft that makes writing drivers difficult. Practical realities make it hard for driver developers to break the link between 3.2 core and 3.2 compatibility, but perhaps this will happen in the future as the specification evolves.
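
If you want to see the split for yourself: freeglut (not the original GLUT) already lets you request a core context. A minimal sketch, assuming freeglut 2.6 or later:

#include <GL/freeglut.h>

glutInit(&argc, argv);
glutInitContextVersion(3, 2);                 /* ask for OpenGL 3.2 ... */
glutInitContextProfile(GLUT_CORE_PROFILE);    /* ... without the compatibility cruft */
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutCreateWindow("core profile window");

On such a context, the immediate-mode triangle above simply stops working; that is the clean break in action.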

And there are no “OpenGL tests.” I wish there were.

I have real trouble understanding why a decent (perhaps software-only) OpenGL implementation can’t be done on the PocketPC platform, for example

Why do you use OpenGL? To draw semi-decent pictures at 1 frame per second? Or to make something at reasonable framerates?

If you want performance, you need dedicated hardware. A software only implementation is of no value to anyone wanting to do anything of significance on a PocketPC.

And I don’t understand why we must necessarily align with the worst standard (one that isn’t a standard for any hardware/OS other than its own …).

What is the “worst standard” you are talking about?

So why doesn’t the major platform do as all the others do, and really help to use and upgrade a standardized API that has existed for a very long time and works on almost all the newest hardware and OSes born since the first OpenGL implementation from SGI in 1992, an API based on IRIS GL ???

Are you asking why Microsoft isn’t supporting OpenGL? Quite frankly, I don’t blame them.

Historically, the OpenGL Architecture Review Board has been terrible at exposing new functionality. OpenGL had a minor advantage in the GeForce 1-2 days. But when programmability came around, the ARB had nothing.

It wasn’t like hardware vendors were hiding the fact that programmable hardware was coming. Everyone knew it was happening. But the ARB took a good year to come out with ARB_vertex_program, a year after Direct3D had vertex programmability. If you were a game developer, why would you make an OpenGL game that couldn’t even use this powerful new hardware, rather than a Direct3D game that could?

OpenGL has been behind the curve ever since. ARB_vertex_buffer_object came out long after D3D had solved that problem. ARB_framebuffer_object came out long after equivalent D3D functionality. GLSL appeared 2 years after D3D had shader functionality.

And here’s the thing: most of the OpenGL solutions have these really annoying stupidities with them. Buffer objects have their ill-defined usage hints. Framebuffer objects have this stupid hack where a driver can refuse an FBO for no particular reason. The myriad of problems centered around GLSL would take me far too long to explain.
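
To illustrate the FBO hack: after attaching perfectly legal images, you still have to ask the driver whether it feels like rendering to them, and it may answer GL_FRAMEBUFFER_UNSUPPORTED with no way to query why. A sketch, where tex is assumed to be an already-allocated RGBA texture:

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    /* GL_FRAMEBUFFER_UNSUPPORTED is the "no particular reason" case:
       every attachment is individually valid, but the driver declines
       the combination anyway */
}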

That’s not to say that D3D doesn’t have API annoyances. But most of OpenGL’s problems are born out of things that sounded like a good idea at the time, yet now can never be changed. Because breaking backwards compatibility is a big no-no in OpenGL land. Whereas D3D can do it whenever it is appropriate. This allows them the freedom to correct mistakes.

Game developers need a graphics API that can evolve, one that can improve with the times. An API that, when new hardware comes out, will immediately provide them with access to it. OpenGL couldn’t do that. Not cross-platform; NVIDIA has always done a good job with coming up with strong extensions in a timely fashion. But they’re not cross-platform.

And you may say, “Who cares about what game developers want?” Well, game developers create the vast majority of software for graphics. They drive most graphics software. They force Max/Maya/etc to use whatever API that they use. And since that API is getting good performance, CAD houses start looking into using it.

But OK, it’s true that I’m very stupid, and that I’m a very bad guy who can only criticize an API that has worked very well for a long time on all hardware and all OSes, and will always remain, for a long time, a very, very wonderful API :slight_smile:

So long live the OpenGL API, and all my apologies :slight_smile:

@+
Yannoo

Alphonse,

Just for your information (it took two seconds to find this on the net):

http://blogs.intel.com/research/2008/02/real_time_raytracing_in_your_p.php

@+
Yannoo

I wish the troll would use google-translate :slight_smile:

See, the ARB stumbled a few years ago. The industry couldn’t wait. Khronos took over desktop GL and announced promises. Mobile devices with 3D acceleration were a niche. WinCE-based devices come with DX, but too few devices have gpus - and too few users have those devices, so PocketPC IHVs don’t bother providing another API. ARM cpus are way too horrible for software rasterization, mobile games stay 2D, so why bother. Nokia/Symbian/Google/Apple/etc stick with GLES. The iPhone has a dedicated gpu, a fixed screen size and non-interpreted code, so GLES there is extremely successful. Other devices have botched-up GLES implementations, random screen sizes, random lack of a gpu, random input -> GLES there is not successful. The PSP and DS are fixed hardware and have their own low-level APIs.
Khronos stumbles with GL3.0; Microsoft stumbles with Vista and DX10 exclusivity. Gamers move to consoles: x360 is cheap and has games, uses modified DX; DX9 on PC prospers with console ports, and more PC developers move to consoles. The PS3 uses its own low-level API and Cg; the Wii uses its own low-level Gamecube API. Other users move to Mac and Linux (desktop GL), but they are not graphics users anyway.
Intel sees that the high-performance computing threshold has been passed for 99% of users and steps into gpu-land by reintroducing their ancient silicon (previously known as “the first graphics decelerator”) as mobo-integrated; bribing, overpricing and threatening around to gain a huge share by suckering customers (the “Vista certified” scandal). Shop around and you generally cannot find a mobo without an integrated GMA. Customers find out they can’t use 3D mode at all, via either DX or GL. “Vista’s fault”. Intel starts to very slowly fix their horrid drivers to support what the majority of customers use: several Steam games, which use DX. Linux enthusiasts write rock-solid Intel GMA OpenGL drivers, and so does Apple.
Meanwhile AMD has enough troubles for a few years, and thus doesn’t invest heavily in desktop GL; their DX troubles are already enough to handle. Then AMD bounces back and starts tackling GL3.x seriously. nVidia insists on cushioning the transition from GL1.x to GL2.x to GL3.2, plus keeping an edge in driver reliability, and thus pushes for compatibility profiles. The GL3.2 spec looks really great.
Vista becomes stable. Win7 gets released with HW GDI acceleration returned to the core, a lower RAM footprint and improved filesystem performance - joy for everyone. The x360 starts to lose some ground, with no improvement on the horizon. Many PC users still stick with WinXP, but it’s a matter of time till they upgrade. Desktop GL would soon have little relevance, if it weren’t for the extensions and some cpu-relieving features in this progressively cpu-bound world. GLES starts to gain serious ground after the iPhone, the G1 (smirk) and the iPhone 3GS start feeding 3D-vis developers constantly. Around the iPhone, OSX lures developers.

DX provides the shader compiler.
GL requires every driver to ship its own shader compiler, but allows extensions that let developers optimize paths.

DX11 provides multithreading
GL already multithreads perfectly on nV cards, transparently to the developer.

DX provides precompiled shader binaries.
GL might get this soon.

So, is this enough history for you?
Now look up what DX9/10/10.1/11 are, what a gpu is, what an API is, how different graphics APIs try to aid visualization, and what they expose from the HW. You may then understand why GL3.2 and DXx are the way they are, and why no one (who can be bothered to study/read) whines about shaders, programmable functionality and learning curves. Or about the lack of features that you can easily add/implement yourself with programmable functionality snicker.

Overall, GL is becoming a good, valid choice. Laptops with older Intel GMAs will remain with their problem, and remain used for the sole purpose they were bought for: email. “The new OS is buggy” can no longer hide the inaptitude of drivers and HW, so natural selection will have its say.

Too much troll feed, sorry. I vote for clean-up after he snacks.

But OK, it’s true that I’m very stupid, and that I’m a very bad guy who can only criticize an API that has worked very well for a long time on all hardware and all OSes, and will always remain, for a long time, a very, very wonderful API

None of what you said about OpenGL is true. It does not “work very well” on “all hardware” or “all OSes”. A lot of hardware and OSes use OpenGL ES rather than regular OpenGL. That’s because ES is not backwards compatible with regular GL; it was designed specifically for those platforms and removed a lot of the cruft in OpenGL.

And OpenGL hasn’t been a “very, very wonderful API” since programmable hardware came out.

Your rose-colored view of OpenGL is not reality. That is why your positions make no sense.

Gamers move to consoles: x360 is cheap and has games

Small point: you’re a half-decade late for that move. Gamers moved to consoles long before the 360. The PC gaming industry hasn’t been driving the overall gaming industry for a good decade or so.

And the PS3 doesn’t exist ???

Are you really certain that this console uses a D3D driver ???

I have never once heard of “Folding at Home” running on an Xbox 360 :slight_smile:

@+
Yannoo

And how can you watch a Blu-ray on an Xbox 360, please ?

@+
Yannoo

I prefer to stop the discussion because you are really arguing in very bad faith …

@+
Yannoo

>_< I admit I kind of expected a more coherent reply.
I have two PS3s, a PS2, a PSP, a PSX … and I have written homebrew-style code on each. How you got the idea that I’m an x360 fanboy, I don’t know. No one on the PS3 uses the modified GLES2 lib that Sony provides, so it’s not relevant to this hijacked thread (which, btw, is turning into an IRC chat; take a breath and reply in a solid, coherent manner, please).

@Reinheart
s/Gamers move to consoles/Most of the remaining PC gamers move to consoles/. There, corrected :slight_smile: