How to interrogate a 3D card driver via OpenGL?

Zen, I totally agree with you. (Although I don’t know a lot of games with understandable settings, even for an OpenGL programmer. Example: IL2’s “Render terrain using triangle” option, 4 simple words but totally meaningless :P)

Quake’s way is a good way. It adds more flexibility to the control of the engine.
Agreed, a 3D engine should provide an easy, understandable and flexible way to configure it.

But it wasn’t the point of my previous post.
You shouldn’t rely on user settings to solve problems related to hardware features.
Quake is able to find out how to run on nearly any system, without the help of users configuring it from A to Z, as long as you meet the system requirements.

IMO the engine shouldn’t stop the user from increasing the image quality, but like with the CPUID example, I’m talking about raising the minimum requirement instead: it shouldn’t bother supporting “nearly S3 ViRGE speed” 3D hardware when it’s actually meant to run on a GeForce <put very high number here>.
In this topic, if for example you take a GeForce 256 as the min. sys. req., then you already know all the features it supports under OpenGL 1.3, with no need to worry about whether they exist or not. (Extensions are another matter, but core features of the API are unlikely to disappear.)
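
A rough sketch of that minimum-requirement check: at startup the engine would pass the string from glGetString(GL_VERSION) (e.g. “1.3.1 NVIDIA 28.32”) to a helper like the one below and refuse to run below its baseline. The helper name is invented for this example; only the string parsing is shown, since the real string needs a live GL context.

```c
#include <assert.h>
#include <stdio.h>

/* Parse the "<major>.<minor>" prefix that GL_VERSION is guaranteed
   to start with, and compare against the engine's minimum. */
static int gl_version_at_least(const char *version, int major, int minor)
{
    int v_major = 0, v_minor = 0;
    if (sscanf(version, "%d.%d", &v_major, &v_minor) != 2)
        return 0;   /* malformed string: treat as unsupported */
    return v_major > major || (v_major == major && v_minor >= minor);
}
```

With a GeForce 256-class baseline the call would be something like `gl_version_at_least(version_string, 1, 3)`, and everything in the 1.3 core can then be used unconditionally.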

User settings should allow control of “not so common” and advanced features (example: in the latest OpenGL games you can choose between vertex shaders and normal T&L).
But giving choices like “Enable/Disable Textures”, “8-bit color”, etc. is a bit extreme and not needed. (Apart from being funny for the 0.01% of people wondering how ugly the game could look with those settings.)
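
For the vertex-shader-vs-T&L kind of choice, the engine still has to know whether the feature is there at all, which on period hardware means searching the space-separated list from glGetString(GL_EXTENSIONS) for something like GL_ARB_vertex_program. A sketch of a whole-word search (a plain strstr() would wrongly match a name that is a prefix of a longer one):

```c
#include <assert.h>
#include <string.h>

/* Return 1 if 'name' appears as a complete, space-delimited entry in
   the extension string, 0 otherwise. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == extensions || p[-1] == ' ');
        int ends_ok   = (p[len] == ' ' || p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;   /* partial match: keep scanning */
    }
    return 0;
}
```

The engine can then default to vertex shaders when the extension is present and silently fall back to fixed-function T&L otherwise, leaving the user setting as an override rather than a requirement.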

[This message has been edited by GPSnoopy (edited 03-19-2002).]

I’m currently using a simple interpreter that’s integrated with the console. You can export any variable or function in your source code to its symbol table (a pointer to it, actually), so you can use it to change the value of any variable and call any function (with argument passing) in the program (or in shared object files, like DLLs in Windows). It’s been very helpful so far. I can control all aspects of the engine just by exporting the variables I pass to OpenGL (more or less). I haven’t thought much about it because I’m not a professional, so supporting many platforms is not an important goal yet, but you could easily write a simple script for each card which would set the correct values without any need for benchmarking etc. This way, adding support for (or rather optimizing for) a specific card should be a matter of minutes.
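
The export mechanism described above can be sketched as a table of name-to-pointer entries; everything here (names, the float-only restriction, the fixed table size) is a simplification for illustration, not the actual implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* One exported engine variable: the console stores a pointer straight
   into the engine's own data, so writes take effect immediately. */
struct cvar {
    const char *name;
    float      *value;
};

#define MAX_CVARS 64
static struct cvar cvar_table[MAX_CVARS];
static int         cvar_count;

static void cvar_export(const char *name, float *value)
{
    if (cvar_count < MAX_CVARS) {
        cvar_table[cvar_count].name  = name;
        cvar_table[cvar_count].value = value;
        cvar_count++;
    }
}

static float *cvar_find(const char *name)
{
    for (int i = 0; i < cvar_count; i++)
        if (strcmp(cvar_table[i].name, name) == 0)
            return cvar_table[i].value;
    return NULL;   /* unknown variable */
}
```

A console command like “set r_gamma=1.5” would then just look up the pointer and write through it; a per-card init script is nothing more than a batch of such commands.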
I agree on all your other points.

BTW: this way, even adding support for “enabling/disabling textures” should be a matter of minutes, so that 0.01% can have their fun. In fact that’s a cool idea. I’m going to implement it. Thanks.

[This message has been edited by zen (edited 03-19-2002).]

I also do the same with some “key” variables in the engine, very helpful indeed!

Originally posted by zen:

BTW: this way, even adding support for “enabling/disabling textures” should be a matter of minutes, so that 0.01% can have their fun. In fact that’s a cool idea. I’m going to implement it. Thanks.

LOL! You’re crazy

[This message has been edited by GPSnoopy (edited 03-19-2002).]

Crazy? Why? Say your engine uses a lot of textures which take time to load, create mipmaps, light, whatever. Now you want to fix a bug or add functionality to, say, your ROAM optimizer, so you’re only interested in the triangles and you don’t need the textures. Just add:
set r_wire=1;
set r_textures=0;/not implemented yet/
to the init script and you don’t have to wait for the textures to load. This is going to make bug-hunting a little easier on your nerves.
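
Parsing a line of that init script is trivial; as a sketch (the function name is invented, and the syntax assumed is exactly the “set name=value;” form shown above):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Extract the variable name and integer value from one
   "set name=value;" script line. Returns 1 on success, 0 otherwise. */
static int script_parse_set(const char *line, char name[64], int *value)
{
    return sscanf(line, " set %63[^=]=%d", name, value) == 2;
}
```

The parsed name can then be looked up in the console’s symbol table and the value written through the exported pointer.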

[edit]
I implemented it. It took about 5 minutes. Actually I just disable the textures for the terrain, which take a long time to load/illuminate. No need to fiddle with glEnable/glDisable calls, as some textures still need to be loaded (font textures for the console etc.). Hmmm, maybe you’re right and I am crazy. After all it is 5:16 AM!

[This message has been edited by zen (edited 03-19-2002).]

Originally posted by MPech:
I’m looking for a way to interrogate my 3D card driver about its hardware functionalities, like T&L or mipmap for
functions like

Hi

I thought a while ago about directly accessing the 3D hardware, using for example low-level assembly language, in a similar way to how the driver does it. I think it would be nice to do, but for real-life applications it wouldn’t be worth implementing because there are so many 3D graphics chips and configurations. But perhaps if you restrict yourself to e.g. NVIDIA GeForce* cards it could be done.

I searched the web for some hints on how to do it, but the results weren’t very helpful.

But I think it should be possible, because RivaTuner shows quite detailed low-level chip information about NVIDIA GPU-based cards.

Perhaps someone has some hints on where to look for further information about accessing graphics chips the way a driver would.

Bye
ScottManDeath

[This message has been edited by ScottManDeath (edited 03-20-2002).]