Standardisation across different GFX cards

I've got myself a Radeon 9800 Pro to replace my old GeForce 5200. In my main development project (a space combat/exploration sim) I draw the sun and lens flares with 2D textured quads and blending enabled. That worked fine on the old card, but on the ATI I can see the part of the quad that should be transparent, i.e. the circle I want to see plus a dim square around it. I'll post code if needed, but my main concern is that I thought OpenGL was more or less standardised across different cards. I realise they are different manufacturers with different drivers etc., but I'm not using any fancy extensions or anything, just basic blending and textured quads, and there's a big difference in what each card produces. Please help, I've lost a little faith in OpenGL!!!

You got your answer at gamedev. :slight_smile: But on another note, NVIDIA and ATI are free to implement OpenGL features however they like, and this sometimes causes problems. E.g. GL_RGBA on ATI isn't 8 bits per channel but 4 if I remember right, whereas on NVIDIA it defaults to 8. I had to explicitly set GL_RGBA8 to get my ATI card to look as good…
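
If you want to see what the driver actually picked, you can ask it. Here is a minimal sketch (not from any of the posts in this thread): it assumes a texture object is already bound to GL_TEXTURE_2D, and the 64x64 size, the `pixels` buffer and the helper name are made up for illustration.

```c
#include <GL/gl.h>
#include <stdio.h>

/* Hypothetical helper: upload with a generic internalFormat, then query
 * the per-channel precision the driver actually chose. */
void check_texture_precision(const unsigned char *pixels)
{
    GLint r, g, b, a;

    /* Generic GL_RGBA: the driver is free to pick 4 or 8 bits per channel. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &r);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &b);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a);

    printf("driver chose R%d G%d B%d A%d bits per channel\n", r, g, b, a);
}
```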

LOL, you got me there!! I posted here as well because I was annoyed at my perceived lack of standardisation across GL implementations/drivers and thought I'd take it to the source at opengl.org!! I'm going to post code and a picture on GameDev so people have a better idea of what I'm on about. Anyone else here following this thread can check out my post in the GameDev OpenGL forums.

Originally posted by Mars_9999

E.g. GL_RGBA on ATI isn't 8 bits per channel but 4 if I remember right, whereas on NVIDIA it defaults to 8

Whatever gave you that impression! It would surely be 8-bpp if the type parameter to glTexImage*(…) is GL_UNSIGNED_BYTE.

@comedypedro: Here are a couple of solutions for you to try (see the sketch after this list):

  1. Disable alpha testing, if it's enabled.
  2. Try setting the wrapping mode to GL_CLAMP_TO_EDGE.
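
A minimal sketch of those two suggestions, plus a typical blending setup for a flare quad. The additive blend func is an assumption on my part since the thread never shows the original blending code, and the helper name is made up; it assumes the flare texture is already bound.

```c
#include <GL/gl.h>
#ifndef GL_CLAMP_TO_EDGE
#define GL_CLAMP_TO_EDGE 0x812F   /* not in the stock GL 1.1 headers on Windows */
#endif

/* Hypothetical helper: state to set before drawing the flare quad. */
static void setup_flare_state(void)
{
    glDisable(GL_ALPHA_TEST);                                             /* suggestion 1 */

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  /* suggestion 2: */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);  /* no border bleeding */

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);   /* additive blend, common for sun/flare sprites */
}
```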

It would surely be 8-bpp if the type parameter to glTexImage*(…) is GL_UNSIGNED_BYTE.
Wrong, he means the internalFormat parameter.
GL_RGBA is from OpenGL 1.0 times and is defined as 4, meaning four components, but it says nothing about the precision.
That is why graphics drivers are free to use whatever they like.
That's why there is a control panel entry in NVIDIA's drivers which lets you select whether such textures should be downloaded in the monitor's current color depth, always true color or always high color.
You want to use the newer sized enums like GL_RGBA8 for the internalFormat because of this stone-age define.
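
To make the difference concrete, here is a sketch of the two calls being discussed. The helper name and the `w`, `h`, `pixels` parameters stand in for your own image; both uploads are shown side by side only for comparison, in practice you would keep the GL_RGBA8 one.

```c
#include <GL/gl.h>

/* Hypothetical helper: same image data, two internalFormat choices. */
static void upload_flare_texture(int w, int h, const unsigned char *pixels)
{
    /* Generic GL_RGBA (or the legacy "4"): four components, precision left
     * to the driver, so it may end up as 4 bits per channel on some setups. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Sized GL_RGBA8: explicitly asks for 8 bits per channel. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```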

I always use GL_RGB8 or GL_RGBA8, otherwise OpenGL might give me a crappy format,
which will turn my smooth gradient textures into banded lines.

Originally posted by Relic

Wrong, he means the internalFormat parameter.
GL_RGBA is from OpenGL 1.0 times and is defined as 4, meaning four components, but it says nothing about the precision.
That is why graphics drivers are free to use whatever they like.
That's why there is a control panel entry in NVIDIA's drivers which lets you select whether such textures should be downloaded in the monitor's current color depth, always true color or always high color.
You want to use the newer sized enums like GL_RGBA8 for the internalFormat because of this stone-age define.

Hmmmm! Sounds weird to me. An internalFormat of GL_RGBA with type GL_UNSIGNED_BYTE should give me an 8-bpp RGBA texture.
So, if the hardware decides to use 4 bpp, then how will it interpret the uploaded texture data (in TexImage(…))? As 4 bpp or 8 bpp?
Forgive my ignorance (if any!), but I have never encountered such a problem on any hardware that I have done development on. And I am not just talking about the internalFormat alone; sure, that just tells the number of components, but that's where the type parameter comes in and tells the precision.

This issue just won’t DIE!

(link from 2000)
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=000378

The third parameter is the internal format. If you use a generic GL_RGBA you are telling the driver that you just want 4 components and don't care about precision (so it is free to make up its own mind).

but that’s where the type parameter comes in and tells the precision.
And this is where you make a wrong assumption.
The type parameter only specifies the format of the user's input data and is used to decide which conversion needs to be applied when generating the texture in the internalFormat. (In OpenGL 1.0 the internalFormat parameter was named components.)
You don't want an internalFormat of GL_RGBA to generate a floating-point texture just because you uploaded user data of type GL_FLOAT, do you?
Get rid of this old cruft and use “modern” internalFormat enums. They already existed five years ago, as seen in the other thread.
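
A short sketch of that distinction. The helper name and parameters are made up; the point is that `type` (GL_FLOAT here) describes the data you hand over, while the internalFormat (GL_RGBA8) describes how the driver stores it, converting during the upload.

```c
#include <GL/gl.h>

/* Hypothetical example: float client data stored as a fixed-point
 * 8-bit-per-channel texture, because that is what the internalFormat asks for. */
static void upload_float_gradient(int w, int h, const float *data)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_FLOAT, data);
}
```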

Okay okay, I get it. :slight_smile:

Thanks for all the replies, folks. If anyone's interested or having the same problem, check out my thread on GameDev; it's got links to pictures that show the problem and the relevant code.

http://www.gamedev.net/community/forums/topic.asp?topic_id=362247

My problem now is getting my project to compile in MSVC 2005; I stupidly uninstalled trusty ole VC6…