Stupid Noob ...

Hi all,

As a newbie to the OpenGL implementation on the Win32 platform, I’d just like to ask a few things that have been mulling around in my mind. I’ve read up to chapter 20 of the NeHe tuts, and up to page 150 of the Red Book, damn it’s a long book! Anyway, I’m a good C and assembler programmer (not too crazy about OOP! I’m old school!), but damn new to CG. Topics like matrices, vectors, vertices etc. still stir fear in me (long time since I did algebra!). Anyway, to my ‘dumb ass noob’ questions.

1: OpenGL is implemented as version 1.1 on Windows, right? But an updated set of drivers for my nVidia FX card provides ‘hardware’ acceleration for this version of the API? Yet the OpenGL32.dll file remains the same? So what exactly do the drivers do? Does the OpenGL version remain 1.1?

2: Related to the above, how do I get support for OpenGL version 1.5 in Windows?

3: There seem to be some surprising ‘limitations’ to OpenGL! For example, there can be only 8 lights. I checked my XP OpenGL with the latest nVidia drivers (56.72) using glGetIntegerv(GL_MAX_LIGHTS, …) and it returned 8. How is it possible to have more than 8 lights? Surely a good game, for example, may need more! Does OpenGL 1.5 support more than 8 lights?
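The check I ran was roughly the following (glGetIntegerv is the actual entry point; this is just a sketch, assuming a current GL context):

```c
#include <GL/gl.h>
#include <stdio.h>

/* Must be called while an OpenGL rendering context is current. */
void print_max_lights(void)
{
    GLint max_lights = 0;
    glGetIntegerv(GL_MAX_LIGHTS, &max_lights); /* spec guarantees at least 8 */
    printf("GL_MAX_LIGHTS = %d\n", max_lights);
}
```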

4: What are some of the pros and cons of OpenGL vs DirectX? Surely, by this time, DirectX will have matured beyond the limited OpenGL support on the Win32 platform? Take into consideration that I’m PRO OpenGL, since it’s portable etc. However, I can’t shake the feeling that DirectX might be faster? Note: I’m not in the graphics industry; I’m merely studying graphics as a hobby. I’m a database applications developer!

5: Since I’m a high performance application developer, implementing much of the functionality in optimized assembler (when C/C++ stops delivering), it surprises me that my simple OpenGL scene, with 7 lights (probably why), 6 small spheres (using display lists, 128 slices) and 1 central big sphere (also a display list, 256 slices), starts running slowly. By slowly I mean about 36 FPS. To clarify: I know I can’t expect miracles! And I know there are many considerations! BUT, I have a game at home where they modeled a car in 45000 polygons. Surely my small rotating sphere scene uses less than that? What I’m getting at is, I have a fairly high spec machine: P4 2.8 HT, 512MB, with a GeForce FX 128MB. I develop high performance code, so when I take my code to a PII, it runs well! Surely my scene isn’t pushing the boundaries of performance on my machine? Is there some higher performance OpenGL implementation I can use? I installed Mesa for Win32 and performance dropped to about 2 FPS! Is there anything I can do? Would implementing my own sphere drawing algorithms make a difference, or is that too complicated? Too time consuming? Or are there more interrelations that I’m not aware of?

Anyway, thanks in advance for reading my gripes!
That’s all for now!
Regards

1 & 2: Read this near the bottom and this . It’s a pain that Microsoft limits you to OpenGL 1.1, but it’s something devs have had to live with for a long time. The dll’s are updated, but the dev libs aren’t.

3: You can make unlimited lights, you just have to do lighting yourself.
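Roughly like this (a minimal sketch, every name here made up for illustration): compute per-vertex diffuse lighting on the CPU for as many lights as you want, then hand GL the finished colors. The GPU still rasterizes; GLSL, mentioned further down the thread, is the fully hardware-accelerated version of the same idea.

```c
#include <GL/gl.h>
#include <math.h>

typedef struct { float pos[3]; float color[3]; } Light;

/* Hypothetical helper: accumulate Lambert diffuse from any number of
   lights for one vertex, then submit the result as a plain color.
   Call between glBegin/glEnd with GL_LIGHTING disabled, so the
   GL_MAX_LIGHTS limit never applies. */
void lit_vertex(const float v[3], const float n[3],
                const Light *lights, int num_lights)
{
    float r = 0.0f, g = 0.0f, b = 0.0f;
    int i;
    for (i = 0; i < num_lights; ++i) {
        /* normalized direction from vertex to light */
        float lx = lights[i].pos[0] - v[0];
        float ly = lights[i].pos[1] - v[1];
        float lz = lights[i].pos[2] - v[2];
        float len = (float)sqrt(lx*lx + ly*ly + lz*lz);
        float ndotl;
        if (len > 0.0f) { lx /= len; ly /= len; lz /= len; }
        ndotl = n[0]*lx + n[1]*ly + n[2]*lz;   /* Lambert term */
        if (ndotl > 0.0f) {
            r += lights[i].color[0] * ndotl;
            g += lights[i].color[1] * ndotl;
            b += lights[i].color[2] * ndotl;
        }
    }
    glColor3f(r < 1.0f ? r : 1.0f, g < 1.0f ? g : 1.0f, b < 1.0f ? b : 1.0f);
    glVertex3fv(v);
}
```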

4: Don’t even start this argument! It’s like Windows vs Linux. Obviously here you’ll get a biased view.

5: We’d need more details. There are lots of ways to speed up your app. One point: Mesa is software rendering only, so your GeForce isn’t being used.

Hi ffish,

The DLLs are updated, but the dev libs aren’t.
If the DLLs are updated, I have tools to create libs from DLLs! What exactly do you mean? I’ve seen references to guys having to use some strange wglGetProcAddress call to get function pointers; how does this differ from the standard GetProcAddress? Sorry, I’m not yet up to speed on OpenGL specifics. Give me time!

You can make unlimited lights, you just have to do lighting yourself.
Geez! Can you elaborate? If I have to do the lighting myself, that’s gonna mean software rendering, and a crap load of computation? Seems like an incredible amount of work!!! Does that mean, for example, that if I want light to fall on a texture, I’d have to load it into memory, calculate the lighting, and then add the texture to the polygon? All because OpenGL doesn’t support more than 8 lights? Seems like a crap limitation? How many light sources does DirectX support? Seems like a scene with more than 8 lights is gonna go to software rendering anyway! Like Mesa!?!?

Has someone written a new replacement OpenGL32.dll file that implements the 1.5 spec? Is that Mesa? Mesa is written in C/C++ with only a handful of MMX optimizations, and on top of that, development seems to have halted on the Win32 version!

Anyway, thanx ffish!

Also, have profiling efforts been run on the various commands? For instance, OpenGL has 33 functions to change color. From my assembler experience (without looking at source code etc.), I would say the ‘fastest’ functions to use would be the double-precision floating point, pointer functions? Since color calculations, I assume, would go through the FPU, and doubles are ‘faster’ to load into the FPU than floats. I know it’s only a marginal gain, but change the color of 45000 polygons and you shave/save a few clocks, especially on older PCs. This goes for the vertex commands as well; is using doubles and pointers a recommendation made here?

NB: These comments are made without regard to the double memory used by doubles (8 bytes) instead of floats (4 bytes). They are made in the context of ‘profiling’ efforts and ‘fastest functions’, and in the context of a standard approach. Is the norm here for everyone to use floats?

Regards!

Speaking about performance: almost everything you mentioned is not relevant in the hardware-accelerated 3D world.
Let me qualify that blunt statement.
To make it quick about the ‘33’ ways to change colors: use whatever you want. However, it will be processed by the hardware as 4 unsigned chars (GLubyte).
For coordinates, plain floats are mostly enough (GLfloat).
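To illustrate (a throwaway immediate-mode sketch, not a recommendation of glBegin for speed): whichever glColor* variant you call, the driver ends up with the same 4 bytes per color, so doubles buy you nothing.

```c
#include <GL/gl.h>

/* All three color calls below end up as the same 4 x GLubyte color
   in hardware; pick the variant that matches your data. GLfloat
   coordinates are plenty of precision for vertices. */
void submit_triangle(void)
{
    glBegin(GL_TRIANGLES);
        glColor4ub(255, 0, 0, 255);   /* already in the hardware format */
        glVertex3f(0.0f, 0.0f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f);  /* converted to bytes anyway      */
        glVertex3f(1.0f, 0.0f, 0.0f);
        glColor3d(0.0, 0.0, 1.0);     /* doubles gain nothing here      */
        glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();
}
```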

You need to optimize where it is actually needed, and that means understanding how rendering is performed by the hardware. Read carefully the part about the OpenGL pipeline in the Red Book.

Your slow scene: do you know how many triangles you have???
(128*128*6 + 256*256) * 2 = 327680 triangles!!!
That’s roughly seven times your game car.
About lighting: very few commercial games use GL lighting, and almost never more than one or two lights at a time. The 8 lights limit comes from the hardware, not OpenGL.

About that OpenGL vs. Direct3D: once upon a time (1996) Microsoft wanted to appear serious to the industry, so it released Windows NT 4 (their most Unix-like OS at that time) with OpenGL support (the ‘famous’ GL screensavers). Then it carried on with that WinG crap through several DirectX versions, and that only started to look good with DX8.
The only areas where I have heard of DX8/DX9 superiority over OpenGL would be easier render-to-texture (but not faster) and a single low-level shading language for both ATI and NVidia cards.
However, since 1996, OpenGL has still had a much smaller overhead for function calls. MS will try to fix that with Longhorn. Well, about 10 years; it’s about time.
So, about the old opengl32.dll: it is normal under Windows, as Microsoft owns it. It serves only as a wrapper when proper GL drivers are present. Those are implemented by the card makers, i.e. in nvogl.dll for NVidia. So for all the missing functions (above GL 1.1) you have to ask for the function pointer with wglGetProcAddress, or use an extension loader: http://glew.sourceforge.net/ .
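Concretely, loading a post-1.1 entry point on Windows looks something like this; glBindBufferARB (from the vertex buffer object extension promoted in GL 1.5) is just an example, and the typedef normally comes from glext.h:

```c
#include <windows.h>
#include <GL/gl.h>

/* Function pointer type for a post-1.1 entry point; normally
   provided by glext.h, spelled out here for clarity. */
typedef void (APIENTRY *PFNGLBINDBUFFERARBPROC)(GLenum target, GLuint buffer);

PFNGLBINDBUFFERARBPROC glBindBufferARB = NULL;

/* Must be called while a GL rendering context is current;
   returns 1 on success, 0 if the driver lacks the function. */
int load_entry_points(void)
{
    glBindBufferARB =
        (PFNGLBINDBUFFERARBPROC)wglGetProcAddress("glBindBufferARB");
    return glBindBufferARB != NULL;
}
```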

That is less useful nowadays (as everybody prefers compatibility), but OpenGL lets a graphics card manufacturer expose its brand-new technology as soon as it wants to, through proprietary OpenGL extensions.

By the way, which one is your GeForce FX? Not one of those 5200s, right?

Let me tell you that there are 3 frequent bottlenecks:

  • CPU to GPU communication: when you send dynamic geometry, textures, etc. Not the case with display lists, of course.
  • T&L: depends on the number of triangles in the scene, the number of hardware lights and the lighting model (finite or infinite viewer, specular, etc). That is probably the bottleneck for your test scene. Easy to check: change the rendering window size and see if the framerate changes. If not, it is T&L limited.
  • fillrate: depends on the number of pixels rendered, the number of textures for each pixel, and the type of texture filtering (aniso > trilinear > bilinear). So when you have a lot of overdraw, try to render mostly front to back, to gain from early Z-test optimisations. To check if it’s the bottleneck: doubling the window size in both X and Y will divide the framerate by 4. A quick way to measure the framerate for these checks is sketched below.
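Here is a crude frame counter for those checks, a minimal sketch assuming a Win32 app that calls it once per rendered frame:

```c
#include <windows.h>
#include <stdio.h>

/* Call once per frame; prints frames-per-second roughly once a
   second. Crude, but enough to compare framerates across window
   sizes for the bottleneck tests above. */
void count_frame(void)
{
    static DWORD last = 0;
    static int frames = 0;
    DWORD now = GetTickCount();

    if (last == 0)
        last = now;
    ++frames;
    if (now - last >= 1000) {
        printf("%.1f FPS\n", frames * 1000.0 / (now - last));
        frames = 0;
        last = now;
    }
}
```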

I wish you good luck with your first steps in OpenGL. Don’t hesitate to post your questions here, but maybe try to be less aggressive.

Re lighting, it’s not hard to do it yourself, hardware accelerated, especially with e.g. GLSL. Like ZbuffeR said, most people do their own lighting. A resource that’ll keep you busy for a long time is NVIDIA’s DevRel site. Check out all the PDFs and code available there.

Re OpenGL 1.5, I thought the links I posted would’ve explained it, but if not: the standard mechanism is to use an updated header file with all the relevant OpenGL extensions etc., then get the address of the extension functions you want with wglGetProcAddress at runtime. That’s the way most people here will do it. These days there are a bunch of wrapper libraries that’ll do it for you, but it’s all the same under the hood.

Hi Guys,

Thanx for the info, tips and recommendations! In the meantime, before reading this, I’d already made a test program in assembler to document specific cards and their supported extensions (obviously driver dependent, and I know some cards use the MS drivers by default, which don’t give much support beyond 1.1!). Anyway, I ran my program on some machines here, with very interesting results, for me anyway. Some cards I ran it on were Intel GPUs, Riva TNT2s, SiS and S3 cards. Fascinating! Anyway, I’ve already downloaded all the nVidia and ATI supported OpenGL functionality, PDFs, technical documents, programs (incl. the Cg Toolkit) etc. I’ve been reading up on S3TC (cool subject), downloading DevIL for S3TC support (converting files to S3TC format etc.), and reading about ARB_texture_compression, ARB_fragment_program & ARB_vertex_program. These are some subjects that have caught my attention! Many topics are still way over my head, at both the higher and lower levels, but I’m getting there! Still reading the Red Book and NeHe tuts! I’d like to stick to ARB extensions, or EXT extensions at the least, but I find it frustrating that the hardware community is still so divided, fighting for the leading edge. I suppose, if OpenGL doesn’t support your new ‘concept’, what else are you gonna do except create an ‘extension’? So I suppose it gives freedom to hardware manufacturers.
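From what I’ve read so far, uploading with driver-side compression looks as simple as this; my understanding only, untested, and it assumes the driver advertises ARB_texture_compression and EXT_texture_compression_s3tc:

```c
#include <GL/gl.h>

/* From EXT_texture_compression_s3tc; defined here in case the
   system gl.h predates the extension. */
#ifndef GL_COMPRESSED_RGB_S3TC_DXT1_EXT
#define GL_COMPRESSED_RGB_S3TC_DXT1_EXT 0x83F0
#endif

/* Sketch: ask the driver to compress an RGB image to DXT1 at
   upload time by requesting a compressed internal format. Check
   the extension string before relying on this. */
void upload_compressed(const unsigned char *rgb, int w, int h)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                 w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb);
}
```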

So yeah, I AM using an FX 5200: a 256MB 5200 Ultra at work (because I use dual monitors for development) and a 128MB 5200 at home. I’m not a millionaire, you know! Anything higher costs a crap load more! Besides, I’m not (much of) a gamer, or a 3D graphics programmer! An FX 5200 was MUCH more than I needed when I got them! I would have bought a TNT2 if it were still on the market! Again, thanx for the replies, and I look forward to asking more questions when I’ve learnt more! Thanx guys!

Regards

I’m sure you had fun writing your test program, but for more info, check out this.
Lots of other stuff on that site too. The code is Delphi, but that shouldn’t matter too much. The author is a moderator on these forums.

Thanx ffish,

Jeez, talk about re-inventing the wheel! Anyway, yeah! I had fun, and now it’s done! His list is obviously much more comprehensive! At some point I’m still hoping to contribute something relevant, maybe not now, but in the weeks and months to come. Hopefully some ‘fast’ assembler routines for common tasks etc. Anyway, I’ll see … once I’ve done a few of my own demos/examples/test programs, I’ll maybe make a library (.lib or .dll) of my asm functions for use by others. Or at least get good enough to help others as you do!

Regards!

Cool. Have fun. BTW, in case you haven’t found it yet, another site is flipcode. There’s a fair bit of noise on the forums, but some genuinely smart people and industry types as well. One thing that’s impressed me over the years (and which your post reminded me of) is Nick’s SoftWire software renderer, written in assembly language. Also Per Vognson (sp?), who is a mathematical genius IMO, and a bunch of people who know C++ inside-out.

Oh also, check out some of AMD’s stuff. I haven’t visited for a long time but they used to have some good processor documentation as well as math libraries. Probably Intel as well.
