What's the GL extension string for T&L???

Hey all,

I’ve printed out the list of OpenGL extensions my card supports, but I’m not sure which one of them is the T&L extension.

Here’s the list:
GL_ARB_multitexture GL_ARB_texture_compression GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_transpose_matrix GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_compiled_vertex_array GL_EXT_fog_coord GL_EXT_packed_pixels GL_EXT_paletted_texture GL_EXT_point_parameters GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shared_texture_palette GL_EXT_stencil_wrap GL_EXT_texture_compression_s3tc GL_EXT_texture_edge_clamp GL_EXT_texture_env_add GL_EXT_texture_env_combine GL_EXT_texture_cube_map GL_EXT_texture_filter_anisotropic GL_EXT_texture_lod GL_EXT_texture_lod_bias GL_EXT_texture_object GL_EXT_vertex_array GL_EXT_vertex_weighting GL_KTX_buffer_region GL_NV_blend_square GL_NV_fog_distance GL_NV_light_max_exponent GL_NV_register_combiners GL_NV_texgen_emboss GL_NV_texgen_reflection GL_NV_texture_env_combine4 GL_NV_vertex_array_range GL_S3_s3tc GL_SGIS_multitexture GL_SGIS_texture_lod GL_WIN_swap_hint WGL_EXT_swap_control

And how do you enable T&L once I’ve found out that a video card supports it? I can’t seem to find any docs about this (probably because I don’t know what the T&L extension name is).
Thanx in advance!

It’s not an extension; if your card supports it, then it supports it!

The only cards so far with T&L are the GeForce, GeForce MX, GeForce 2 and Radeon.

If you have one of those cards, you have T&L. But you have to make sure you are using an accelerated format. Just look in the developer section of each vendor’s site (NVIDIA, ATI); there will be documentation for that.

The only consumer-level cards so far with T&L are the GeForce, GeForce MX, GeForce 2 and Radeon.

And a whole bunch of professional workstation boards (some HP Visualize fx, Wildcats, GMX2000, lots of SGI boxes, …).

Wait… let me get this straight… I think I’m misunderstanding something…

So what you guys are saying is that if I write an app in OpenGL, then my app automatically supports T&L? I don’t have to make a call or anything in my program to switch it on?

Yes!!!

But if you do all your transformations yourself, the benefit will be very small.

But what about optimisation? That is a little different on a non-T&L card, not to mention the polygon count. And you can’t just do a renderer-name check if it’s NVIDIA-specific; I’m sure there will be more T&L-capable cards.

So there is no way to detect T&L(?)
Anybody?


>>So there is no way to detect T&L(?)<<

Not really.
The answer to your problem is: bench it! You’ll never know beforehand whether a card can do something fast without measuring it.
Even if there were something like IID_IDirect3DTnLHalDevice in OpenGL (sorry for that one, I learned it last week), you’d end up with different performance on different boards and systems.
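For example, a crude way to bench it is just a timed loop over a big vertex batch. Here is a minimal GLUT-based sketch (the batch size, the five-second run and the random geometry are arbitrary choices for illustration, not anything from a spec):

```c
/* Crude throughput bench: draw a fixed batch of triangles for ~5 seconds
   and print triangles per second. */
#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

#define TRIS_PER_FRAME 50000

static GLfloat verts[TRIS_PER_FRAME * 9];   /* 3 vertices * xyz per triangle */
static int frames = 0;
static int startTime = 0;

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, TRIS_PER_FRAME * 3);
    glutSwapBuffers();
    frames++;

    if (glutGet(GLUT_ELAPSED_TIME) - startTime >= 5000) {
        double secs = (glutGet(GLUT_ELAPSED_TIME) - startTime) / 1000.0;
        printf("%.0f tris/sec\n", frames * (double)TRIS_PER_FRAME / secs);
        exit(0);
    }
    glutPostRedisplay();
}

int main(int argc, char **argv)
{
    int i;

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("T&L bench");

    /* small random triangles inside the default [-1,1] view volume */
    for (i = 0; i < TRIS_PER_FRAME * 9; i++)
        verts[i] = (GLfloat)rand() / RAND_MAX * 2.0f - 1.0f;

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glEnable(GL_LIGHTING);   /* exercise the L in T&L as well */
    glEnable(GL_LIGHT0);

    startTime = glutGet(GLUT_ELAPSED_TIME);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

Run the same thing with lighting on and off, or with different vertex formats, and compare the numbers.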

OK… but now I’m totally confused. Why do some OpenGL games say on their boxes that they support T&L while others don’t?

Hi there.

If you really want to support T&L, you have to be careful about which data type you use when sending data to the GPU. For example, on the GeForce series, you will be T&L-enabled if you send your vertices as floats. Sending them as doubles will not use the T&L engine.

This is true for almost all arrays: look at NVIDIA’s site to see which data type must be used for each array. I have just read the updated version and was surprised to see that texture coordinates must use short integers (not floats!). I hope this is a mistake in their documentation…
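For concreteness, the float path looks something like this (just a sketch with placeholder data; it is not code from NVIDIA’s document):

```c
/* Sketch only, with placeholder data: feeding everything as GLfloat so the
   driver can hand the arrays straight to the T&L unit instead of converting. */
#include <GL/gl.h>

#define NUM_VERTS   3
#define NUM_INDICES 3

static GLfloat  positions[NUM_VERTS * 3] = { 0,0,0,  1,0,0,  0,1,0 };
static GLfloat  normals  [NUM_VERTS * 3] = { 0,0,1,  0,0,1,  0,0,1 };
static GLfloat  texcoords[NUM_VERTS * 2] = { 0,0,    1,0,    0,1   };
static GLushort indices  [NUM_INDICES]   = { 0, 1, 2 };

void drawMesh(void)   /* assumes a current GL context */
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer  (3, GL_FLOAT, 0, positions);   /* GL_FLOAT, not GL_DOUBLE */
    glNormalPointer  (   GL_FLOAT, 0, normals);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoords);

    glDrawElements(GL_TRIANGLES, NUM_INDICES, GL_UNSIGNED_SHORT, indices);
}
```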

Regards.

Eric

Hmm… thanks for pointing that out, Eric… I’ll take a look at that documentation right now.

>>If you really want to support T&L, you have to be careful about which data type you use when sending data to the GPU. For example, on the GeForce series, you will be T&L-enabled if you send your vertices as floats. Sending them as doubles will not use the T&L engine.<<

That’s too general.
The above statement would also mean glVertex3f is T&L and glVertex3d is not. That’s simply not true.

Without having read the document, either you’re talking about special vertex array functionality (vertex array range, etc.), or there are some paths in the vertex array implementation which are not that optimized.
What I expect is that some paths can read the data straight into the chip, or even download it to the board, while others have to convert the data to a suitable format first.
But the vertices end up in the GPU either way.

Games running on Direct3D can query for the TnLHalDevice or the HALDevice in DirectX 7.
Under OpenGL you could do all your matrix stuff and lighting effects yourself and use OpenGL only as a rasterizer.
Additionally, many OpenGL games don’t use the L in T&L but do their own vertex lighting or use lightmaps.
So the labels on the box hopefully reflect the method used in the game engine underneath, and such games run faster on a T&L board.
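By “OpenGL as a rasterizer” I mean something along these lines (a sketch; the function and parameter names are made up):

```c
/* Sketch (names made up): the application transforms and lights everything
   itself, loads identity matrices and hands GL pre-projected vertices, so
   the driver's transform stage has nothing left to do. */
#include <GL/gl.h>

void drawPretransformed(const GLfloat *clipVerts, GLsizei numVerts)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_LIGHTING);          /* lighting was baked into vertex colors */

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(4, GL_FLOAT, 0, clipVerts);   /* x, y, z, w per vertex */
    glDrawArrays(GL_TRIANGLES, 0, numVerts);
}
```

An engine written this way gets nothing from a T&L board, because the transform work has already been done on the CPU.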

Saying OpenGL automatically supports hardware T&L is correct, but what I think you really want to know is how to get the most out of HW T&L. You need to use the NV_vertex_array_range and NV_fence extensions to remove the possibility of a bus bottleneck. A high-geometry app on a PIII 700 gets almost double the triangle throughput when using those extensions.

Check out the new whitepaper from NVIDIA http://www.nvidia.com/Marketing/Developer/DevRel.nsf/WhatsNewFrame?OpenPage
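Very roughly, the vertex_array_range setup looks like this (a sketch from memory: the enum value, the entry-point signatures and the “good” priority values are my assumptions, so verify them against the whitepaper; NV_fence synchronisation and error handling are left out):

```c
/* Rough sketch of NV_vertex_array_range setup: put vertex data in AGP or
   video memory and tell GL about the range.  Verify the details against
   NVIDIA's documentation before relying on any of this. */
#include <windows.h>
#include <GL/gl.h>

#ifndef GL_VERTEX_ARRAY_RANGE_NV
#define GL_VERTEX_ARRAY_RANGE_NV 0x851D
#endif

typedef void * (APIENTRY *ALLOCMEMNVPROC)(GLsizei size, GLfloat readFreq,
                                          GLfloat writeFreq, GLfloat priority);
typedef void   (APIENTRY *VERTEXARRAYRANGENVPROC)(GLsizei length, const GLvoid *ptr);

#define VAR_BYTES (1024 * 1024)   /* 1 MB of vertex data */

/* returns a pointer to fast memory to fill with vertices, or NULL */
GLfloat *setupVertexArrayRange(void)
{
    ALLOCMEMNVPROC         wglAllocateMemoryNV;
    VERTEXARRAYRANGENVPROC glVertexArrayRangeNV;
    GLfloat *mem;

    wglAllocateMemoryNV  = (ALLOCMEMNVPROC)
        wglGetProcAddress("wglAllocateMemoryNV");
    glVertexArrayRangeNV = (VERTEXARRAYRANGENVPROC)
        wglGetProcAddress("glVertexArrayRangeNV");
    if (!wglAllocateMemoryNV || !glVertexArrayRangeNV)
        return NULL;

    /* ~0.5 priority is usually quoted for AGP memory, ~1.0 for video memory */
    mem = (GLfloat *) wglAllocateMemoryNV(VAR_BYTES, 0.5f, 0.0f, 0.5f);
    if (!mem)
        return NULL;

    glVertexArrayRangeNV(VAR_BYTES, mem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

    /* now write vertices into mem and point glVertexPointer() at it */
    return mem;
}
```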

Go to NVIDIA’s site. Read the developer whitepapers on how to get performance out of a GeForce. Choose one of the top three formats they recommend for dealing with vertices.

Go to ATI’s site. Read the developer whitepapers on how to get performance out of a Radeon. Choose one of the top three formats they recommend for dealing with vertices.

If you’re really lucky, you’ll be able to use the same formats and ways of drawing primitives. More likely, you won’t, and will have to code two paths: one for GeForce and one for ATI.

You may want to look for the specific extensions you want to use from the GeForce and from the Radeon, and switch on that extension’s availability rather than the actual manufacturer/model, in an attempt at maybe supporting HT&L on the next generation of cards, too. Good luck testing that, though :(

Then you will realize that relying on the driver to transform will suck major @$$ when the card cannot do HT&L, because most driver writers can’t be expected to go to the effort of squeezing all the oomph out of a PIII, PII, Pentium/MMX, K6-2, K6-3 and Athlon (each of which pretty much requires its own hand-coded assembly implementation). Thus, you have to write your third, fourth, fifth, sixth, seventh and eighth versions of your rendering engine doing your own transformation for those CPUs, and detect at run time which version to use.

If you’re not 110% behind your effort to squeeze all possible performance out of your engine, though, you can probably get by with one C-based manual transform path, and one let-GL-do-it path, and let the user choose which one to use. That might get you 75% there or so.

PS: GL_NV_vertex_array_range is one of the mechanisms NVIDIA talks about in their GeForce performance whitepaper.
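PPS: whichever extensions you end up keying off, remember to match whole space-delimited tokens in the extension string, not just a substring. A hypothetical helper:

```c
/* Hypothetical helper: test for one extension in the GL_EXTENSIONS string.
   A bare strstr() can give false positives when one extension name is a
   prefix of another, so match whole space-delimited tokens. */
#include <GL/gl.h>
#include <string.h>

int hasExtension(const char *name)    /* needs a current GL context */
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);
    const char *p = ext;

    while (p && (p = strstr(p, name)) != NULL) {
        int startsOk = (p == ext) || (p[-1] == ' ');         /* token start */
        int endsOk   = (p[len] == ' ') || (p[len] == '\0');  /* token end   */
        if (startsOk && endsOk)
            return 1;
        p += len;
    }
    return 0;
}

/* e.g.  if (hasExtension("GL_NV_vertex_array_range")) { ... }  */
```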

Well, this is a little bit confusing.
You can’t even tell whether you are using T&L. You can’t figure out (at run time) the optimal format for vertex data.

Not to mention that there is no way to correctly detect the card you are using (through OpenGL).

I think in the future of GL gaming there has to be a general extension or a query function for such things. I mean, who cares if a high-end 3D program runs relatively slowly? But if my game slows down, well, that’s a real disaster.


>>I mean, who cares if a high-end 3D program runs relatively slowly?<<

Perhaps your management, if you miss the deadline.

Well Relic, you have a point…

I didn’t mean that glVertex3f was accelerated while glVertex3d was not… What I meant is that you have to use floats to get the most out of T&L… My app ran more than 100% faster when I switched from doubles to floats (yes, 100%…).

Sorry.

Eric

As I understand it, T&L is a function of the drivers. Your GeForce drivers, knowing that you have on-board T&L (precisely because they are there for device-level control), ship whatever transform and lighting calculations they know the card can handle out to the 3D hardware.

On a card and with drivers without on-board T&L, the driver performs the calculations before sending data to the card, thus using the CPU.

As far as games claiming to support on-board T&L goes, that’s essentially a consumer FUD factor, because the game needn’t do much to take advantage of it. And all drivers that support OpenGL must, of course, support the matrix transform calls (glTranslate*, glRotate*, glMultMatrix*, and so on) and at least 8 light sources.

On the other hand, you will get dramatic improvements on lit scenes, etc., especially for the optimized case, which on NVIDIA’s newer hardware is one directional light with an infinite (non-local) viewer.
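For reference, that optimized setup is roughly the following (a sketch; whether it really hits a given driver’s fast path is exactly the kind of thing you have to benchmark):

```c
/* Sketch of the "fast path" lighting setup described above: a single
   directional (infinite) light and a non-local (infinite) viewer. */
#include <GL/gl.h>

void setupFastLighting(void)
{
    /* w = 0 makes the light directional rather than positional */
    static const GLfloat lightDir[4] = { 0.0f, 0.0f, 1.0f, 0.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightDir);

    /* infinite viewer: cheaper specular computation */
    glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_FALSE);
}
```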