Talk about your applications.

what, like…


GLint activeUniforms = 0;
GLchar uniName[256]; GLsizei uniNameLen; GLint uniSize; GLenum uniType;

glGetProgramiv(hShader, GL_ACTIVE_UNIFORMS, &activeUniforms);   /* hShader = handle of the linked program */
for (GLint i = 0; i < activeUniforms; ++i)
	glGetActiveUniform(hShader, i, sizeof(uniName), &uniNameLen, &uniSize, &uniType, uniName);

?

Cool, didn’t know that possibility existed already! Thanks, knackered, one never stops learning new things (about OpenGL).

No problem, sorry it was too late to save you from writing the stuff I assume you wrote to do the same job (which must have been a nightmare).
As an added bonus, at least on nvidia anyway, it only returns the uniforms that contribute to the output - so you don’t waste cycles updating redundant ones. Your hand-rolled code probably gets all the uniforms regardless, I would imagine.
BTW, you can also query the active attributes in a similar way.
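Something along these lines, off the top of my head (same idea as the uniform loop above, untested as written):

GLint activeAttribs = 0;
GLchar attribName[256]; GLsizei attribNameLen; GLint attribSize; GLenum attribType;

glGetProgramiv(hShader, GL_ACTIVE_ATTRIBUTES, &activeAttribs);
for (GLint i = 0; i < activeAttribs; ++i)
{
	glGetActiveAttrib(hShader, i, sizeof(attribName), &attribNameLen, &attribSize, &attribType, attribName);
	GLint loc = glGetAttribLocation(hShader, attribName);   /* where to feed your vertex data */
}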

I’m fighting with user supplied engineering data for offline simulation. Usually very high batch counts require geometry to be compiled into large VBOs. So index offset would make sense here too. I’m also doing “vertex painting” to visualize certain parameters that are computed on the CPU in real time. Updating the dirtied VBOs on each frame is tricky, because there are several ways to layout a VBO and also several ways to move data over the bus. Performance on different platforms is unpredictable, and I doubt that there’s one best way at all.
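To make that concrete, here's a minimal sketch of two of those "several ways" on plain GL 1.5 - vboId, cpuData (a GLubyte*), dirtyOffset, dirtyBytes and totalBytes are just placeholders for whatever the paint pass marks dirty:

/* Option A: overwrite only the dirtied range.
   Cheap on the bus, but may stall if the GPU is still reading the buffer. */
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferSubData(GL_ARRAY_BUFFER, dirtyOffset, dirtyBytes, cpuData + dirtyOffset);

/* Option B: re-specify the whole buffer ("orphaning").
   The driver can hand back fresh storage, avoiding the sync point,
   at the cost of re-uploading everything. */
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, totalBytes, cpuData, GL_STREAM_DRAW);

Which of these wins is exactly the platform-dependent part I was complaining about.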

The application currently uses only GL1.5, but I’m working hard on a new abstraction layer that lets me bring the whole thing to next-gen hardware while keeping the GL1.5 path as the minimum requirement.

Target platform is Windows only, so I frequently find myself lurking around that other API…

CatDog

Zed, do you have a chance to post details of your app? It may be common knowledge to some of the regulars here, but I am not familiar with your work yet.

sorry Rob, me I’m just here to bring some class + glamour into these forums, consider me the resident victoria beckham/paris hilton.
WRT ogl I’m only mucking around with a few homebrew games, see http://www.zedzeek.com

As such I’d like to see OpenGL ES in the drivers

pros - like I mentioned, stability + speed
cons - driver size expansion + 2 paths, but then again WRT d3d u have d3d10/9/8/7 etc in the drivers already

the spec’s done, nvidia + ati most likely already have ES implementations

http://developer.amd.com/GPU/OPENGL/Pages/default.aspx

Hello.

I’m not exactly known to be an excessive forum poster here but I’ll go ahead and post anyway.
I’m currently working on a product called LapSim, which is used by surgeons to practice their skills within the field of laparoscopic surgery.

During the first years of developing the product (about ten years ago), OpenGL was the obvious graphics API to pick.

Currently our product includes several fallbacks in the graphics API:

  • GLSL shaders
  • ARB assembler
  • “standard” OpenGL 1.2 calls

We finally decided to remove some old (legacy?) code from our product:

  • register combiners - it’s great to see that things have improved since the early days of dot3 bumpmap implementations :wink:

Currently we have no intention to reevaluate our choice of OpenGL as our graphics API, but we are naturally following the progress of another, competing API as well.

If we find that there will be a great divergence between certain API features and what’s supported by IHV drivers, we might have to reevaluate our choice in order to stay competitive.

I really haven’t specifically thought about what new features we would need in an upcoming version of OpenGL, but I’ll try to keep it in mind and post here in the forums if anything comes up.

I am working on a 3D Graphics Engine called Fluxions. It is still in heavy development and targeted towards game development and university research projects. It is being designed to allow developers to easily utilize new GPU technologies in a cross-platform way.

It is also designed to have a flexible graphical user interface since OpenGL has no native way to support this kind of use with the context model. I.e. you can’t use your window manager widgets on top of your OpenGL context. So I’m trying to solve that problem.

I’m also starting to tackle the problem of using OpenGL to accelerate ray-tracing techniques.

obviously not as much enthusiasm for talking about our own applications as there is for moaning about GL3.

Sadly so, especially for commercial applications. I’ve posted mine and some of our problems.

I was wondering who still does that for commercial applications.

I will still monitor that topic.

Like Zed I’m just farting around with a few homebrew projects. I would like to publish something some day, but I fear I’ve bitten off more than I can chew. If I can let go of my ego and lower my sights a bit, I might have a contender for the casual game market … if I can bring myself to actually finish something.

P.S. Short of a user-specified packing, I’d ask for any specified packing rule in lieu of something implementation-dependent.

I was thinking about the other part of the purpose of this post: what would I need in a future spec? I think the biggest need is of course a streamlined API. I am extremely pleased with the deprecation model introduced by version 3.0. I am actually quite excited that by the release of OpenGL 3.1, we may have a spec that matches the functionality required by DirectX 10 and removes all the OpenGL 1 functionality that is no longer needed or useful.

The biggest problem with writing an application using OpenGL is that you don’t have a good idea of what is and isn’t supported by hardware. For example, the naive approach to drawing in OpenGL is the glBegin/glEnd model, which nowadays is one of the worst ways to render. I think so many people are stuck with this old view of OpenGL that, if it stays in the API, it will keep snowballing its way into the next release.

This is the vision that I believe we all got excited for with Longs Peak/Mt Evans. I think that to encourage continued OpenGL development, the ARB needs to really take this principle to its core. OpenGL still has the cross-platform market and it should use this as the leverage point to bring back its viability before Microsoft decides to open the DirectX API for Apple or UNIX. I think that if somehow DirectX goes cross-platform, this will be the nail in the coffin for OpenGL.

Of course, Microsoft doesn’t need to go to other platforms because it already has the market share, but it would be the killer app for the graphics community. Windows is already the de facto place to do cutting-edge computer graphics with shaders and hardware, though it has perhaps not caught up in computing reliability. With the multi-computer/multi-processing paradigm taking the computer industry by storm, you could do processing on your mainframe but have a Windows PC do the visualization for you.

If the rumors are true that the CAD community is the reason the OpenGL 3 release was received this badly, then the ARB has not caught on that the average consumer expects more, and that OpenGL has the power to be the API that lets consumers visualize their data on the devices they choose with the same quality they could get on a PC.

We develop commercial and military flight simulators and databases primarily using OpenGL on Linux. So features that are important to us would probably occur immediately to you. Just meat-and-potatoes, no rocket science:

  • More efficient to-GPU geometry and texture streaming
  • Fast support for rendering large number of batches
  • Precompiled shaders
  • etc.

More on a few of these. Tricking 3GB/sec texture upload perf out of PCIx is very elusive (we’ve never gotten even close). We need some ARB/vendor direction on how precisely you should tickle GL to do this. Also, it would be nice to let developers do something more elegant than having to prerender with all textures and shaders to “force” the driver to kick the textures and shaders onto the GPU, so you don’t get frame breakage when you flip new areas into view at runtime. Some control for gradual paging at X MB/sec in the background would be much more desirable. And I’m not talking about gl*TexSubImage2D – we have that, but that only loads the data into the CPU-side driver texture or CPU-side PBO. Also, we’re currently experiencing ~20ms blockages on uploads every ~46MB or so with PBOs, so vendor implementation of the current scheme is apparently not a slam dunk.
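(For anyone following along, the PBO path I mean is roughly the sketch below - pbo, tex, texW, texH, levelBytes and srcPixels are placeholders, and note that nothing in it tells you when, or at what rate, the actual bus transfer happens:)

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, levelBytes, NULL, GL_STREAM_DRAW);   /* orphan old storage */
void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(dst, srcPixels, levelBytes);                                       /* still only a CPU-side copy */
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texW, texH,
                GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);                     /* source = offset 0 in the bound PBO */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);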

AFA precompiled shaders, compiling these things exclusively in the driver is ridiculous, unless you’re writing a game/sim with a pre-known set of global materials that are all known at startup. Compilations take a long time and are impractical in the middle of a simulation. In fact, we just beat our head against one vendor’s GLSL implementation where they recompile/reoptimize the shader “in the draw call”, consuming a whopping 66-124ms(!), the first time you render with the shader after changing a uniform (this has been reported here before). So prerendering with the materials at startup (if you even know them then) doesn’t even solve the problem. When you’re trying to keep 60Hz (16.6ms frames), having GL block for 66-124ms on batch submissions doing shader recompilation is nuts! In the absence of precompiled shaders, we may flip back to Cg, since it offers a lower-level way to get shaders down to the hardware and hopefully avoids this driver silliness.

Also, AFA batch rendering, one of the vendors’ geometry-only display list implementations completely smokes VBOs in any form (vertex attrib formats, interleavings, etc.). We need some ARB/vendor direction on how to make VBOs as fast or faster, so vendors can deprecate display lists without ticking off the user base (we’ve had quite enough of that in the past year, and I don’t want to see us lose anyone else). That may involve some additional entry points. I confess to not having tried GL3’s VAOs … because they’re not offered on Linux yet.
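(By “geometry-only display list” I mean the age-old idiom below - nothing but geometry captured at compile time, no state; dlist, verts, normals, indices and nIndices are placeholders:)

GLuint dlist = glGenLists(1);
glNewList(dlist, GL_COMPILE);
glBegin(GL_TRIANGLES);
for (int i = 0; i < nIndices; ++i)
{
	glNormal3fv(&normals[3 * indices[i]]);
	glVertex3fv(&verts[3 * indices[i]]);
}
glEnd();
glEndList();

/* ...later, at draw time - the driver has already baked this into whatever
   vertex layout it likes best, which is what we can't seem to match with VBOs: */
glCallList(dlist);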

Thanks for asking BTW. If I think of any other major features I’d like to see focus on, I’ll follow up.

Game development over here, too (@ http://www.3d-io.com). :slight_smile:

The list:

  • Implementing current hardware features to DX 10, DX 10.1 level.

Primarily by AMD/ATI. I can only use features implemented by both of the biggest IHVs.

  • In depth explanations of the fastest ways to do common things.

From each vendor: render-to-VBO, background streaming of data to the GPU and similar - for each GPU generation, with expected measurements on a representative GPU model and a small benchmark application that demonstrates it. Don’t let these be “black magic” like they are now.

  • Backwards compatibility

OGL 3 is NV 8xxx+ - useless for the next year (at least for the mass market), or maybe even longer. I’d have to write two rendering backends, just as I would for DX 9 and DX 10 - yet those would expose more features to work with.

  • Precompiled shaders

  • Fast support for rendering large number of batches

Fingers crossed for all this to happen. :slight_smile:

This post represents my independent projects and not those of my current employer. I’m working on two primary projects, both using OpenGL: a game and a GPGPU rapid development environment, as well as ongoing exploration of non-triangle rendering engines.

The GPGPU dev env (a project I just started) is based on the concept of using the CPU only as a co-processor and running applications on the GPU using shaders and GPGPU techniques only. Even the programming environment/editor will be running on the GPU. It’s described roughly at the end of this blog post: http://www.farrarfocus.com/atom/080918.htm. My intent is to apply all the research I’ve been doing on GPGPU programming and move my entire development out of C/C++ and into this new tool (BTW, you’re not the only one who thinks I am crazy). The game has been in progress for some time, and I don’t post everything on my blog. The core thing I am doing with the engine is to work with fully GPU-side scene structure, visibility culling, level of detail, drawing and physics, in a non-triangle-based renderer.

I’m mostly happy with the current direction of GL, especially with the decision to incorporate most of the current DX10 hardware support into the API. I can wait for the API change, and I don’t see myself gaining much performance from it, as I am not draw-call bound. Here is what I’d like to see in terms of near-future changes to GL (not grouped by priority).

(*) Locally cached compiled shaders.

(*) A new ARB_fence as a context shareable version of NV_fence or APPLE_fence.
(*) Ability to use separate threads to build deferred command buffers.

(*) ARB_draw_instanced, ARB_instanced_arrays as part of core.
(*) ARB_texture_buffer_object as part of core.
(*) EXT_bindable_uniform as part of core.

(*) ARB_geometry_shader4 as part of core.
(*) Geometry shaders really need the GL version of DX10’s DrawAuto call to draw geometry of an unknown size that was created by the geometry shader stage without a CPU readback.

(*) Decouple textures and texture filtering, ability to sample from a texture with both filtered and non-filtered texture fetches. Would also like to see texel size as a texture object property, but texel format as a sampler property.
(*) Ability to set max anisotropy, DX10 supports this through D3D10_SAMPLER_DESC.MaxAnisotropy.
(*) A GL version of DX10’s SampleMask, glSampleCoverage() is messy to reverse engineer to be able to set an exact bit mask.
(*) Texture fetch (depth and color) from a multisampled texture.

(*) A GL version of DX10’s DepthBiasClamp.
(*) A GL version of DX10’s DepthClipEnable, i.e. DEPTH_CLAMP_NV.

(*) Ability to set max anisotropy

What do you mean by that?

What do you mean by that?

I’d say that he wants to be able to control all of the texture sampler parameters from inside the shader program itself.

I’m just saying that I want EXT_texture_filter_anisotropic to be part of the core, instead of an extension.
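(i.e. what we all do today through the extension - just these two tokens from EXT_texture_filter_anisotropic:)

GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);                 /* driver's upper limit */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);   /* per bound texture */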

Second that.

Just noticed myself that texture_filter_anisotropic wasn’t rolled into 3.0, and I didn’t see it on the shortlist for 3.1 in the nvision08 slides (did see geometry shader, texture buffer object and bindable uniform buffers though).

I’m very disappointed by OpenGL 3, even though I haven’t actually done anything in OpenGL this August (new job where I use Direct3D, and I actually like it).

I believed that OpenGL 1.x and OpenGL 2.x make sense and need to survive, and that’s why the whole Longs Peak and Mount Evans plan seemed like such a perfect idea to me, but well.

Some software uses OpenGL 1.1 and is fine with it, because the people working on it don’t care about 3D graphics or efficiency and don’t even really know how to program 3D software. I worked on such a project where, in 2 million lines of code, there was OpenGL code everywhere, in all binaries… It was used like the STL or Boost… Just a nightmare to debug, but it can be fine for a lot of software that doesn’t need fancy graphics.

== I want OpenGL 1.x support. ==

For software that needs good graphics and that will live for a long time, I want shaders and good OpenGL stuff. As with D3D9, I skip all the old deprecated stuff myself, build a “programmable engine” that works fast enough, and I’m fine. It needs to be cross-platform, so I stick to the common features; it’s hard enough to make it work in all cases.

Software can have “good graphics” even if the purpose of the software isn’t “good OpenGL graphics”. This means that the OpenGL graphics part of the project won’t get much money, because it’s not really what makes the sales. So even in this case, some old code in the project will keep being used.

As an example, I built quite a modern OpenGL renderer, not amazing but OK to render the scene. However, I kept the old and awful code to render the gizmo and the HUD, just because I didn’t have time for it.

== I want reliable OpenGL 2.x support ==

As an amateur programmer, I like good stuff, stuff that takes time to get right because it’s more or less new and more or less documented. I want it clean; I want to spend time making it great, because the graphics are the value of the project. I think this scenario applies to game development too.

I have to admit I still miss the point of OpenGL 3… It provides features, for sure, but with the extensions, OpenGL 2 is just as good.

For the next release:

  • Deprecate what needs to be deprecated; anyway, I won’t use that stuff in a project where graphics are a main value of the project!

  • EXT_bindable_uniform: that’s great stuff!!! It changed some things in the renderer, but really in the good way! I haven’t tested it enough to measure the improvement, but there are so many calls to set uniform variables that I’m pretty sure it could be significant! Deprecate plain uniforms?

  • Compiled shaders/programs: for a project such as a game, the number of shaders can be really, really huge; I expect this to significantly reduce loading times.

  • Separate filter and image: keep the “texture” if you want, but add image and filter objects. From the hardware side it makes a lot of sense. It would imply some changes for the framebuffer object too, but only to make it even more consistent with the hardware.

  • Multithreading: that’s a big one. I haven’t looked exactly at what OpenGL 3 improves on this side. One thread working only on OpenGL commands while other threads do everything else still leaves a very, very time-consuming thread. I would like the ability to use several threads for OpenGL. For example, I don’t see any good reason for shaders to be compiled on the thread that issues the draw calls. Some kind of mechanism that prepares a list of calls (like a display list?) on another thread, and then lets us request that list of calls to be processed on the draw thread, could give a significant benefit, I think. We NEED to use all the power of a modern CPU.

  • GL_ARB_draw_instanced and GL_ARB_texture_buffer_object as core would be great too. I don’t really care about geometry shaders, but why not.

  • Make the blending stage programmable. OK, maybe it’s pointless, but all that blend-equation stuff scares me, especially with multiple render targets. A good shader could make it as convenient as fragment shaders were when they replaced “texture combiners” (those still scare me!). Anyway, I don’t see any operation in such a blend shader that couldn’t be done on the stream processors. I don’t know how it’s implemented on NVIDIA and ATI cards, but processing this operation in anything other than the stream processors seems to me to be just a waste of transistors anyway.

  • More programmability of the texture fetch functions… Same thing as for the “blending stuff”: I understand that texture addressing must stay, but using the stream processors for filtering… why not. It might not even involve OpenGL changes. The best part would be the ability to decode compressed data ourselves. It’s a small amount of code, and formats like ETC and PVRTC simply give better picture quality; they are the better solutions for offline-compressed textures. For “real-time” compression I would stick to DXT. Why not our own compression formats?

  • A more reliable system to check for features. It still scares me when I check that the supported OpenGL version is at least 1.5, I check that GL_ARB_vertex_buffer_object is in the extension list… and the pointer returned for glGenBuffers is still null… This is called Intel drivers!!! What about some kind of Khronos certification for drivers? (A rough sketch of the checks I mean follows after this list.)

  • glGetError… it’s nice but not really efficient, and after a while programming with D3D… it’s painful! I don’t know what exactly could be done; maybe extend the shader info log idea to every object. I was expecting something like this with the new object model; I don’t know how it could be made possible with OpenGL 2/3.

  • Buffer and texture streaming: that’s just the result of the amount of assets required today, which applies to games but also to medical, CAD, etc. software. Maybe it’s because I’m missing some knowledge, and I really have to try the GL_ARB_map_buffer_range extension, but I have no real clue how to get the best out of it. It feels like research-level work for something that seems to me to be, or to become, common.

  • A common interface for uncompressed and compressed textures… I don’t like the drivers’ ability to convert external formats to internal formats; anyway, I do my best to do that on a separate thread. These two paths seem to me to be just duplicated code.
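To make the “reliable feature check” bullet above concrete, here is roughly the dance I mean today (a sketch only, Windows/WGL path, error handling trimmed; hasExtension and the variable names are just placeholders):

#include <windows.h>
#include <string.h>
#include <GL/gl.h>

/* local function-pointer type so the sketch stands alone; glext.h has the real one */
typedef void (APIENTRY *PFNGLGENBUFFERS)(GLsizei n, GLuint *buffers);

static int hasExtension(const char *name)
{
	const char *exts = (const char *)glGetString(GL_EXTENSIONS);
	return exts != NULL && strstr(exts, name) != NULL;   /* naive substring test, fine for a sketch */
}

static PFNGLGENBUFFERS loadGenBuffers(void)
{
	const char *version = (const char *)glGetString(GL_VERSION);        /* e.g. "1.5.0 - Build ..." */
	int coreVBO = version != NULL && strncmp(version, "1.5", 3) >= 0;   /* crude version test */
	int extVBO  = hasExtension("GL_ARB_vertex_buffer_object");

	if (!coreVBO && !extVBO)
		return NULL;

	/* And this is my complaint: both checks above can say "yes"
	   and this can still come back NULL on some drivers... */
	return (PFNGLGENBUFFERS)wglGetProcAddress(coreVBO ? "glGenBuffers" : "glGenBuffersARB");
}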

Maybe I went beyond the scope of the next OpenGL release, but at least these are ideas.

/bump for thread refresh

Korval, did you want to jump in at some point?

There’s been a lot of constructive idea generation in this thread and I am curious what you have to offer here.