Originally posted by Zak McKrakem:
OpenGL has a specular, I mean bright, future.
NVIDIA has made a rock-solid commitment to OpenGL and I’ve been amazingly fortunate to participate in that commitment. NVIDIA’s OpenGL driver is the most functional, best performing, and most stable implementation of OpenGL available. Given the kind of sustained commitment NVIDIA has given OpenGL, I’m quite happy and proud to call NVIDIA my employer. What’s been accomplished is really a testament to the passion of hundreds of top-notch software engineers, hardware designers, and 3D architects here at NVIDIA. That passion permeates all aspects of NVIDIA’s product development.
Wow, I think about what RIVA 128 was seven years ago. No 32-bit color, no 32-bit depth/stencil, no RGBA8 textures, everything was 16-bit, no sub-pixel positioning, only a small subset of OpenGL’s blend modes supported — even the most basic texturing was fancy back then. But RIVA 128 was a great chip for its time, with a full OpenGL Installable Client Driver (ICD) for Windows.
Now think about GeForce 6800 today. Vertex programs can access non-power-of-two floating-point textures! Fragment programs can branch on data-dependent values computed within your shader in full 32-bit floating-point! If one 6800 isn’t fast enough for you, put two in your SLI system. I’ve watched it all but still can’t help but be impressed.
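To make the branching claim concrete, here is a hypothetical GLSL fragment shader (my own illustration, not from any shipping application) whose control flow depends on a value computed inside the shader itself — exactly the kind of data-dependent branch, evaluated in 32-bit floating-point, that the GeForce 6800 executes in hardware:

```
// Hypothetical example: branch on a per-fragment luminance value
// computed within the shader, in full 32-bit float precision.
uniform sampler2D tex;
varying vec2 uv;

void main()
{
    vec4 c = texture2D(tex, uv);
    float lum = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    if (lum > 0.5) {
        // Branch taken or not per fragment, decided at run time.
        gl_FragColor = vec4(c.rgb * 2.0 - 1.0, c.a);
    } else {
        gl_FragColor = c;
    }
}
```

On earlier hardware, a compiler would have to flatten a branch like this into predicated arithmetic that always executes both sides.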
In those seven years, OpenGL transformed itself from a hardware-amenable graphics state machine (with its fair share of quirks: think color material, feedback, evaluators) into a first-class platform for programmable graphics. Yet there’s still 100% complete API compatibility going all the way back to OpenGL 1.0, a dozen years now. And cross-platform support too! Think about it: while native window system APIs are frustratingly different across different systems, OpenGL rendering code can recompile and run natively, fully hardware accelerated, across Windows, Mac, and Linux systems. Fully porting a sophisticated graphical user interface between Mac, Windows, and Linux can take several man-years, but OpenGL rendering code just recompiles (possible lesson: render your GUI in OpenGL!). State-of-the-art 3D programmable floating-point shading is more portable than trying to create a scroll bar! And think about this: when it comes to API calls, glBegin and glVertex3f are as ubiquitous today as malloc and strcpy.
And NVIDIA provides access to the full GeForce 6 Series 3D feature set through OpenGL (even functionality not exposed in the other 3D API, such as hardware-accelerated accumulation buffers, border texels, depth clamp, depth bounds test, multisample coverage control, and stencil clear tags).
Also, rather than force developers into an OpenGL-centric high-level language, NVIDIA gives you a choice: the OpenGL Shading Language standard, various assembly representations that expose the FULL underlying programmable hardware functionality, or Cg — a high-level shading language that isn’t tied to OpenGL, so OpenGL-based content creation applications can produce shader-based 3D content that works beyond OpenGL.
If there’s one thing about OpenGL that frustrates me, it’s the short-sighted decision to hide programmable shading behind a single high-level shading language so tied into OpenGL that an optimizing compiler gets wedged into the driver. Yes, there are ARB-standardized assembly extensions, but NVIDIA is the only vendor exposing the latest GPU functionality in both high-level and assembly forms.
Face it: shader programs are part and parcel of modern 3D content. To render contemporary 3D content, you need geometry, textures, and… shaders. You wouldn’t base your 3D application around an image file format or 3D model format that could render ONLY with OpenGL. Anyone for the OpenGL Image Format or the OpenGL Model Format? Instead, you pick API-neutral formats (TGA, JPG, whatever) for content. But shaders written in the OpenGL Shading Language aren’t neutral; they shackle themselves to OpenGL.
Hey, what’s wrong with shackling content to OpenGL if people (in this forum at least) love OpenGL? It’s not just being shackled to OpenGL. It’s being shackled to a particular weight-class of OpenGL found in today’s PCs, when that weight-class is very likely to be unsuited to exciting future 3D consumer devices.
I’m all for programmer-productive authoring of shaders in high-level shading languages (hey, I even co-authored a book about just that), but do we need to jam a compiler into the driver? I think it was a bad move (even if I did wind up reluctantly implementing it in NVIDIA’s OpenGL driver). Adding OpenGL Shading Language support bloated NVIDIA’s OpenGL driver by something shy of a megabyte. (Don’t be too surprised; that’s about the size of any good optimizing compiler implementation these days, plus you gotta throw in the standard library.)
Think about what happens if we add some new language feature to the OpenGL Shading Language. Pick your favorite C++ or Java feature. Say a feature to make shader writing more object-oriented. For example, Cg has a wonderful “interface” construct similar to what Java provides to make shader design more modular and abstract.
So you happily and productively embrace this new language feature. But wait, there’s a catch. Anyone wanting to use your GLSL shaders written with the new language feature must download the right new driver and reboot their machine (or maybe even rebuild their kernel for Linux users).
That’s a pretty big end-user burden just so you, the programmer, could use a fancy new shading language feature. And if vendor XYZ is late to release a driver supporting your favorite new language feature, your shader just doesn’t work in the meantime.
Direct3D out-software-engineered the ARB when it came to engineering a programmable shading language. Direct3D builds its shading language implementation into a redistributable library that you can package with your application. The library targets a (tokenized) assembly interface, so a new language feature (or compiler bug fix) can be picked up by shipping the latest compiler library, without requiring end-user driver upgrades (and reboots).
Cg makes this same wise engineering choice. There have been four Cg releases so far. New language features get added in without much fuss. Plus the language itself is API-neutral so your Cg shader can be used to render with the other API with few or no problems.
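To illustrate that API neutrality, here is a hypothetical Cg fragment program of my own invention. Nothing in the source mentions OpenGL or Direct3D; the Cg compiler targets whichever profile you pick at compile time — an OpenGL assembly profile like arbfp1, or a Direct3D profile like ps_2_0 — so the same shader asset serves both APIs:

```
// Hypothetical Cg fragment program: a plain textured lookup.
// Compile it for an OpenGL profile (e.g. arbfp1) or a
// Direct3D profile (e.g. ps_2_0) from the identical source.
float4 main(float2 uv : TEXCOORD0,
            uniform sampler2D tex : TEXUNIT0) : COLOR
{
    return tex2D(tex, uv);
}
```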
Still if you don’t like either GLSL or Cg, feel free to target our assembly interfaces that expose NVIDIA’s FULL programmable functionality.
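For those who haven’t seen the assembly interfaces, here is a sketch of what a minimal ARB_fragment_program looks like — a hypothetical example I wrote for illustration, not a shipping shader. This is the kind of text string you hand directly to the driver, which is also why it makes such a convenient compiler target:

```
!!ARBfp1.0
# Hypothetical example: fetch a texel and modulate it by the
# interpolated primary color, writing the result to the framebuffer.
TEMP texel;
TEX texel, fragment.texcoord[0], texture[0], 2D;
MUL result.color, texel, fragment.color;
END
```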
I love what Michael McCool and his students at the University of Waterloo have done with their Sh library. Their meta-shading paradigm for shader construction is the kind of novel approach I want to encourage. Having a fully-functional assembly interface facilitates this.
Have I irked anyone? I hope not. If you love the OpenGL Shading Language, hey, NVIDIA supports it quite well, including vertex textures and data-dependent branching. That’s stuff no one else hardware accelerates today.
Still, if you want other options for programmable shading — at the assembly level, or with an API-neutral shading language that lets you easily move your shader assets between OpenGL and the other API — NVIDIA has you covered too. You pick what suits your needs.
Honestly, it’s a great time to be in the midst of 3D graphics hardware technology.
I’m confident OpenGL will stay current with the state of the art in 3D graphics performance and functionality.
I’ve got my complaints, however. For a few big OpenGL design decisions, I’ve been unhappy with the outcome. Bluntly, it’s been disadvantageous for OpenGL. But you take the good with the bad and win the battles you can. Would I offer advice to an OpenGL programmer wanting to know what they should use for writing shaders? At one level, I’d say pick what best meets your needs. You can be confident NVIDIA is going to support whatever choice you make, even if you decide to use the other API. But if you pressed me, I’d say author shaders for OpenGL with something API-neutral so you can reuse your shaders no matter what rendering interface you use. So I’d recommend Cg. Nobody should be surprised by that.
And yes, OpenGL has a specular future.
I hope this helps.