Use of OpenGL in Movies & Games

Is OpenGL used extensively for video games, and is it used for animated movies by Hollywood studios? What other options are there besides OpenGL that do basically the same things?

D3D is the other option. Some games use OpenGL, some use D3D. As far as the final renders in movies go, OpenGL is not used, AFAIK. More realistic rendering is needed, so a ray tracing solution is pretty much what's used, I think.

-SirKnight

Originally posted by SirKnight:
D3D is the other option. Some games use OpenGL, some use D3D. As far as the final renders in movies go, OpenGL is not used, AFAIK. More realistic rendering is needed, so a ray tracing solution is pretty much what's used, I think.
-SirKnight

With glslang (and its equivalents), watch for that to change. I've read that the latest ATI cards accelerate the final rendering done in some 3D package (one of Maya/Softimage/Lightwave).

Renderman shaders are pretty much doable in modern shading languages now.

OpenGL is often used for its real-time rendering capabilities in movie productions. The final render is almost always done with some proprietary renderer that is far more capable than OpenGL. Renderman and Mental Ray are production quality renderers, but even then, companies like Industrial Light & Magic use their own home-grown renderer. OpenGL is used to give the artists an idea of what the final render will look like while they are working on their scenes. OpenGL is preferred for its cross-platform capabilities and flexibility with ultra-high-end graphics hardware.

Maybe not in this generation, but I would imagine that in the next generation of accelerators, with full looping in the fragment programs, you'll start to see people like Pixar and Dreamworks start moving their proprietary shaders to either the 2.0 shader language or something else that is compatible with these accelerators. The lack of hardware acceleration of shaders, with full floating-point computation and output, is the only reason they don't already use hardware-accelerated renders. Once the limits on these technologies have lifted sufficiently, you'll see them start to move over. And once that happens, they're going to see their productivity, in terms of how long it takes them to do the final render of a scene, increase dramatically.

Feature films render using some higher-level software, such as Maya, or Renderman, or whatever. Often, it’ll take hours per single frame of film. However, the editing software typically uses OpenGL for (semi-)real-time preview while editing scenes.

I think the reason why OpenGL is not used for the final output is not the lack of full featured shaders, but the fact that it is a polygon renderer.

With a ray tracer you can render much more realistic scenes. For example, accurate reflections and refractions are no problem, while with a polygon renderer you only get an approximation. Curved surfaces and parametric objects can be rendered accurately, whereas with a polygon renderer you have to approximate them.
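
To make the reflection point concrete, here is a minimal recursive ray tracing sketch in C++ (purely illustrative; the scene, constants and type names are all invented for the example). The point is that an exact mirror reflection is just one more traced ray, which is exactly what a polygon rasterizer has to fake with environment maps or extra passes:

    // Minimal recursive ray tracer sketch: exact mirror reflections fall out
    // of tracing a secondary ray from the hit point.
    #include <cmath>
    #include <cstdio>

    struct Vec3 {
        double x, y, z;
        Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
        Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
        Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    };
    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3 normalize(Vec3 v) { return v * (1.0 / std::sqrt(dot(v, v))); }

    struct Sphere { Vec3 center; double radius; Vec3 color; double reflectivity; };

    // A made-up two-sphere scene: one diffuse, one mostly mirror-like.
    const Sphere scene[] = {
        {{0, 0, -5}, 1.0, {1, 0, 0}, 0.0},
        {{2, 0, -6}, 1.0, {1, 1, 1}, 0.8},
    };

    // Ray/sphere intersection: hit distance, or a negative value on miss.
    double intersect(const Sphere& s, Vec3 origin, Vec3 dir) {
        Vec3 oc = origin - s.center;
        double b = dot(oc, dir);
        double disc = b * b - (dot(oc, oc) - s.radius * s.radius);
        if (disc < 0) return -1.0;
        double t = -b - std::sqrt(disc);
        return t > 1e-4 ? t : -1.0;
    }

    // Shade the nearest hit, then recurse along the exact mirror direction.
    Vec3 trace(Vec3 origin, Vec3 dir, int depth) {
        if (depth > 4) return {0, 0, 0};                 // recursion cutoff
        const Sphere* hit = nullptr;
        double nearest = 1e30;
        for (const Sphere& s : scene) {
            double t = intersect(s, origin, dir);
            if (t > 0 && t < nearest) { nearest = t; hit = &s; }
        }
        if (!hit) return {0.2, 0.3, 0.5};                // background color
        Vec3 p = origin + dir * nearest;
        Vec3 n = normalize(p - hit->center);
        double diffuse = std::fmax(0.0, dot(n, normalize({1, 1, 1})));
        Vec3 local = hit->color * diffuse;
        if (hit->reflectivity <= 0) return local;
        Vec3 r = dir - n * (2.0 * dot(dir, n));          // exact mirror reflection
        Vec3 bounced = trace(p, normalize(r), depth + 1);
        return local * (1.0 - hit->reflectivity) + bounced * hit->reflectivity;
    }

    int main() {
        Vec3 c = trace({0, 0, 0}, normalize({0.3, 0.0, -1.0}), 0);
        std::printf("color = %.3f %.3f %.3f\n", c.x, c.y, c.z);
    }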

Can’t wait till realtime raytracing is possible

The vast majority of rendered movies are still done with scan conversion. They may use raytracing for certain, particularly sticky situations, but they’ve built their scan converters with enough hacks to do a pretty good approximation of reality.

Does anyone know at what resolution movie makers render their CGI scenes?

Toy Story, Monsters Inc., Shrek, Jimmy Neutron

What exactly do they do? Just render frame by frame and store it, perhaps compressed with MPEG-4?

Programmable hardware will become important for movies since it could cut production time many times over, but since certain effects are not directly supported by hardware, it won’t be enough.

I am under the impression that film output is usually in the range of 1000-2000 lines, with whatever image and pixel aspect ratio would strike your fancy.

I believe output is stored in some high-dynamic-range format, and fed to film printers with at least 12 bits of precision per component (after being filtered through exposure control).
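
If that is roughly right, the last step is easy to picture: a linear HDR value goes through an exposure curve and is then quantized to the recorder's fixed-point range. A tiny C++ sketch, where the exposure curve, gamma and 12-bit target are placeholder assumptions rather than anyone's actual pipeline:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical mapping from a linear HDR sample to a 12-bit film recorder
    // value; the exposure factor and gamma below are invented placeholders.
    uint16_t to_film_value(double linear, double exposure = 1.5, double gamma = 2.2) {
        double exposed = 1.0 - std::exp(-linear * exposure);  // simple exposure curve
        double encoded = std::pow(exposed, 1.0 / gamma);      // perceptual encoding
        double clamped = std::clamp(encoded, 0.0, 1.0);
        return static_cast<uint16_t>(clamped * 4095.0 + 0.5); // 12 bits: 0..4095
    }

    int main() {
        const double samples[] = {0.05, 0.5, 1.0, 4.0, 16.0}; // HDR values above 1.0 still fit
        for (double v : samples)
            std::printf("%6.2f -> %u\n", v, (unsigned)to_film_value(v));
    }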

I could also be on crack. That is a possibility, although hopefully remote!

Toy Story was done at 1536x992 (Information from http://www.sun.com/951201/cover/cover.html )

A Bug’s Life was 2048x862 (Information from http://www.dvdreview.com/html/dvd_review_-_pixar_interview.shtml )

The second article discusses how, with anti-aliasing, going higher than the resolution they used produced no discernible improvement in image quality.

j

Originally posted by j:
The second article discusses how, with anti-aliasing, going higher than the resolution they used produced no discernible improvement in image quality.

That’s hardly surprising. Since these are software rasterizers, they can have some very good implementations of antialiasing. Even better than ATi’s irregular grid method.

However, that said, I imagine that they would also be perfectly happy with using the highest AA the (ATi) card allows, rendering that at super-high resolutions, and then downsampling.
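
Downsampling a frame rendered at several times the target resolution is about as simple as it sounds; a rough box-filter sketch in C++ (the RGB float buffer layout and the integer factor are just assumptions for illustration):

    #include <vector>

    // Downsample an RGB float image by an integer factor with a plain box
    // filter, i.e. average each factor x factor block of supersampled pixels.
    std::vector<float> downsample(const std::vector<float>& src,
                                  int srcW, int srcH, int factor) {
        int dstW = srcW / factor, dstH = srcH / factor;
        std::vector<float> dst(dstW * dstH * 3, 0.0f);
        for (int y = 0; y < dstH; ++y)
            for (int x = 0; x < dstW; ++x)
                for (int c = 0; c < 3; ++c) {
                    float sum = 0.0f;
                    for (int sy = 0; sy < factor; ++sy)
                        for (int sx = 0; sx < factor; ++sx)
                            sum += src[((y * factor + sy) * srcW + (x * factor + sx)) * 3 + c];
                    dst[(y * dstW + x) * 3 + c] = sum / (factor * factor);
                }
        return dst;
    }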

Unluckily, the links are broken.

Originally posted by Overmind:
I think the reason why OpenGL is not used for the final output is not the lack of full featured shaders, but the fact that it is a polygon renderer.

I think that Pixar's Renderman (Toy Story, Monsters Inc., etc.) is a scanline renderer. It just makes sure that all of its polygons are smaller than a pixel.
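
That matches the usual description of REYES-style renderers: primitives get split until each piece projects to less than the shading rate (roughly a pixel), and only then shaded and sampled. A very rough sketch of that dicing decision, with an invented Patch type and a placeholder screen-size estimate:

    #include <vector>

    // Hypothetical parametric patch; only its parametric extent and a crude
    // estimate of its projected size matter for this sketch.
    struct Patch {
        double u0, u1, v0, v1;
        // Placeholder: a real renderer would project the patch's bound into
        // screen space to estimate its size in pixels.
        double approxScreenSize() const { return (u1 - u0) * (v1 - v0) * 1000.0; }
    };

    // Split patches until each piece is under the shading rate; each small
    // piece would then be diced into a micropolygon grid, shaded and sampled.
    void dice(const Patch& p, double shadingRate, std::vector<Patch>& toShade) {
        if (p.approxScreenSize() <= shadingRate) {
            toShade.push_back(p);
            return;
        }
        double um = 0.5 * (p.u0 + p.u1), vm = 0.5 * (p.v0 + p.v1);
        dice({p.u0, um, p.v0, vm}, shadingRate, toShade);  // split in four and recurse
        dice({um, p.u1, p.v0, vm}, shadingRate, toShade);
        dice({p.u0, um, vm, p.v1}, shadingRate, toShade);
        dice({um, p.u1, vm, p.v1}, shadingRate, toShade);
    }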

Well, a lot of these ideas are false. In CGI you must know that the rendering process, for ray tracing, goes from eye space to object space. In radiosity, we compute the global propagation of light energy… In OpenGL, we start from object space and go to image space. This is the reason why OpenGL is not able to render the same pictures as software renderers…
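
The loop-order difference being described can be shown with two skeleton render loops (all types here are invented stubs, not any real API): a rasterizer walks the objects and scatters them onto pixels, while a ray tracer walks the pixels and gathers from the scene:

    #include <utility>
    #include <vector>

    struct Triangle {};
    struct Color {};
    struct Image { int width = 0, height = 0; void set(int, int, Color) {} };

    Color shade(const Triangle&) { return {}; }
    std::vector<std::pair<int, int>> coveredPixels(const Triangle&, const Image&) { return {}; }
    Color castRay(int /*x*/, int /*y*/, const std::vector<Triangle>& /*scene*/) { return {}; }

    // OpenGL-style: object space -> image space ("which pixels does it cover?").
    void renderRasterizer(const std::vector<Triangle>& scene, Image& img) {
        for (const Triangle& tri : scene)
            for (auto [x, y] : coveredPixels(tri, img))
                img.set(x, y, shade(tri));
    }

    // Ray-tracer-style: image space -> object space ("what does this pixel see?").
    void renderRayTracer(const std::vector<Triangle>& scene, Image& img) {
        for (int y = 0; y < img.height; ++y)
            for (int x = 0; x < img.width; ++x)
                img.set(x, y, castRay(x, y, scene));
    }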

While hardware rendering is improving, software rendering is on the path of global illumination: a renderer like Mental Ray is much, much more sophisticated than the most advanced hardware renderer, and it runs in a totally different manner.

Actually, some software renderers use OpenGL to do prescanning in the first ray casting pass, for example: but mixing hardware rendering and software rendering is useful only for pixel operations or object-space-to-image-space operations. No more than that.

The reason why the GPU will be the future of CGI rendering is programmability. But we will have to wait a long time to see Mental Ray running on a GPU, and faster than on a CPU!!!
So we will see some renderers using hardware for "poor" but very fast rendering. But until GPUs are used to do faster ray tracing, we will never see serious hardware CGI rendering systems.

Renderman is an exception: it is based on a scanline algorithm, extended with very specific systems to render advanced effects.

All renderers use triangles: it's false that the limitation of OpenGL is being based on triangles only. Some very specific renderers render parametric surfaces directly, but right now nobody is trying to build such a system, because there are too many difficulties.

The limit of OpenGL in CGI is that its lighting computation model is not global.

Gaby

Ray tracing isn't very well suited to OpenGL and the like (though I did see a hardware-accelerated ray tracing demo a while back), but specular lighting can frequently do fine with some approximation (i.e. recursive local reflection mapping).
But radiosity? I wouldn’t even consider computing radiosity without hardware acceleration!
I would even consider using advanced stencil shadows as opposed to raytraced shadows.
I think that now that hardware is able to deal with high precision, high resolution formats, even without all the VPs and FPs, it could play an important role in production rendering. Granted, it may be far from doing everything but it can speed up many processes.

>>>Well, a lot of these ideas are false. In CGI you must know that the rendering process, for ray tracing, goes from eye space to object space. In radiosity, we compute the global propagation of light energy… In OpenGL, we start from object space and go to image space. This is the reason why OpenGL is not able to render the same pictures as software renderers…<<<

What? Vertex transforming isn't what's standing in the way of GL becoming a ray tracer.

The problem is the lighting model, and also the fact that GL is an immediate-mode machine.

GL would need to store scene information and then be asked to render the scene, and there lies the trouble: the huge processing time required after you submit the scene.

It’s simply not the time for this type of hardware. It would be slow, expensive, gamers don’t care, it’s a big risk for companies like nvidia and ATI, …

OpenGL is primarily used for the development tools in games and film visual effects.

One of the biggest differences between games and film is the fact that film uses composites where games use multi-pass techniques.

In film, a scene may be a composite of multiple layers: filmed original elements + digital rendered elements + hand generated digital elements.

The filmed original elements may be a background, actors on blue screen, a filmed explosion, some filmed smoke… The digitally rendered elements may be additional digital bits added to the explosion, a fake building, or additional traffic on the road… Then the hand-generated digital elements are things like mattes and recreated background elements to cover wires, and portions of background exposed when digital additions reveal things not visible before.

The tools used to create these elements are most likely written with OpenGL. The artists may render an "animatic" in OpenGL to get a good impression of their work. However, the final rendered elements are done with a series of special-case renderers, many of them scanline based. Because all these elements are composited in the end (using something like Shake, which is D3D on the PC and OpenGL on the Mac), each rendered digital element can use a different renderer; each renderer is written specifically to handle a subset of "reality" as fast as possible.
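
For anyone unfamiliar with compositing, the core operation these packages apply layer after layer is essentially the premultiplied-alpha "over" operator. A bare-bones C++ sketch (the buffer layout is an assumption, and real compositors add color management, filtering and far more):

    #include <cstddef>
    #include <vector>

    // One layer of a composite: premultiplied RGBA floats, all layers at the
    // same resolution (size = width * height * 4).
    struct Layer { std::vector<float> rgba; };

    // Composite fg over bg in place: out = fg + bg * (1 - fg.alpha).
    void compositeOver(Layer& bg, const Layer& fg) {
        for (std::size_t i = 0; i < bg.rgba.size(); i += 4) {
            float a = fg.rgba[i + 3];
            for (int c = 0; c < 4; ++c)
                bg.rgba[i + c] = fg.rgba[i + c] + bg.rgba[i + c] * (1.0f - a);
        }
    }

    // A finished shot is then just the rendered elements folded bottom-to-top:
    //   Layer result = background;
    //   for (const Layer& element : renderedElements) compositeOver(result, element);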

For example, a digital cup of coffee added to a table may use a fairly generic scanline renderer for the cup itself, a ray traced renderer for the shadow, another ray traced renderer for the cup-to-table reflections, yet another particle renderer for the steam rising off the coffee, and another renderer specifically for the coffee and its liquid physics. Each of these renderers has been highly tuned to solve its specific issues as fast as possible, and because each aspect of the coffee cup has been separated, each renderer is simpler (easier to maintain) and executes faster (less memory to describe the problem it is solving, smaller executable to handle the problem).

And when it is all done, something that game makers rarely understand, the final rendered elements may pass through a digital paint station where troublesome portions are "fixed" by a painter using a program like Photoshop (but supporting 16 bits per R, G and B channel).

It is quite common to see anywhere from 40 to 80 or even 150 composite layers in a final visual effects shot. Contrast this with game renderers, which may use multiple shaders and multiple passes, but never really approach the magnitude of complexity possible with film visual effects.

bsenftner, we are not speaking about layers, we are speaking about the rendering abilities of current or future hardware.

You are off the subject.

Multi-layer compositing is needed for composited film special effects, using FLAME or INFERNO or whatever. But these systems are not full-scene rendering systems like Renderman, Mental Ray or the Doom engine: and in most cases, special effects systems do not use advanced shaders, because they are made to run on basic Silicon Graphics chips!!!

Regards,

Gaby

Originally posted by gaby:

You are off the subject.

I disagree. I thought it was a great post since I didn’t know that was the case. Thanx bsenftner!
