History of Super-Sampling - when was it first used in commercial hardware?

I’m trying to track down the history of Super-Sampling, and I can’t find when this method was first used. I’ve seen it present in a lot of video cards from around 2004-ish onward, but surely someone must have taken this “brute force” approach to FSAA before 2004.

Yes.

Are you asking about when it was first used in commercial hardware (your subject), or when it was first used for FSAA? Two very different questions.

It’s been used for FSAA on GPUs even before GPUs natively supported FSAA. (**)

It’s been used for decades more for FSAA with ray tracing.

And it’s been used in numerical methods long before that (for example, websearch supersampling with signal processing, or sampling and reconstruction methods).

(**) For a quick blast from the pre-2000 GPU past, see this SIGGRAPH 1999 Course Notes link at sgi.com describing how to do FSAA with OpenGL without natively-supported FSAA (e.g. MSAA). This same topic was presented in courses at least as far back as SIGGRAPH 1996. Note that SGI had MSAA support at the time (and had for quite a while), but MSAA hadn’t hit the off-the-shelf PC commercial lines of GPUs yet.
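The accumulation-buffer technique in those course notes boils down to: render the scene several times with sub-pixel jitter, accumulate, then return the average. Here’s a minimal sketch of that idea in plain Python (no real GPU involved) — the `shade` function is a made-up stand-in for one full render pass:

```python
def jittered_ssaa(shade, width, height, offsets):
    """Average several 'renders' taken at sub-pixel jitter offsets.

    shade(x, y) stands in for one full render pass: it returns the
    color at continuous screen position (x, y).  In OpenGL this would
    be a jittered projection plus glAccum(GL_ACCUM, 1.0 / n) per pass.
    """
    image = [[0.0] * width for _ in range(height)]
    for dx, dy in offsets:                  # one "render pass" per jitter offset
        for py in range(height):
            for px in range(width):
                image[py][px] += shade(px + dx, py + dy)
    n = len(offsets)
    # Equivalent of glAccum(GL_RETURN, 1.0 / n): divide out the pass count.
    return [[v / n for v in row] for row in image]

# Hypothetical scene: a hard vertical edge at x = 1.5 (white left, black right).
edge = lambda x, y: 1.0 if x < 1.5 else 0.0

# Four jitter positions per pixel (a 2x2 sub-pixel grid).
offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
img = jittered_ssaa(edge, 3, 1, offsets)
# The middle pixel straddles the edge and comes out gray (0.5).
```

The point is that each pass is an ordinary aliased render; the anti-aliasing falls out entirely from the jitter-and-average loop, which is why no native FSAA support was needed.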

I’m looking for information on specifically Super-Sampling, not MSAA. I’m sure Super-Sampling was used ages ago in software renderers (for instance, 1997’s TrueSpace 3 did software rendering and offered Super-Sampling), but I suppose I’m trying to find out when it was that people were able to go buy a video card and expect that video card to do super-sampling, either in drivers or in hardware. I again assume that SGI beat everyone to the punch, but I’m not sure when, and I’m not sure with which product.

I had come across some related information pointing to research done in 1933 by a Russian researcher who was (so I’m told) the grandfather of sampling; then sometime in 1948, Shannon’s proof brought it to the English-speaking world…so the ideas and the proofs go back a ways, but I’m just trying to figure out when someone could buy a video card for their workstation that did it.

Right. I assumed that in my answer.

I’m trying to find out when it was that people were able to go buy a video card and expect that video card to do super-sampling either in drivers or in hardware.

You don’t need specific driver support for SSAA to use SSAA for FSAA with commercial hardware. That was my point in mentioning the accumulation-buffer SSAA technique. You could also 2X your render-target res, render, and then follow that with a simple downsample pass. This implements SSAA without native SSAA or accumulation-buffer support in the driver.
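That downsample pass is nothing more than a box filter over each 2×2 block of the oversized image. A sketch of just the filter, in plain Python on a grayscale grid (in a real renderer this would be a textured fullscreen pass or a blit with filtering):

```python
def downsample_2x(hi):
    """Box-filter a 2x-supersampled image down to target resolution.

    hi is a (2H x 2W) grid of gray values, as if the scene had been
    rendered into a double-resolution render target; each output pixel
    is the average of its 2x2 source block.
    """
    H, W = len(hi) // 2, len(hi[0]) // 2
    return [[(hi[2 * y][2 * x] + hi[2 * y][2 * x + 1] +
              hi[2 * y + 1][2 * x] + hi[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(W)]
            for y in range(H)]

# A 2x2 "render" of one half-covered pixel: two white samples, two black.
hi = [[1.0, 0.0],
      [1.0, 0.0]]
lo = downsample_2x(hi)  # the edge pixel resolves to gray
```

Again, nothing here requires the driver to know SSAA exists — it’s just a bigger render target plus an averaging pass the application supplies itself.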

Are you really just interested in when somebody like SGI, 3dfx, nVidia, or ATI implemented SSAA but put an API on top of it at the driver level?

[QUOTE=Dark Photon;1285572]
Are you really just interested in when somebody like SGI, 3dfx, nVidia, or ATI implemented SSAA but put an API on top of it at the driver level?[/QUOTE]

My situation is a weird one, and I apologize I’m not asking this question as clearly as I could. I feel like super-sampling was done at a more consumer level sometime in the mid-90s, by which I mean you wouldn’t need a rack full of SGI hardware to do it, there might have been a card that did instead. I can’t find any such card…but surely they existed, right? Thanks for your help.

You’re probably right, or close at least (I’d guess late-90s instead). I don’t know for sure myself which company/product was first. But one reference for you (predating the term “GPU”) of SSAA support provided in an early PC rasterizer card is 3dfx’s T-Buffer:

LINK

If you dig around in web archives for some of the attached links, you can find more detail on 3dfx’s T-buffer (apparently short for Tarolli Buffer). For example:

LINK

Websearch on “3dfx Tarolli”, and you can easily find more T-buffer info and that Tarolli refers to Gary Tarolli, CTO of 3dfx, and former SGI employee. For instance:

Awesome, I think that’s what I was looking for. I wouldn’t have thought to ask for “T-Buffer,” but there it is! Though now I wonder if like…Intergraph or someone did something prior to that, but this is the best “data point” I have for SSAA done on the GPU. Thank you.

Sure thing. It was interesting looking back at this.

Though now I wonder if like…Intergraph or someone did something prior to that…

You got me thinking about this. :slight_smile: Earlier today, it occurred to me that back in the '92 timeframe, I worked on the SGI Personal Iris (a deskside, not a big rack-mount), and later on SGI Indigo and Octane workstations (both deskside) between 1998-2002 (see SGI hardware timeline).

Looking them up (LINK, LINK), it looks like most if not all of these had an accumulation buffer which could be used for multipass SSAA (though I didn’t actually use it for that myself at the time). That said, I don’t know if any of these platforms put an API on top of SSAA, subsuming the downsample pass transparently in the driver (not that you need that). And beyond that, these systems weren’t exactly what you’d call easily affordable by PC standards (more like O(car) than O(PC)).

Thanks for the additional info. I guess my question goes even deeper and becomes hard to quantify after a certain point. For some reason, I associate 1980s/1990s high-end professional graphics with super-sampled AA. While there are a few examples of where this was done, I’m not able to fully trace back how I got that idea in the first place at my level, and my level consisted entirely of Doom, Quake, Descent, and Command & Conquer back then. Not a lot of SSAA going on there…and yet I had this notion of “high-end” graphics from somewhere, and I’ve been chasing that dragon ever since.

I guess it must have been a lot of things, and easy to confuse, since SSAA had conventionally been done in software for so long…seeing it done in hardware would have been indiscernible, but somehow I thought it was done regularly that way. Even in your SGI hardware, it sounds like it was done as a trick of clever instructions rather than the way the hardware works, and it wasn’t something that could be easily toggled on and off with driver configuration.

So, like you said, I guess in a way I am looking for an instance where an API did something unique in the driver that made SSAA as accessible as a simple configuration switch to the end user, which looks like it must be the VSA-100 chips…but can that be right? How did a consumer-grade gaming video card become the first to offer that feature /that/ way?