OT: HL2 FSAA prob, what did they mean?

zeck –

I’m curious what you mean by the sampling theory stuff on top. My understanding is that the issue is not an aliasing problem, but an offset problem (like jitter or phase error). In multisampling, the geometry samples and the texture sample are taken at different locations, and the discrepancy projected onto texture space can be arbitrarily large (depending on the screen slope of the primitive). So texel boundaries in your packed textures will only have limited success there. Is this true?

I am also curious about the rotated quad screenshots above, particularly the 256x256 phenomenon. It’s my understanding that the minimum texture coordinate precision is 10 bits, so 256x256 shouldn’t be a problem. Maybe it’s because the texels are smaller, so in texture space the discrepancy is magnified?

I don’t know much about packing textures, but I’m wondering if you can clamp the LOD to enable mip-mapping on packed textures, as long as the packed textures sit on some power-of-two boundary. I guess this would probably wreak havoc on texture coordinates etc.
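
Roughly what I have in mind, as an untested sketch (needs GL 1.2’s LOD clamp parameters; the cut-off at level 2 is just an example value):

glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 2);
glTexParameterf (GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 2.0f);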

Miscellaneous facts about MIP-mapping:

You can use mip-mapping to guarantee no aliasing, but not using mip-mapping will not necessarily guarantee aliasing. If you sample a low-frequency texture (such as a lightmap), then as long as you don’t drop below that signal’s required sampling frequency (which is capped by the texture resolution, but otherwise independent of it) you will not get aliasing.
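
To put a made-up number on it: a 128x128 lightmap stretched across a wall that covers 512x512 pixels is sampled at about 4 pixels per texel. The highest frequency the lightmap can possibly hold is 0.5 cycles per texel, so Nyquist only asks for more than 1 sample per texel; at 4 you stay well clear of aliasing even with plain GL_LINEAR and no mipmaps.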

Usually, for textures that are going to be used at a variety of different resolutions, mip-mapped textures perform considerably better due to improved sampling coherence. This doesn’t have anything to do with “texture stride”, which doesn’t make sense except in the case of rectangle textures, and those can’t be mip-mapped. In video memory, textures are tiled/swizzled. This is as simple as performing permutations on the address wires, so it costs the hardware very little. It’s possible to treat frame buffers this way as well.
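
For illustration only (the actual swizzle patterns are vendor-specific and not public), a 2D Morton/Z-order interleave looks roughly like this:

/* interleave the bits of x and y: ... y1 x1 y0 x0 */
unsigned int swizzle2d (unsigned int x, unsigned int y)
{
    unsigned int addr = 0;
    unsigned int bit;
    for (bit = 0; bit < 16; ++bit)
    {
        addr |= ((x >> bit) & 1u) << (2u * bit);
        addr |= ((y >> bit) & 1u) << (2u * bit + 1u);
    }
    return addr;
}

Texels that are close in both s and t end up close in memory, which is what those permuted address wires buy you.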

-Won

[This message has been edited by Won (edited 07-21-2003).]

IMO there’s one important use for packed textures, and that’s keeping a tileset: tiles on terrain, for example (and font rendering, of course).

Originally posted by zeckensack:
[b]Technically it shouldn’t. This is most likely a precision limitation inherent to the tex coord interpolators. That’s a different issue.

However, I’d like you to check a few things in your test app:
1)Have you tried with a mipmapped checkerboard texture?
2)What’s the texel/pixel ratio, roughly? Do you magnify or minify?
You see, GL_LINEAR is not a particularly useful minification filter

Minification will lead to texture aliasing (shimmering) either way with GL_LINEAR, regardless of AA.[/b]

Okay, I tried it again. . .

256x256 texture, looks like the one in the screenshots I posted.

glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pBitmap->sizeX, pBitmap->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pBitmap->data);

I set the viewport to 512x512 pixels and set glOrtho’s parameters to twice the size of the quad, so as to render the texture with a 1:1 pixel:texel ratio.
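
In code, that setup looks roughly like this (reconstructed; the quad is assumed to span 0..1, the exact values may have differed):

glViewport (0, 0, 512, 512);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrtho (0.0, 2.0, 0.0, 2.0, -1.0, 1.0);	/* twice the quad's extent -> the quad covers 256x256 pixels */
glMatrixMode (GL_MODELVIEW);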

The result was that the bleeding still occurred, regardless of FSAA.

[This message has been edited by Ostsol (edited 07-21-2003).]

Usually, for textures that are going to be used at a variety of different resolutions, mip-mapped textures perform considerably better due to improved sampling coherence. This doesn’t have anything to do with “texture stride”, which doesn’t make sense except in the case of rectangle textures, and those can’t be mip-mapped.
I got this from an nVidia document on improving performance. By “texture stride” I was referring to the texel/pixel slope. I’m not sure where I picked up the term “texture stride”; I can’t seem to find it with a Google search, so I probably just made it up. The reason given in the document was that mip-mapping can increase cache coherency. For example, if a graphics chip stores 8 texels per cache line and the slope is 8 texels per pixel, you have to read in a new cache line for every pixel. Mip-mapping minimizes this slope. Perhaps we’re both talking about the same thing.
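
The way I read it, the level selection is roughly this (a simplified sketch in the spirit of the GL spec; dudx etc. are assumed texel-space derivatives of the pixel footprint, not anything from the document):

#include <math.h>

/* rho = how many texels a one-pixel step covers; lod = log2(rho) */
static float select_lod (float dudx, float dvdx, float dudy, float dvdy)
{
    float rho = fmaxf (sqrtf (dudx*dudx + dvdx*dvdx),
                       sqrtf (dudy*dudy + dvdy*dvdy));
    return log2f (fmaxf (rho, 1.0f));   /* never go below the base level */
}

With an 8-texel cache line and rho around 8, GL_LINEAR at the base level touches a new cache line for nearly every pixel; sampling level log2(8) = 3 instead brings the footprint back to roughly one texel per pixel.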

I found an nVidia document about maximizing texture performance in D3D that talks about the issue of storing lightmaps as subtextures and the bleeding issues that arise. It doesn’t say anything about FSAA, though.
http://developer.nvidia.com/docs/IO/1409/ATT/GDC_Texture.ppt

----glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);----

hmmmm?

Yeah, that’s an odd MAG filter, considering mip-mapping doesn’t do anything in this case.

What happens when you offset the quad by fractional pixel amounts?

-Won

Originally posted by Won:
[b]zeck –

I’m curious what you mean by the sampling theory stuff on top. My understanding is that the issue is not an aliasing problem, but an offset problem (like jitter or phase error). In multisampling, the geometry samples and the texture sample are taken at different locations, and the discrepancy projected onto texture space can be arbitrarily large (depending on the screen slope of the primitive). So texel boundaries in your packed textures will only have limited success there. Is this true?
That’s essentially the issue. A (multisample) subpixel ends up covering a complete pixel after downfiltering; relative position information inside this pixel is lost. So an assumed single subpixel ends up on the pixel center, which is also the position for which the texture sample is generated. The problem is that the actual subpixel position before downfiltering is inside the polygon (right at its edge), but its position after downsampling (and accordingly the position used for the texture lookup) is outside the polygon.

Re the signal theory stuff:
If you transfer this problem into the frequency domain, the current multisampling approach is the right thing. Your screen pixels form an ordered grid, ie a constant and known maximum representable frequency (think ‘sampling rate’).
Mipmapped texture sampling is a means to use this frequency headroom to its fullest, without exceeding it (That would lead to texture aliasing aka shimmering; this is exactly the same reason why an audio A/D-converter must have a low pass filter at the analog input).

For this to work, the source material should fully use its own frequency headroom, ie the smallest texture features should be one texel in size. You should not apply a ‘smooth’ filter to textures, because that reduces their frequency content. This is why I made the Epic comment: they do exactly that and then ‘compensate’ for the resulting blurriness with a negative LOD bias. This is just plain stupid.
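
For reference, that ‘compensation’ amounts to a single state setting (EXT_texture_lod_bias; the -0.5 here is just an example value):

glTexEnvf (GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, -0.5f);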

Back to the issue at hand, “centroid” sampling selectively reduces the signal frequency at the offending edges by stretching out the source material. This deforms the sampling grid and creates a ‘black hole’ in the frequency domain. All the edge samples will be comparatively low frequency (low contrast to their poly-interior neighbours), even though the frequency of the screen grid is not warped in any way. This cannot be right.

This compares most obviously to older movies’ bluescreen edge artifacts, when the effect was done at too low a resolution. That should cover the screen space edges.

The more interesting problem (to me, anyway) appears where two ‘centroid’ sampled edges belonging to the same object connect. A proper model would have its tex coords laid out so that the sampling would straddle this edge in a linear fashion. The transfer function ‘texture frequency’->‘screen frequency’ is not affected by the number of interior edges.

“Centroid” creates breaches in the transfer function, where part of the source material is underrepresented in the sampled result. This is also a form of aliasing.
This issue is also already known: it manifests when you render a large 2D image with several independent tile textures, where you need to take some precautions for the result to look right (texture borders). A simple GL_CLAMP_TO_EDGE will not work. Well, centroid is a glorified form of clamp to edge, so I expect the same thing to happen.

Hmm, I also wonder if all those teasers they released before that statement showed any of these problems?

I assume they just started testing FSAA and said “OOPS! We can’t fix this in time, so let’s start the blame game!”?

Maybe they should switch back to OpenGL

HMMMM, NV hw has similar problems in Splinter Cell, though I remember that with the oldest drivers, when FSAA wasn’t disabled, there were no problems.

BTW, how does this texture packing relate to the 8x1 vs 4x2 pipe design?

Originally posted by V-man:
[b]----glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);----

hmmmm?[/b]

Originally posted by Won:
[b]Yeah, that’s an odd MAG filter, considering mip-mapping doesn’t do anything in this case.

What happens when you offset the quad by fractional pixel amounts?

-Won[/b]

Yeah. . . I messed up copy-pasting and reusing old code that I must have been experimenting with, and managed to screw it up.

Anyway, I translated the quad by 0.1 along the x-axis (with no rotation and no FSAA) and there was some bleeding, again.

zeck –

OK. Let’s see if I have a correct understanding of what is meant by “centroid sampling”. In the ATI guy’s description, does “centroid of the fragment” mean the centroid of the portion of the primitive that is contained within the pixel? How does this become a problem for menus/text? And your particular complaint is that in centroid sampling the texture sampling mesh is offset, disturbing the regular sampling?

About the plain stupidity of Epic, there may actually be a semi-intelligent reason: a bilinear filter doesn’t make a very good low-pass filter, and performing the smoothing off-line essentially lets you control the filtering better. From a performance standpoint it is probably not a good idea, though, considering you are essentially screwing with texture caching/prefetching on purpose and eating up video memory. Stupid? Depends. Tim Sweeney is a pretty smart guy.

Also, just to be more precise: ADCs don’t need to have low-pass filters; they can work with bandpass filters as well, as long as you don’t exceed the total bandwidth (not the Nyquist frequency) of the ADC.

-Won

If we put aside the packing of textures into one big texture and just look at the screenshots Ostsol posted, the problem is more obvious.

This is not a problem for software developers to work around, IMO; it should be addressed by the 3D card manufacturers.

From what I understand, in MSAA modes both NVIDIA and ATI allow texture samples outside the specified texture coordinates.
This seems pretty strange to me. Why should this be needed? Is it a precision problem?

On the subject of the “blame game”, Valve is right to “blame” ATI and NVIDIA, IMO.
I don’t think we should have to resort to always using “clamp to edge” to be safe from MSAA errors…

Just my $0.02

Ostsol: What texture coordinates are you using? Do they range from (0,0) to (1,1)? If so, it would be better to use 1/(2N) to 1 - 1/(2N), where N is the width or height of the texture. This prevents OpenGL from sampling outside the texture. Note that this is problematic with mipmaps, though.
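
In code, the inset is simply (with N the texture width or height, as above):

/* half-texel inset; e.g. for N = 256 the coords run from 1/512 to 511/512 */
static float inset_lo (int N) { return 1.0f / (2.0f * (float) N); }
static float inset_hi (int N) { return 1.0f - 1.0f / (2.0f * (float) N); }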

Originally posted by M/\dm/:
[b]HMMMM, NV hw has similar problems in Splinter Cell, though I remember that with the oldest drivers, when FSAA wasn’t disabled, there were no problems.

BTW, how does this texture packing relate to the 8x1 vs 4x2 pipe design?[/b]

Splinter Cell’s problem with FSAA is different: it comes from the fact that for applying some effects (night vision and stuff like that) it renders the scene into a texture, and DX8 doesn’t support multisampled textures as a render target.

Won,
Yes, that’s centroid sampling: you take the mean position of the covered multisampling subpixels as the screen space position for the texture lookup (see the sketch below). Note that this is the final screen pixel center if all subpixels are covered (ie the polygon interior).
AFAIK centroid is supposed to be app controllable per-texture sampler state, so you’d simply turn it off for overlays.
And yes, my complaint is based on the non-uniform sampling. I guess that makes me a radical
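
In pseudo-C, the idea as I understand it (illustration only, not any particular vendor’s hardware):

typedef struct { float x, y; } vec2;

/* average the positions of the covered subsamples; if all of them are
   covered, this degenerates to the ordinary pixel center */
static vec2 centroid_position (const vec2 *sample_pos, const int *covered, int n)
{
    vec2 c = { 0.0f, 0.0f };
    int hits = 0;
    int i;
    for (i = 0; i < n; ++i)
        if (covered[i]) { c.x += sample_pos[i].x; c.y += sample_pos[i].y; ++hits; }
    if (hits) { c.x /= (float) hits; c.y /= (float) hits; }
    return c;   /* screen space position used for the texture lookup */
}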

Re Epic, I’m certain Tim Sweeney is a clever guy. I’d rather think this is an oversight somewhere in the art toolchain, maybe a custom Photoshop plugin created from a ‘spoiled’ template or something. All U-engine games I’ve seen share this peculiarity.
Bilinear is not “the best” low pass, but it works. It cannot - by design - fully use the frequency headroom without violating Nyquist, but that’s not a problem as long as “when in doubt” the lower detail mipmap level is used. As far as I’m concerned this is how it has always been done.
If you want more detail, there are trilinear and anisotropic filters. These perform worse, of course, but what Epic did was give bilinear filtering essentially the same performance characteristics as trilinear filtering while still delivering inferior sampling quality. I don’t see how this can be regarded as clever.
If it was a conscious decision to do that, I’ll stand by my word, it’s stupid.

Given Epic’s market position among engine vendors, I feel they could at least apply a little scrutiny and recheck their older code from time to time, especially when it’s abundantly clear that performance depends on texture sizes much more than with any other engine. They must have noticed this. It’s their responsibility to investigate, plain and simple.

PS: a band pass filter can be regarded as a serial combination of a low pass and a high pass. It can even be constructed this way (in analog circuitry land).
A high pass in front of an ADC serves no purpose but to eliminate DC voltage, ie to center the waveform on the zero level.

zeck –

Amazing…an intelligent conversation.

I’m going to ask my EE friends about this sampling stuff. Here’s my thought: I don’t think centroid sampling is that bad an idea. Basically, it comes down to how significant a “breach” in the transfer function you’ll actually see, and that can be assessed quantitatively. The thing is, the texture is already being sampled non-uniformly (albeit regularly) because of the way screen pixels lie in texture space. Texture filtering is decided per fragment, so if you have anisotropic filtering enabled it ought to be able to do the right thing.

I disagree with your assessment of U-engine filtering. It is possible to get superior filtering with the method Sweeney employs compared even to trilinear, because you can choose a filter kernel that is better or more appropriate than a linear filter while still avoiding aliasing. Like I said above, even if you undersample a texture, if the texture is pre-filtered you will not see aliasing. In the extreme case of a single-color texture (which has no non-zero frequency components) the sampling doesn’t matter at all. The signal (texture) bandwidth is independent of the medium’s maximum bandwidth (texture resolution). For performance reasons it is clear to me that you don’t want to waste resolution, so you wouldn’t do what Sweeney did. Neither would I, frankly.

I wonder how texture compression factors into this. You can use higher-resolution textures and undersample them and still get higher-quality filtering for certain images. Interesting.

And to make things even more off-topic:
Yes, of course there needs to be a low-pass filter in there somewhere. Duh. I was trying to be clever and point out that the band an ADC captures doesn’t have to include 0 Hz: an ADC sampling at 100 kHz gives you about 50 kHz of bandwidth, but that band can sit at, say, 100-150 kHz instead of 0-50 kHz. Obviously, your reconstruction filter has to match your antialiasing filter. You were right; I guess I wasn’t really disagreeing with you, just adding.

-Won

Originally posted by stefan:
Ostsol: What texture coordinates are you using? Do they range from (0,0) to (1,1)? If so, it would be better to use 1/(2N) to 1 - 1/(2N), where N is the width or height of the texture. This prevents OpenGL from sampling outside the texture. Note that this is problematic with mipmaps, though.

The artifacts also return as the quad gets further and further away from the camera.

I’m just wondering what are the possible uses for texture packing? I know that fonts and animated sprites are some obvious uses, but what else is there?

Texture packing can also be useful for using non-power-of-two textures without wasting all the pad space, but mainly it’s used simply to reduce state changes (at least when I’ve done it in the past).
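
For what it’s worth, the coordinate remap for a packed square tile set is trivial; tile and tiles_per_row here are made-up parameters:

/* map a tile-local (u,v) in [0,1] into the packed atlas */
static void atlas_coords (int tile, int tiles_per_row, float u, float v,
                          float *s, float *t)
{
    float inv = 1.0f / (float) tiles_per_row;
    *s = ((float) (tile % tiles_per_row) + u) * inv;
    *t = ((float) (tile / tiles_per_row) + v) * inv;
}

The bleeding discussed in this thread is exactly what happens when u or v sits right on 0 or 1 here.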

W.r.t. subsample texel evaluation, if I understand it correctly it seems obviously flawed in the OpenGL implementation. “What the heck were they thinking?” :-)

FWIW, with filtering etc. I’d never fully rely on textures being packed exactly adjacent to each other anyway, because you’d experience similar problems even without antialiasing, simply due to filtering. So packing tightly only really makes sense with nearest filters, since other filters will inevitably bleed in the same way, and even with nearest filters I wouldn’t do it because I wouldn’t trust the interpolation precision even if the hardware did the “right thing”. It seems a bit naive to get yourself into this kind of situation in the first place.

The lightmap stuff is only loosely related; it applies to filtering, and as I’ve said, that’s always going to bite you with or without AA.

In the situations that aren’t fully self-inflicted wounds, I’d say learn your lesson and adjust your coords; current hardware will be around for a long time.

[This message has been edited by dorbie (edited 07-22-2003).]

You don’t always need a border pixel in lightmaps; you only need it if bilinear filtering at the edge of the lightmapped polygon could use a pixel not in the lightmap. Because of this, sometimes you need two pixels between adjacent lightmaps, sometimes one, and sometimes zero, and it can vary from edge to edge. Not using border pixels when you don’t have to can be a significant savings.

DX had problems from the start with pixel addressing => top-left corner vs. center. PERIOD