I have been trying to get the concept of supersampling right.
As far as I know, a pixel can only be made out of 3 colors,
that is, 3 beams. Supersampling says that it redraws the image
at a higher resolution and scales it back down!
But wouldn't that be the same?! And is it possible to divide
a color beam into slices to give that small area inside the
pixel a different color?!
I am trying to show what I mean in the following picture.
Please, if you have an explanation, I'd really appreciate it.
It has been very confusing for me for many days and I still don't get it.
Theoretically it might be possible if the driver supported such a thing, and to be honest ClearType fonts are done this way.
The problem is that there is no way to predict the subpixel color pattern.
But anyway, if you wanted to do such a thing, you would first do a 1x3 or 3x3 supersampling (that is, up to 3x3 times the screen resolution).
Then you overlay alternating red, green and blue lines.
Finally you resample it to the original screen resolution, effectively putting a red, a green and a blue line on each pixel.
Though I don't think it would be that beneficial, as you may get some colored fringes.
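If it helps to see the 1x3 idea concretely, here's a tiny toy of my own (the function name and the RGB-stripe layout are my assumptions, not anything a real driver exposes): render at 3x the horizontal resolution, then route each of the three horizontal samples into the R, G and B channels of one output pixel.

```python
# Toy sketch of 1x3 horizontal subpixel sampling (assumption: the
# panel is RGB-striped left to right). Render grayscale at 3x the
# horizontal resolution, then pack each consecutive triple of
# samples into the R, G, B channels of a single output pixel.
import numpy as np

def subpixel_downsample(hi_res):
    """hi_res: grayscale image of shape (h, 3*w), values in 0..1.
    Returns an (h, w, 3) RGB image, one sample per subpixel stripe."""
    h, w3 = hi_res.shape
    assert w3 % 3 == 0
    w = w3 // 3
    # Consecutive horizontal triples become the R, G, B channels.
    return hi_res.reshape(h, w, 3)

# A hard vertical edge rendered at 3x width:
hi = np.zeros((2, 12))
hi[:, 5:] = 1.0              # the edge falls inside one pixel triple
lo = subpixel_downsample(hi)  # that pixel ends up (0, 0, 1): a tint
```

Any triple whose three samples differ becomes a tinted pixel, which is exactly where the colored fringes mentioned above come from.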
Supersampling is done in memory: e.g. 4x4 pixels get merged into one (though usually a blur kernel is used, AFAIK), and the result is then uploaded to the monitor. Games and graphics don't care about the layout of the 3 subpixels.
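That merging step can be sketched in a few lines (a minimal toy of my own, assuming a plain box filter instead of a fancier blur kernel). Note that pixel values are whole RGB colors; the monitor's subpixel layout never enters the computation.

```python
# Box-filter downsampling: render at n times the resolution, then
# average each n-by-n block down to one screen pixel. This is my own
# minimal sketch; real renderers often use a weighted (blur) kernel.
import numpy as np

def downsample(img, n):
    """img: (n*h, n*w, 3) float array; returns (h, w, 3) block averages."""
    h, w, c = img.shape[0] // n, img.shape[1] // n, img.shape[2]
    return img.reshape(h, n, w, n, c).mean(axis=(1, 3))

# A hard black/white edge rendered at 4x becomes a gray blend at 1x:
big = np.zeros((4, 8, 3))
big[:, 3:, :] = 1.0          # edge partway through the left block
small = downsample(big, 4)   # left pixel: 0.25 gray, right pixel: 1.0
```

The partially covered pixel comes out gray, which is the whole point: the extra samples tell you how much of each final pixel the edge covers.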
ClearType degrades color in order to present smoother monochrome text on screen. Subpixel-sampling non-monochrome content seems plain wrong to me.
ClearType does its supersampling internally, in small RAM-based buffers, while ordinary supersampling works on the whole (usually screen-sized) VRAM-based framebuffer.
P.S. There are many subpixel grid layouts, and users must explicitly download a tool, identify the layout manually, and let the OS know what grid type their monitor has. Most people don't tune it.
Personally I hate ClearType because of the rainbow-like appearance. And as Ilian said, it only works on monochrome images.
Thanks, guys, for your replies, but this is not exactly what I need to understand; I think I didn't make myself clear.
All the 3D rendering engines have the option of supersampling,
and what I read says it subdivides a pixel!
So is subdividing a pixel possible? Obviously not, so
what exactly happens?? According to what I read, it draws
the image at 2x, 3x... the resolution and then scales it back.
I simulated that in Photoshop, as you can see, assuming this is my monitor, and it didn't make sense to me: as the picture I uploaded shows, the new dimensions of the image don't fit into the 3 color beams of the screen.
It's pretty confusing for me: what is the point of rendering at 3x or more the resolution if in the end it's going to be scaled back to the same pixels and give the same anti-aliasing?!
I hope you guys see what I mean, but in terms of a rough, solid understanding this is all very confusing for me.
haaaa … ok.
Look at this :
The idea is to take several samples, then average the results for each displayed pixel. It has nothing to do with the RGB grid.
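To make the sample-and-average idea concrete, here's a toy of my own (not from any particular engine): evaluate the scene at several points inside one pixel and average the hits. The pixel gets a single blended color; the subpixels never enter into it.

```python
# Several samples per displayed pixel, averaged into one coverage
# value. pixel_coverage and its ordered-grid sampling are my own
# illustration, not a real engine's API.
def pixel_coverage(px, py, inside, n=4):
    """Average n*n point samples of inside(x, y) over pixel (px, py)."""
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n   # sample positions inside the pixel
            y = py + (j + 0.5) / n
            hits += inside(x, y)
    return hits / (n * n)

# A half-plane edge at x = 2.5 cuts through pixel column 2:
edge = lambda x, y: x < 2.5
print([pixel_coverage(px, 0, edge) for px in range(4)])
# -> [1.0, 1.0, 0.5, 0.0]
```

The pixel the edge passes through comes out half covered, i.e. gray; that partial value is the anti-aliasing you can't get from a single sample per pixel.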
What we really need is a new display technology. Let’s do away with these pesky pixels once and for all.
lol, what did pixels ever do to you? Vector stuff is imprecise, slow, and back to analog.
Now if only we had displays as cheap as LCDs that don't skew colors when you move sideways a bit (change the viewing angle)… something like a 26" plasma, or those OLEDs.
Hey, everyone said Dr. Madrantur Moonmeld of the now defunct Mindberry institute was crazy when he said he could have his tukus tickled with a feather for 3 days straight, but he pulled it off without so much as a whimper.
Heck, there's already a theoretical foundation for creating a universe in a lab. A more direct form of representation for rendering shouldn't be too much of a stretch. Think supersymmetry, not supersampling.