Blend equation on the OpenGL reference page

hello everyone, I ran into a question while doing some blending tests.

I googled first, and the blend formulas I found all say (if I set glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA)) that the sfactor and dfactor are the alpha values of the src color and dst color, i.e. (As, As, As, As) and (Ad, Ad, Ad, Ad).

But the OpenGL reference page says they are the alpha value divided by kc, where kc = 2^mc − 1 (mc is the number of RGBA bitplanes). It says GL_SRC_ALPHA is (As/kA, As/kA, As/kA, As/kA). If I have 8-bit RGBA bitplanes, that makes the color components very small ([0, 1] / (kA = 255)) and the result should look far too dark, and GL_DST_ALPHA, GL_ONE_MINUS_SRC_ALPHA, etc. are similar cases. But my test results don't look like they were divided by kc. So what does kc actually mean? Thanks in advance.
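In other words, I expected the blend (with the default GL_FUNC_ADD equation) to compute something like this. This is just a sketch of my understanding, with everything already treated as normalized [0, 1] values:

```c
/* Sketch of my understanding: glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA)
   with the default glBlendEquation(GL_FUNC_ADD), where src and dst
   are already normalized [0, 1] colors. */
typedef struct { float r, g, b, a; } Color;

static Color blend_src_alpha_dst_alpha(Color src, Color dst)
{
    Color out;
    out.r = src.r * src.a + dst.r * dst.a;
    out.g = src.g * src.a + dst.g * dst.a;
    out.b = src.b * src.a + dst.b * dst.a;
    out.a = src.a * src.a + dst.a * dst.a;
    /* a fixed-point framebuffer would then clamp each component to [0, 1] */
    return out;
}
```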

Could you use this to make sections of a bitmap transparent?

All the “divided by kc” stuff is talking about is the transformation from normalized values to floating-point values. The colors in an “8-bit RGBA bitplane” are not stored as [0, 1] floating-point values. They are stored as [0, 255] integers. The division by kc (255) is what converts them to [0, 1] floating-point values.

To be honest, the inclusion of that conversion in the blending page just makes it more confusing.
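Roughly speaking, the relationship is just this (an illustration of the idea, not actual driver code):

```c
/* Illustration only: how an 8-bit stored component relates to the
   normalized [0, 1] value the blending math works with (kA = 2^8 - 1 = 255). */
float to_normalized(unsigned char stored)
{
    return stored / 255.0f;
}

unsigned char to_stored(float normalized)
{
    if (normalized < 0.0f) normalized = 0.0f;           /* clamp to [0, 1] */
    if (normalized > 1.0f) normalized = 1.0f;
    return (unsigned char)(normalized * 255.0f + 0.5f); /* round to nearest */
}
```

The blending math itself operates on the [0, 1] values; the division by 255 is only the storage conversion.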

to blessman11: I haven't tried it, but I think it involves the order in which opaque and transparent objects are rendered.
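Something like this, I guess (an untested sketch; drawOpaqueObjects and drawTransparentObjectsBackToFront are placeholder functions, not real GL calls):

```c
/* untested sketch: draw opaque geometry first, then blend transparent
   geometry on top, sorted back to front */
glDisable(GL_BLEND);
drawOpaqueObjects();                     /* placeholder */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                   /* keep depth test, stop depth writes */
drawTransparentObjectsBackToFront();     /* placeholder */
glDepthMask(GL_TRUE);
```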

to Alfonse Reinheart: thanks for your response! And excuse me, it raises another question: I have 8-bit RGBA bitplanes.

If I use glColor4us (a 2-byte unsigned short), the value is in [0, 65535]. Does OpenGL linearly map that value to [0, 255] and then finally transform it to [0, 1]?

What if I use glColor4f, where the values are already in [0, 1]? Does OpenGL then not do anything? Does that mean I should use glColor*f rather than the other glColor*[b,s,i,d,…] variants in most cases?
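To make the question concrete, I would expect these two calls to end up as the same color (assuming the conversion is simply value / 65535):

```c
/* my assumption: both calls should produce the same final color in an
   8-bit RGBA framebuffer if the conversion is value / 65535 */
glColor4us(65535, 65535, 0, 65535);   /* unsigned 16-bit components */
glColor4f(1.0f, 1.0f, 0.0f, 1.0f);    /* float components in [0, 1] */
```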