glStencilFunc describes whether OpenGL should pass or discard fragments based on the content of the stencil buffer. It has the signature:
void glStencilFunc(GLenum func, GLint ref, GLuint mask);
`func`
Specifies the test function. Eight symbolic constants are valid: GL_NEVER, GL_LESS, GL_LEQUAL, GL_GREATER, GL_GEQUAL, GL_EQUAL, GL_NOTEQUAL, and GL_ALWAYS. The initial value is GL_ALWAYS.

`ref`
Specifies the reference value for the stencil test. `ref` is clamped to the range [0, 2^n − 1], where n is the number of bitplanes in the stencil buffer. The initial value is 0.

`mask`
Specifies a mask that is ANDed with both the reference value and the stored stencil value when the test is done. The initial value is all 1's.
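To make the interaction between `func`, `ref`, and `mask` concrete, here is a rough CPU-side model of the test described above. It is only an illustration of my understanding, not actual driver code; the enum and `stencil_test` function are hypothetical helpers, not part of OpenGL.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical enum mirroring the eight GL_* comparison constants. */
typedef enum { NEVER, LESS, LEQUAL, GREATER, GEQUAL, EQUAL, NOTEQUAL, ALWAYS } StencilFuncMode;

/* Sketch of the stencil test: both the reference value and the stored stencil
 * value are ANDed with the mask, then compared with the chosen function
 * (reference value on the left of the comparison). */
static bool stencil_test(StencilFuncMode func, uint8_t ref, uint8_t mask, uint8_t stored)
{
    uint8_t r = ref & mask;     /* masked reference value */
    uint8_t s = stored & mask;  /* masked stored stencil value */
    switch (func) {
        case NEVER:    return false;
        case LESS:     return r <  s;
        case LEQUAL:   return r <= s;
        case GREATER:  return r >  s;
        case GEQUAL:   return r >= s;
        case EQUAL:    return r == s;
        case NOTEQUAL: return r != s;
        case ALWAYS:   return true;
    }
    return false;
}
```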
I just saw the possible options for the `func` parameter (the initial value is `GL_ALWAYS`), and they are all boolean (sort of) operations, resulting in either 1 or 0 (pass or fail).

Question 1: If the stencil test only decides whether or not to write a certain fragment, and unlike the depth buffer it does not store a value in the range [0, 1], why is the stencil buffer 8 bits per pixel and not 1 bit?
Question 2: Most of the time in LearnOpenGL (learnopengl.com/Advanced-OpenGL/Stencil-testing) the value of glStencilFunc's `ref` parameter is 1. Notice that it is of type `GLint`, so it can be negative; if so, why?
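For context, the stencil setup on that LearnOpenGL page looks roughly like this (paraphrased from memory, not copied verbatim; the draw calls are placeholders):

```c
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);

/* 1st pass: always pass the stencil test and write 1 wherever an object is drawn */
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilMask(0xFF);
/* drawObjects();  -- placeholder */

/* 2nd pass: draw slightly scaled-up copies only where the stencil value is NOT 1,
 * which leaves just the outline visible */
glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
glStencilMask(0x00);
/* drawScaledUpObjects();  -- placeholder */
```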
Question 3: With reference to questions 1 and 2, if the operation is only boolean-like and the value can be negative or any other value that 8 bits can accommodate, then what can those other values be and why would we use them?
Question 4: With reference to question 3, and the function `glStencilOp(GLenum sfail, GLenum dpfail, GLenum dppass)`: if the code contained `glStencilFunc(GL_EQUAL, 1, 0xFF)` and `glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE)`, would we store 0000 0001 in the stencil buffer, wasting 7 bits? Or is it converted to 1111 1111? If this conversion is done, then what would the conversion be for other values like 0101 0010 (82), which lies in between 1 and 255?
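For clarity, here is the combination I am asking about, written out as it would appear in code (the draw call is just a placeholder):

```c
glEnable(GL_STENCIL_TEST);
glStencilMask(0xFF);                        /* allow writing to all 8 stencil bits */
glStencilFunc(GL_EQUAL, 1, 0xFF);           /* pass only where (1 & 0xFF) == (stencil & 0xFF) */
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  /* on stencil + depth pass, replace the stored value */
/* drawSomething();  -- placeholder */
```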