public void InitializeOnHardware()
{
    if (bmp != null)
    {
        int tID = 0;
        bmp.InitializeOnHardware(ref tID);
        renderTarget = GetRenderTarget();
        if (renderTarget == Gl.GL_TEXTURE_RECTANGLE_NV || renderTarget == Gl.GL_TEXTURE_RECTANGLE_EXT)
        {
            // Rectangle textures take unnormalized (pixel) coordinates,
            // so scale the texture matrix up to the bitmap's dimensions.
            textureMatrix = Matrix.Identity;
            textureMatrix.Scale = new Vector(this.bmp.Width, this.bmp.Height, 1);
        }
        textureID = bmp.TextureID;
    }
}
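For reference, glLoadMatrixf consumes a column-major float[16]; since this matrix is a pure scale, only the diagonal is populated, so row- vs column-major order can't be the culprit here. A minimal C++ sketch of what the texture matrix should contain for a w×h rectangle texture (MakeTextureScaleMatrix is a made-up name, not part of the engine):

```cpp
#include <array>

// Column-major 4x4 matrix in the layout glLoadMatrixf expects.
// Rectangle textures address texels in pixels, so scaling normalized
// [0,1] coordinates by (w, h) maps them into [0,w] x [0,h].
std::array<float, 16> MakeTextureScaleMatrix(float w, float h)
{
    std::array<float, 16> m{}; // zero-initialized
    m[0]  = w;                 // scales the s coordinate
    m[5]  = h;                 // scales the t coordinate
    m[10] = 1.0f;              // r passes through
    m[15] = 1.0f;              // homogeneous w
    return m;
}
```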
And finally, when binding the texture:
public virtual void SetTexture()
{
    UnSetTexture();
    Gl.glBindTexture(renderTarget, textureID);
    Gl.glEnable(renderTarget);

    // Load the scale matrix onto the texture matrix stack,
    // restoring whatever matrix mode the caller had active.
    int mMode;
    Gl.glGetIntegerv(Gl.GL_MATRIX_MODE, out mMode);
    Gl.glMatrixMode(Gl.GL_TEXTURE);
    Gl.glLoadMatrixf(bmp.TextureMatrix);
    Gl.glMatrixMode(mMode);
}
The texture matrix is scaled to the pixel size, so the rectangle texture extensions should work without any modification of the texture coordinates (and without using a vertex program)… but it just doesn't.
The matrix doesn't seem to be applied at all: with normalized texture coordinates I always get the first pixel stretched across the whole surface.
In my last C++ engine I multiplied each texture coordinate by hand, and I thought the texture matrix would be the solution here, so I wouldn't have to touch every place in the engine where texture coordinates are set…
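The by-hand approach I used back then amounts to this (a hypothetical helper, just to show the math the texture matrix is supposed to replace):

```cpp
struct TexCoord { float s, t; };

// Map a normalized [0,1] coordinate to the pixel coordinates that
// GL_TEXTURE_RECTANGLE_NV/EXT expects, for a w x h texture.
TexCoord ToRectCoords(TexCoord uv, float w, float h)
{
    return { uv.s * w, uv.t * h };
}
```

Every glTexCoord call had to go through it, which is exactly what I'm trying to avoid now.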
Has anyone ever tried this? I’ve been googling and browsing the forum for this and haven’t found anything.
Should I just fall back to power-of-two textures and invert the texture matrix scale (i.e., bitmapSize/textureSize)? I'd rather use the rectangle extensions. ARB_texture_rectangle is not supported on the card I'm using (FX5600), and I still want backwards compatibility with at least a GF4, so it's not a solution right now.
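If I do end up falling back to power-of-two textures, the scale would be computed like this (a sketch; NextPow2 and PotScale are names I'm making up):

```cpp
// Smallest power of two >= n (for n >= 1), for padding a bitmap
// into a POT texture.
int NextPow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

// Texture-matrix scale per axis: with the bitmap padded into a POT
// texture, normalized [0,1] should cover only bitmapSize/textureSize
// of the texture so the padding is never sampled.
float PotScale(int bitmapSize)
{
    return static_cast<float>(bitmapSize) / NextPow2(bitmapSize);
}
```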