I have an image array, say 256x256 pixels. How can I apply an arbitrary scaling factor to it? Is there a particular algorithm I should use?
How do you account for the fact that there is a fixed number of pixels on the user's monitor, yet you scale by a fraction, like 1.3x or 0.8x zoom?
I think the easiest case is zooming from 1 to 0.8: first zoom in by a factor of 8, so every pixel in a line becomes 8 pixels of the same color, then zoom out by a factor of 10 (8/10 = 0.8), so each group of 10 pixels collapses into one pixel whose color is their average. (By "average" I mean, for example, that 100, 80, 20 and 5 average to 51.25.)
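A minimal 1-D sketch of that replicate-then-average idea (the function names are mine, and this works on a single row of grey values rather than a full 2-D image):

```python
def upscale_nn(row, factor):
    """Zoom in: replicate each pixel `factor` times (nearest-neighbour)."""
    return [p for p in row for _ in range(factor)]

def downscale_avg(row, factor):
    """Zoom out: collapse each group of `factor` pixels into their average."""
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row), factor)]

def zoom_0_8(row):
    """Net 0.8 zoom on a row whose length is a multiple of 10:
    zoom in by 8, then out by 10, since 8/10 = 0.8."""
    return downscale_avg(upscale_nn(row, 8), 10)
```

Here `downscale_avg([100, 80, 20, 5], 4)` gives `[51.25]`, matching the example above; a real implementation would apply this per colour channel over both axes of the image.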
OpenGL lets you specify a transformation matrix for texture coordinates; you may want to use that. I don't really understand your problem, though.
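To illustrate what a texture-coordinate transform does, here is a plain-Python sketch of the underlying matrix math (these helper names are illustrative, not actual OpenGL API calls):

```python
def texture_scale_matrix(sx, sy):
    """3x3 matrix that scales homogeneous 2-D texture coordinates (u, v, 1)."""
    return [[sx, 0.0, 0.0],
            [0.0, sy, 0.0],
            [0.0, 0.0, 1.0]]

def transform_uv(m, u, v):
    """Apply the matrix to one (u, v) texture coordinate."""
    return (m[0][0] * u + m[0][1] * v + m[0][2],
            m[1][0] * u + m[1][1] * v + m[1][2])
```

For example, `transform_uv(texture_scale_matrix(2.0, 2.0), 0.25, 0.5)` gives `(0.5, 1.0)`. In legacy OpenGL you would load the equivalent matrix while `GL_TEXTURE` is the active matrix mode, and the hardware's texture filtering handles the fractional sampling for you.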