I am writing an image comparison tool that does a pixel-by-pixel comparison of the RGBA values of two images.
Can I add some intelligence to my code so that I can compare images, which were generated using different graphics cards?
Any suggestions would really help.
Consider that the OpenGL spec does not require pixel-equivalence across implementations. As such, you'll need to associate some tolerance value with the comparison function.
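A minimal sketch of what such a tolerance-based comparison might look like (the function name, image representation, and default tolerance are all my own assumptions, not part of any standard API):

```python
def images_match(a, b, tolerance=8):
    """Hypothetical helper: compare two images pixel-by-pixel,
    allowing each RGBA channel to differ by up to `tolerance`."""
    if len(a) != len(b):
        return False
    return all(
        abs(ca - cb) <= tolerance
        for pa, pb in zip(a, b)          # iterate over pixel pairs
        for ca, cb in zip(pa, pb)        # iterate over RGBA channel pairs
    )

# Images represented as flat lists of (R, G, B, A) tuples
img1 = [(255, 0, 0, 255), (0, 128, 0, 255)]
img2 = [(254, 1, 0, 255), (3, 126, 0, 255)]
print(images_match(img1, img2))  # small per-channel drift is accepted
```

With `tolerance=0` this degenerates to your current exact comparison, so you can tune how much driver-to-driver variation you are willing to accept.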
Does this have something to do with checking for cheating in the drivers?
You could combine a tolerance-based algorithm with an LCS length algorithm (LCS = longest common substring).
However, LCS is much more computationally expensive.
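For reference, the standard dynamic-programming solution for longest-common-substring length looks like this; the O(m·n) table it fills per pair of sequences is where the extra cost comes from (treating each scanline as a sequence of pixel values is my own illustration, not something the previous poster specified):

```python
def lcs_length(s, t):
    """Length of the longest common substring of sequences s and t,
    via dynamic programming: O(len(s) * len(t)) time, O(len(t)) space."""
    best = 0
    prev = [0] * (len(t) + 1)
    for i in range(1, len(s) + 1):
        cur = [0] * (len(t) + 1)
        for j in range(1, len(t) + 1):
            if s[i - 1] == t[j - 1]:
                # extend the common run ending at (i-1, j-1)
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

print(lcs_length("GATTACA", "ATTAC"))  # → 5
```

To use this on images you would presumably quantize pixels first (so near-equal values compare equal under your tolerance) and run it per scanline; even so, the quadratic cost per line is why it needs much more compute than a plain pixel-by-pixel pass.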