Noise while mapping input RGB values using CPU and GPU

I was trying to map the input image RGB value with gray-scale value using a mapping texture which looks like this:

So basically the idea is: take the r and g components of the input pixel, then index into the mapping using the r component as the column and the g component as the row. This convention yields a value in the range 0-255, or 0.0 - 1.0 in OpenGL terms.
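For reference, the indexing convention can be sketched in plain C++ (the function names here are illustrative, not from my actual code). Note that on the CPU the lookup uses integer indices, while on the GPU `texture2D` takes normalized coordinates, where the centre of texel `i` in a texture of size `N` sits at `(i + 0.5) / N`, not at `i / 255.0`:

```cpp
// CPU side: the mapping is indexed directly with integers,
// r as the column and g as the row of a 256-wide table.
int cpuIndex(int r, int g, int width) { return g * width + r; }

// GPU side: texel.r is already normalized to r / 255.0, and that value
// is passed straight to texture2D as a coordinate.
float gpuCoordFromColor(int r) { return r / 255.0f; }

// But texel centres in a texture of the given size lie at (i + 0.5) / size.
float texelCentre(int i, int size) { return (i + 0.5f) / size; }
```

For r = 255, `gpuCoordFromColor` gives 1.0 (the right edge of the texture), while the centre of column 255 in a 256-wide mapping is 0.998046875, so the two paths do not sample the same table entry.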

I tried to perform the same operation using GPU and CPU, but both outputs differ by a significant amount.

I am trying to map each RGB input pixel to a pre-defined palette using the nearest-neighbour technique. The problem described above arose while performing an intermediate step of that solution. I would also like to know if there is a better way to solve the problem.
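To make the overall goal concrete, here is a minimal sketch of the nearest-neighbour palette lookup I am pre-computing into the mapping texture (names and the squared-Euclidean-distance metric are illustrative, not from my actual code):

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// Brute-force nearest-neighbour search over a small palette using
// squared Euclidean distance in RGB space. The mapping texture caches
// the result of this search for every possible input colour.
size_t nearestPaletteIndex(const std::vector<Rgb>& palette, Rgb c) {
    size_t best = 0;
    int bestDist = std::numeric_limits<int>::max();
    for (size_t i = 0; i < palette.size(); ++i) {
        int dr = int(palette[i].r) - int(c.r);
        int dg = int(palette[i].g) - int(c.g);
        int db = int(palette[i].b) - int(c.b);
        int d = dr * dr + dg * dg + db * db;
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}
```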

GPU fragment shader:

    #version 100
    precision mediump float;
    varying vec2 texture_mapping;
    uniform sampler2D texture;
    uniform sampler2D mapping;
    void main() {
       vec4 texel = texture2D(texture, texture_mapping);
       gl_FragColor = texture2D(mapping, vec2(texel.r, texel.g));
    }

CPU Code:

    // "mapping" is the 256x256 lookup cv::Mat, "output" the result cv::Mat
    for (int row = 0; row < originalImage.rows; row++) {
        for (int col = 0; col < originalImage.cols; col++) {
            cv::Vec4b pix = originalImage.at<cv::Vec4b>(row, col);
            cv::Point mappingPoint = cv::Point(pix.val[0], pix.val[1]);

            cv::Vec4b mappedPixel = mapping.at<cv::Vec4b>(mappingPoint);
            output.at<cv::Vec4b>(row, col) = mappedPixel;
        }
    }

I got the following results:

Input image:

GPU Output:

CPU Output: