Reading an uncompressed TGA

I need to read a TGA file, then make a copy of it and scale that copy down to roughly half the size of the original image. The TGA file must be uncompressed, and the code must support RGB and RGBA. How can this be achieved in OpenGL/C++?
Below is my code:

#include "tgard.h"

namespace TGA{
    
    img* loadtextur(std::string &file_name){
	std::ifstream infile( file_name.c_str(), std::ifstream::binary );
        
        char* header = new char[18];
        //infile.seekg(0, infile.beg);
        
        if(infile){
            infile.read(header, 18);
        }
        
        
        img* img1 = new img;
        img1->width = ((unsigned char)header[13]<<8) + (unsigned char)header[12];
        img1->height = ((unsigned char)header[15]<<8) + (unsigned char)header[14];
        img1->pixeldepth = (unsigned char)header[16];
        int size = img->pixeldepth/8*img1->width*img1->height;
        
        char* back = new char[size];
        
        infile.read(back,size);
        
        img1->texture = (GLubyte*)back;
        
        
        return img1;
    }
    
}

------------------------------------

#ifndef TGARD
#define TGARD

#include <string>
#include <fstream>

#ifdef __APPLE__
  #include <OpenGL/gltypes.h>
#else
  #include <GL/gl.h>
#endif

namespace TGA{
    struct img{
        int height;
        int width;
        int pixeldepth;    // bits per pixel
        GLubyte* texture;  // raw pixel data as stored in the file (BGR/BGRA order)
    };

    img* loadtexture(const std::string &filename);
}

#endif // TGARD

jxta2018
If I understand you right, then take a look at OpenGL texturing.

I basically need to read an uncompressed TGA file, then make a copy of it and scale the copy down to half the size of the actual file. Are there any algorithms to do this? Please help me.

Where does OpenGL come into the picture? Giving your variables OpenGL-style names (like texture) does not make them OpenGL types.

You can do it yourself, though: loop over the output rows and columns, and for each output pixel add the two neighbouring pixels of one source row to the same two pixels of the next row, then divide by 4. Write the result into a new img. I can see how OpenGL could come into it if you wanted arbitrary scale factors, but it would be ever so cumbersome just for that: it's not an algorithm, it's a full OpenGL setup.
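
Here is a minimal sketch of that 2x2 averaging, assuming the img struct from your header and tightly packed rows (the function name halfscale is made up; odd trailing rows and columns are simply dropped):

#include "tgard.h"

namespace TGA{

    // Average each 2x2 block of the source into one destination pixel.
    // Per-channel arithmetic, so it handles both RGB (3 bytes) and RGBA (4 bytes).
    img* halfscale(const img* src){
        int channels = src->pixeldepth / 8;
        img* dst = new img;
        dst->width      = src->width  / 2;   // odd edge pixels are dropped
        dst->height     = src->height / 2;
        dst->pixeldepth = src->pixeldepth;
        dst->texture    = new GLubyte[dst->width * dst->height * channels];

        for(int y = 0; y < dst->height; ++y){
            for(int x = 0; x < dst->width; ++x){
                for(int c = 0; c < channels; ++c){
                    // The four source texels that collapse onto this output texel.
                    int row0 = (2*y)     * src->width * channels;
                    int row1 = (2*y + 1) * src->width * channels;
                    int col0 = (2*x)     * channels + c;
                    int col1 = (2*x + 1) * channels + c;
                    int sum  = src->texture[row0 + col0] + src->texture[row0 + col1]
                             + src->texture[row1 + col0] + src->texture[row1 + col1];
                    dst->texture[(y * dst->width + x) * channels + c] = (GLubyte)(sum / 4);
                }
            }
        }
        return dst;
    }

}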

Do you mean to say this can be achieved without OpenGL? I guess the TGA format only supports RLE compression. Below is what I have tried so far. Any suggestions?

#include "tgard.h"

namespace TGA{
    
    img* loadtextur(std::string &file_name){
	std::ifstream infile( file_name.c_str(), std::ifstream::binary );
        
        char* header = new char[18];
        //infile.seekg(0, infile.beg);
        
        if(infile){
            infile.read(header, 18);
        }
        
        
        img* img1 = new img;
        img1->width = ((unsigned char)header[13]<<8) + (unsigned char)header[12];
        img1->height = ((unsigned char)header[15]<<8) + (unsigned char)header[14];
        img1->pixeldepth = (unsigned char)header[16];
        int size = img->pixeldepth/8*img1->width*img1->height;
        
        char* back = new char[size];
        
        infile.read(back,size);
        
        img1->texture = (GLubyte*)back;
        
        
        return img1;
    }
    
}


-------------------------------------


#ifndef TGARD
#define TGARD

#include <string>
#include <fstream>

#ifdef __APPLE__
  #include <OpenGL/gltypes.h>
#else
  #include <GL/gl.h>
#endif

namespace TGA{
    struct img{
        int height;
        int width;
        int pixeldepth;    // bits per pixel
        GLubyte* texture;  // raw pixel data as stored in the file (BGR/BGRA order)
    };

    img* loadtexture(const std::string &filename);
}

#endif // TGARD

You’re reading bytes from a file, doing some processing, then writing the result out to a file. None of that requires OpenGL. Even the things OpenGL is designed for (3D rendering) don’t strictly require it; OpenGL is just an interface to hardware that is optimised for that task.

For reading and writing images, you’re usually better off using a library, particularly if the format has many variants and/or compression. Scaling an image by a factor of 2 can be as simple as just using every other pixel on every other line, or averaging each 2x2 block of pixels to a single pixel.
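
For instance, the "every other pixel" version is only a few lines. A sketch, again assuming the img struct above (halfscale_nearest is a made-up name):

#include "tgard.h"

namespace TGA{

    // Nearest-neighbour downscale: keep every other pixel on every other line.
    img* halfscale_nearest(const img* src){
        int channels = src->pixeldepth / 8;   // 3 for RGB, 4 for RGBA
        img* dst = new img;
        dst->width      = src->width  / 2;
        dst->height     = src->height / 2;
        dst->pixeldepth = src->pixeldepth;
        dst->texture    = new GLubyte[dst->width * dst->height * channels];

        for(int y = 0; y < dst->height; ++y){
            for(int x = 0; x < dst->width; ++x){
                const GLubyte* s = src->texture + ((2*y) * src->width + 2*x) * channels;
                GLubyte*       d = dst->texture + (y * dst->width + x) * channels;
                for(int c = 0; c < channels; ++c)
                    d[c] = s[c];   // copy one source texel per output texel
            }
        }
        return dst;
    }

}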

In theory, OpenGL can use the GPU to accelerate the process, but 2:1 scaling is so trivial that the overhead of transferring the data to and from the GPU may take longer than just using the CPU.
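
If you did want to try the GPU route anyway, one low-effort option is to upload the image, let the driver build mipmaps, and read back mip level 1, which is exactly the half-size image. This is only a sketch: it assumes a current desktop OpenGL 3.0+ context (and a loader such as GLEW or glad where gl.h is old), halfscale_gl is a made-up name, and it relies on TGA storing pixels in BGR(A) order, hence the GL_BGR/GL_BGRA formats:

#include "tgard.h"

namespace TGA{

    img* halfscale_gl(const img* src){
        GLenum format   = (src->pixeldepth == 32) ? GL_BGRA  : GL_BGR;
        GLint  internal = (src->pixeldepth == 32) ? GL_RGBA8 : GL_RGB8;

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
        glTexImage2D(GL_TEXTURE_2D, 0, internal, src->width, src->height,
                     0, format, GL_UNSIGNED_BYTE, src->texture);
        glGenerateMipmap(GL_TEXTURE_2D);         // level 1 is the half-size image

        img* dst = new img;
        dst->width      = src->width  / 2;
        dst->height     = src->height / 2;
        dst->pixeldepth = src->pixeldepth;
        dst->texture    = new GLubyte[dst->width * dst->height * (dst->pixeldepth / 8)];

        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glGetTexImage(GL_TEXTURE_2D, 1, format, GL_UNSIGNED_BYTE, dst->texture);
        glDeleteTextures(1, &tex);
        return dst;
    }

}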

Do you know of any algorithm that would copy the actual image and scale it down to half the size of the original TGA (the result must stay uncompressed), supporting both RGB and RGBA? Is there any way to cut the physical dimensions in half and write the result back out as a TGA file?