Texture-mapping, again

I have moved my ‘work-bench’ to a new, more powerful PC. The tools have been updated (Code::Blocks, MinGW), but the extensive code and setup are the same. The OpenGL 3.3 core profile is granted, but discrete errors appear as I step through the code.
The basic setup of two programs, one using a texture, executes, but I’ve lost the ability to view extensions. That’s unimportant for now.
I have streamlined building program-objects, so when a texture is needed I use the same code (including binding to GL_TEXTURE0).
If I add another program, also using a texture, this streamlined load() errs at
glActiveTexture(GL_TEXTURE0)

This would be rational if GL_TEXTUREi is an über-entity above the program-objects … which just leaves the oddity of why it would execute on the other PC. I would like your confirmation that this is the likely problem (that I should use a new GL_TEXTUREi for every texture engaged in an execution) before I start rearranging my code.

The only documented error for glActiveTexture is

So I suggest checking your error-checking code.

Hi GClements,
there is no error-code … the execution breaks at
glActiveTexture(GL_TEXTURE0)

I’ve copied the section of the debug-trace … it looks as if the code again breaks on the non-standard encoding (me using strings with odd Danish vowels). I’m using several of the many cores to compile … maybe the place of breaking is caused by another thread failing. Here is a relevant clip of the trace with my comments:

//breakpoint before entering the problem or me stepping into the next line:

[debug]Thread 1 hit Breakpoint 9, sys::set_texture (image_Prog=image_Prog@entry=0x34d1b20, img=0x9df9040 'ÿ' <repeats 200 times>..., width=1624, height=500) at D:\shared\saves\before_rearrange_2\rootFunctions\test\main.cpp:1228

I believe that the debug output is a mix from different threads. The
'ÿ' <repeats 200 times>...
part is out of place (it happens earlier). The real function-call matches:
sys::set_texture (image_Prog=image_Prog@entry=0x34d1b20, img=0x9df9040 , width=1624, height=500)

(notice the ÿ-repeat section clipped out)

The glActiveTexture(GL_TEXTURE0) call is at line 1228.

[debug]D:\shared\saves\before_rearrange_2\rootFunctions\test\main.cpp:1228:52647:beg:0x40253f
    [debug]>>>>>>cb_gdb:
    At D:\shared\saves\before_rearrange_2\rootFunctions\test\main.cpp:1228
    [debug]> next
    [debug]0x00007ffa133f9060 in ?? ()
    [debug]>>>>>>cb_gdb:

In ?? () ()
[debug]> bt 30
[debug]#0  0x00007ffa133f9060 in ?? ()
[debug]Backtrace stopped: previous frame identical to this frame (corrupt stack?)

and the debugger enters a system new_allocator.h file or something, before a segmentation fault closes everything. (Or what? The rest of the trace is appended below.)

To recapitulate: there is nothing wrong with using GL_TEXTURE0 multiple times, then … ??
It may be the odd Danish vowels at play.
I’m not sure about the contents of the rest: it seems as if thread 1, which was about to enter glActiveTexture(GL_TEXTURE0), is the one that produces a SIGTRAP (which doesn’t sound like a segfault).

[debug]>>>>>>cb_gdb:
[debug]> break "D:/shared/saves/before_rearrange_2/rootFunctions/test/main.cpp:1247"
[debug]Breakpoint 10 at 0x4027f9: file D:\shared\saves\before_rearrange_2\rootFunctions\test\main.cpp, line 1247.
[debug]>>>>>>cb_gdb:

Continuing...

[debug]> cont
[debug]Continuing.
[debug][New Thread 8000.0x9c]
[debug][New Thread 8000.0x914]
[debug]Thread 1 received signal SIGTRAP, Trace/breakpoint trap.
[debug]0x00007ffa4b7eed03 in ntdll!RtlIsZeroMemory () from C:\Windows\SYSTEM32\ntdll.dll
[debug]>>>>>>cb_gdb:

In ntdll!RtlIsZeroMemory () (C:\Windows\SYSTEM32\ntdll.dll)

[debug]> bt 30
[debug]#0  0x00007ffa4b7eed03 in ntdll!RtlIsZeroMemory () from C:\Windows\SYSTEM32\ntdll.dll
[debug]#1  0x00007ffa4b7f7ae2 in ntdll!RtlpNtSetValueKey () from C:\Windows\SYSTEM32\ntdll.dll
[debug]#2  0x00007ffa4b7f7dca in ntdll!RtlpNtSetValueKey () from C:\Windows\SYSTEM32\ntdll.dll
[debug]#3  0x00007ffa4b7fd7f1 in ntdll!RtlpNtSetValueKey () from C:\Windows\SYSTEM32\ntdll.dll
[debug]#4  0x00007ffa4b709640 in ntdll!RtlAllocateHeap () from C:\Windows\SYSTEM32\ntdll.dll
[debug]#5  0x00007ffa4b705d21 in ntdll!RtlFreeHeap () from C:\Windows\SYSTEM32\ntdll.dll
[debug]#6  0x00007ffa4a779c9c in msvcrt!free () from C:\Windows\System32\msvcrt.dll
[debug]#7  0x0000000000402717 in __gnu_cxx::new_allocator<char>::deallocate (__p=<optimized out>, this=0x15bfc50) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/ext/new_allocator.h:116
[debug]#8  std::allocator_traits<std::allocator<char> >::deallocate (__n=<optimized out>, __p=<optimized out>, __a=...) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/bits/alloc_traits.h:462
[debug]#9  std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_destroy (__size=<optimized out>, this=0x15bfc50) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/bits/basic_string.h:226
[debug]#10 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_dispose (this=0x15bfc50) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/bits/basic_string.h:221
[debug]#11 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string (this=0x15bfc50, __in_chrg=<optimized out>) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/bits/basic_string.h:647
[debug]#12 sys::set_texture (image_Prog=image_Prog@entry=0x34d1b20, img=0x9df9040 'ÿ' <repeats 200 times>..., width=1624, height=500) at D:\shared\saves\before_rearrange_2\rootFunctions\test\main.cpp:1199
[debug]#13 0x000000000041421c in main (argc=<optimized out>, argv=<optimized out>) at D:\shared\saves\before_rearrange_2\rootFunctions\test\main.cpp:2674
[debug]>>>>>>cb_gdb:
[debug]> frame 7
[debug]#7  0x0000000000402717 in __gnu_cxx::new_allocator<char>::deallocate (__p=<optimized out>, this=0x15bfc50) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/ext/new_allocator.h:116
[debug]D:\CodeBlocks\MinGW\lib\gcc\x86_64-w64-mingw32\8.1.0\include\c++\ext\new_allocator.h:116:3596:beg:0x402717
[debug]>>>>>>cb_gdb:

#7  0x0000000000402717 in __gnu_cxx::new_allocator<char>::deallocate (__p=<optimized out>, this=0x15bfc50) at D:/CodeBlocks/MinGW/lib/gcc/x86_64-w64-mingw32/8.1.0/include/c++/ext/new_allocator.h:116
D:\CodeBlocks\MinGW\lib\gcc\x86_64-w64-mingw32\8.1.0\include\c++\ext\new_allocator.h:116:3596:beg:0x402717
At D:\CodeBlocks\MinGW\lib\gcc\x86_64-w64-mingw32\8.1.0\include\c++\ext\new_allocator.h:116

Is the context current on that thread? Crashes are common errors in multithreaded OpenGL. Another error you might encounter on some drivers is an invalid-operation error.

hi,
I believe it is.
I’m not writing concurrent code.
Code::Blocks has a setting where you can choose the number of threads you want to use for compilation. I cannot tell how exact it is, but as for speed it’s conveniently swift. It obviously impacts the debug-trace output. I’m not used to following info in the debugger, but it has misbehaved, so I kind of had to. … It turned out that the workspace was using one project in debug mode and another in release mode … no good mix.

I’m not sure, but it will have to be rooted out like this:
I construct the program-object before adding the texture (where things go wrong).
On building the program I fetch a vector<string> of const_names and loop over it to build font-tex-coords for all the letters in each string. A few of these names contain odd Danish vowels. It is these names that come to mind when I see:
'ÿ' <repeats 200 times>...
and ponder whether this is the true foul-up. It happens (in the readable code) right before building the texture.

The first line after glActiveTexture(GL_TEXTURE0) is a glGetError() with text output. There is no indication that it’s triggered. Either there is no error, or the execution just goes ballistic.

The ‘ÿ’ character is just ‘\xff’, i.e. the all-ones bit pattern. That would be expected for e.g. image data where the image has a white background.

OK, but you might be in a multithreaded environment? That makes me think of this:

Another thing that might make such functions crash is a problem with your memory. You might have some memory corruption (most probably a buffer overflow or a write to an incorrect memory location).

Also, the “repeats N times” notation is a normal thing in gdb: it appears when gdb displays a string containing the same character repeated N times in a row.

hi @Silence
It’s dealt with. I turned the parallel compilation off.
I’ve followed the debugger to a

glGenBuffers(1, &imgProg->mBuffers->IBO);

where the debugger throws something equivalent to this:

I’m not always sure if I use the C++ syntax properly, but:
imgProg is a new instance of my ‘program-container’.
mBuffers is a new instance of a ‘trivially copyable’ struct (GLuint VAO; IBO etc.) within the container.
I have erected two other ‘programs’ before this third one. They use the same base class and have no problems.
This lies before getting to the ‘odd vowels’ and the glActiveTexture(…) stuff.

It’s not as if there is no room for errors: I switched to a new PC, an updated tool-chain, and from C++11 to C++17 …

There is a compiler flag that I ought to set, but have doubts about: compiling for a 32-bit or a 64-bit target. I assume that this concerns the environment the generated executable will be employed in, and not the innards of OpenGL, which may all be 32-bit.
This seems irrelevant now: I went through what I could find on texturing and didn’t find an answer to whether binding to the same texture unit (when erecting a texture in each of two different program-objects) is problematic.

… edit:
Every time new is called, the debugger/compiler halts as if it hit a breakpoint. One of the functions (returning an indifferent bool) missed a return value. (Falling off the end of a non-void function is undefined behaviour in C++, so this alone can explain erratic crashes.) I added it and could push the compilation through to completion. I don’t think that the ‘painting’ is OK, but that’s a problem we can deal with. I did turn the 64-bit flag on … and produce debugging symbols, whatever that means.

Thank you for your suggestions

Sorry, but I don’t see the connection between parallel compilation and whether the produced program is multithreaded or not.

This generally shows that you have memory corruption.

More precisely, it points to stack corruption (infinite recursion?). Sometimes a weird syntax that one compiler accepted with the meaning you intended gets translated to another meaning by a different compiler.

Also, as a general rule, changing your compiler is a good way to reveal bad code. Since you are using GNU tools, try cppcheck or clang; they can analyse your code. For checking memory corruption, valgrind is a good friend.

…hm…, I’m not writing a multithreaded program. When the compiler has been set to use multiple threads, any notion of threads in the debug output would have to be on behalf of the compiler (to be clear: it’s the compiler that executes itself multithreaded). If my multi-core PC takes it upon itself to execute the single-threaded program I write as multithreaded, it will have to happen on its own. I won’t exclude that my PC can find safe places to execute an extra thread, but I don’t think so.

Whatever has been going on seems to have been resolved, and it will probably be impossible to recreate in a way that makes it possible to pursue the details. It feels like a compiler/debugging setting, where allocations produce a halt in the debugger. Continuing after such a break seems to make the compiler accept the allocation and pass without a halt on the next compilation.
In spite of all this squabble, I haven’t changed a line of code, and it now executes as expected.

I appreciate your help @Silence

That’s an overstatement.
Two of three shader-program-objects draw to the screen. Two of them use a rectangle-texture (font) and use the same initiation code (including binding to TEXTURE0). I tried to redo this and bind to TEXTURE0 and TEXTURE0+1, but still nothing on the screen. Not even a color bypassing the texture lookup. There is no reason to doubt the validity of the attribute data.
No example code I can lay my hands on (5 books) has a relevant example, so I tried the same approach as in multi-texturing. Looking at general draw-initiation code, I got to thinking that I may be missing out on enabling the sampler? That’s not needed in one-texture programs running alone.
After all, I do need to enable the attrib-pointers when switching programs and such, so why not the sampler?
Can anyone comment?

relevant code used:

    glActiveTexture(GL_TEXTURE0+1); //
    sys::checkErrors("error.set img Buffer 0.9 ");
    glBindTexture(GL_TEXTURE_RECTANGLE, image_Prog->mBuffer->TBO_1 );
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    sys::checkErrors("error.set img Buffer 1 ");


    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE ) ;//GL_CLAMP_TO_EDGE
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MIN_FILTER , GL_LINEAR ) ;//GL_NEAREST
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAG_FILTER , GL_LINEAR ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAX_LEVEL , 0 ) ;

    glTexImage2D(GL_TEXTURE_RECTANGLE, level , internalFormat , texWidth , texHeight, border, pixFormat ,GL_UNSIGNED_BYTE, img);//(const GLvoid*)
    sys::checkErrors("error.set img Buffer 3 ");

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_RECTANGLE,0);


	
            void initDraw(){
                //cout << "img_prog initDraw()\n"s;
                test_for_error("before img_program.initDraw() 4");// throws error
                glDisable(GL_BLEND) ;
                glUseProgram( mBuffer->Program );
                glBindVertexArray(mBuffer->VAO);
                glBindBuffer(GL_ARRAY_BUFFER, mBuffer->VBO);
                GLuint sz=mAttribs.type_size();
                for(GLuint i=0;i<sz;i++){
                    glEnableVertexAttribArray(i);
                }
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mBuffer->IBO);
                glActiveTexture(GL_TEXTURE0+1);
                glBindTexture(GL_TEXTURE_RECTANGLE, mBuffer->TBO_1) ;
                update_sampler_rect_byName("aTexture",1);
                glViewport( current_viewPort.LowerLeft_X,current_viewPort.LowerLeft_Y, current_viewPort.width,current_viewPort.height);
                glEnable(GL_BLEND) ;

                test_for_error("img_program.initDraw() 4");//
            };





            void update_sampler_rect_byName(const string& navn, GLint tex_unit){
                for(GLuint i=0;i<uniform_values.size();i++){
                    if(uniform_names.at(i).first==navn){
                        cout << "update_sampler_rect_byName: " << navn << ", tx-unit: " << tex_unit << "\n" ;
                        glUniform1i(uniform_values.at(i),tex_unit);
                        break;
                    }
                }
            }






            GLint setLocation( const pair<string,eUnifType>& uniformName ){
                GLint gi=0;
                if(mBuffer->Program!=0){
                    gi =glGetUniformLocation(mBuffer->Program,uniformName.first.c_str());
                    cout << "glGetUniformLocation returns " << gi << " for " << uniformName.first.c_str() << "\n" ;
                    if(gi>-1){
                        uniform_values.push_back(gi);
                    }
                    else{
                        //uniforms defined, but not used in shader will not be locatable
                        test_for_error("program::base glGetUniformLocation error");
                        uniform_values.push_back(gi); // check for -1
                        cout << uniformName.first << " - program_base.setLocation.Uniform not registered\n" << char(7) ;
                    }
                }
                else{
                    cout << "program.setLocation() fails. no program\n\a";//<< char(7);
                }
                return gi;
            }

No errors triggered. Only a warning in the shader infolog: sampler value not set. This should be dealt with in the initDraw function above.

You seem to unbind a texture from another unit.

Apart from this, I really don’t know. But as I said before, building with parallelism is not related to whether you produce a multithreaded program. You should probably start again from here. Maybe by building your program by hand: write your own Makefile and run make on it, or call gcc/g++ directly from the command line, with a simple program.

Also, MinGW can be tricky and is not an exact reflection of what GNU is and how it works under Linux, for example. I remember having issues with dynamic and static linkage. Linking statically could be safer.

hi @Silence
Nice to get a tweet … I’m somewhat lost.

Yes, I’m unbinding. The lines are cut/pasted from a function that at this point has done its job. I’ll be jumping from program to program when I draw, so I need to have bindings and unbindings done properly.
[edit: … got it! … the code is not unbinding the texture-object!]
I’m not sure that I understand your point about parallelism. Code::Blocks has an option to set how many threads I want the compiler to use … I’ve set it to the default, 0. An author mentions that the parallelism may cause a sampler to be bound too late, so I’ll see to it that it doesn’t happen.
And you’re touching on a sensitive point with the issues of dynamic and static linkage.
The library I compile along with the main app has nothing OpenGL-specific within it. And it’s successfully used by the other program-objects.

I’m stubborn, but … this is way too far from my experience and learning intents.

now, for the good news:
I’ve been trawling through the OpenGL specifications with a focus on texturing & samplers, profiles, new functionality and deprecations. I never considered the difference between version 3.2 and 3.3 of any importance - the extensions viewer adds only one irrelevant new item for 3.3. I have the red book for versions 3.1 & 3.2 and have used it. It contains nothing about sampler objects (glGenSamplers etc.) … nor does any code example I can recall. But sampler objects are presented in the core 3.3 specs (the version I’m compiling for). It’s another way of setting up texturing code that I haven’t tried yet.

Back to the oddities: the program-object has no output at all. It’s not just the texturing that fails. This points to some rational errors: the (simple 2D rectangles) present themselves with the wrong side out - a possibility, since I’m fiddling with presentations in an upper-left (Windows) coordinate system. Or everything is drawn with full transparency.
but…

I’ll go try the sampler-object thing …

Here are the two texture setups.
The differences are *data and hence pixFormat, and the texture unit used.

bool set_texture( glSystem::program_base* image_Prog, unsigned char* img, GLint width, GLint height ){
    glSystem::test_for_error("enters set_texture\n") ;
    GLint texWidth=(GLint) width ;
    GLint texHeight=(GLint) height ;
    GLenum pixFormat=(GLenum)GL_RED ;//of pixel-data
    GLenum internalFormat=(GLenum)GL_RGBA ;
    GLint level=(GLint)0,border=(GLint)0;
    glSystem::test_for_error("error.enters set_texture 2\n") ;
	
	glGenTextures(1, &image_Prog->mBuffer->TBO_1);
    glSystem::test_for_error("error.enters set_texture 1\n") ;
    cout << "glGenTextures: " << image_Prog->mBuffer->TBO_1 << "\n" ;
    sys::checkErrors("error.set img Buffer 0.8 ");
    glActiveTexture(GL_TEXTURE0+1); //
    sys::checkErrors("error.set img Buffer 0.9 ");
    glBindTexture(GL_TEXTURE_RECTANGLE, image_Prog->mBuffer->TBO_1 );
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    sys::checkErrors("error.set img Buffer 1 ");
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE ) ;//GL_CLAMP_TO_EDGE
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MIN_FILTER , GL_LINEAR ) ;//GL_NEAREST
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAG_FILTER , GL_LINEAR ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAX_LEVEL , 0 ) ;

    glTexImage2D(GL_TEXTURE_RECTANGLE, level , internalFormat , texWidth , texHeight, border, pixFormat ,GL_UNSIGNED_BYTE, img);
    sys::checkErrors("error.set img Buffer 3 ");
    image_Prog->mBuffer->print();
    return true;

}

void setCompressedGlyphBuffer32(glSystem::program_base* glyphProg,  const image::mish::col4* data){//const
    GLint texWidth=(GLint)840;
    GLint texHeight=(GLint)800;
    GLenum pixFormat=(GLenum)GL_BGRA ;
    GLenum internalFormat=(GLenum)GL_RGBA ;
    GLint level=(GLint)0,border=(GLint)0;
	
	glGenTextures(1, &glyphProg->mBuffer->TBO_1);
    sys::checkErrors("set glyph Buffer 0.8 ");
    glActiveTexture(GL_TEXTURE0);
    sys::checkErrors("set glyph Buffer 0.9 ");
    glBindTexture(GL_TEXTURE_RECTANGLE, glyphProg->mBuffer->TBO_1 );
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	sys::checkErrors("set glyph Buffer 1 ");

    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MIN_FILTER , GL_LINEAR ) ;//GL_NEAREST
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAG_FILTER , GL_LINEAR ) ;
    glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAX_LEVEL , 0 ) ;

    glTexImage2D(GL_TEXTURE_RECTANGLE, level , internalFormat , texWidth , texHeight, border, pixFormat ,GL_UNSIGNED_BYTE,data);

    sys::checkErrors("set glyph Buffer 3 ");
}

in main() I call functions that erect 3 program-objects, in the following order:

  1. simple no-texture program
  2. well-working glyphProg using setCompressedGlyphBuffer32(…) on tex-unit 0
  3. img-program trying to set up a font test very much like the glyphProg - on tex-unit 1

Since the validity of the very program-object seems to be in error, I rearranged their erection order so that 3) (the faulty program) comes first, then 1), then 2).

This made an unexpected change to the visual appearance of the working font in 2).
I do expose both data-sets to a convolution that so far has softened both comfortably.
The new visual appearance now blurs the font to the extreme. Looks kinda funny, but very different.
What’s that all about?

You might try your code on a different machine / OS / compiler.
When things are weird like this, I suggest starting from scratch: make an empty project and fill it until you either reach the same error or it works.

@Silence,
The death has to have a cause.
You cannot see the font output - it’s as if a 1x1 pixel is spread out to 4x4 (or 4 pixels instead of 1). My intuition tells me that the first input format sticks (GL_RED) and distorts the output when GL_BGRA is the reality … or something along those lines. It still points to the sampler error that I haven’t looked at.

additional edit:
I’ve moved the
gl.uniform1i(sampler_location , 0); //setting the txUnit-value of the particular sampler
to the line following
glTexImage2D(...)
to line my code up in the proper order of execution according to other code examples.
This choice spawns an invalid-operation error (1282) from glGetError.
The console gets an extra line of unreadable output.

OK … I have employed the sampler-object approach. Here is the code that has changed; the shaders and program generation stay unchanged. After activating the program, image_Prog->update_sampler_rect_byName() stopped generating an error.
The infolog reports everything OK (not complaining about a missing sampler value).
Still nothing on the screen.

void draw( vector<text::aux::courier::word_indirect>& iWord ){
	initDraw();
	for(GLuint i=0;i<iWord.size();i++){
		glDrawElementsBaseVertex( GL_TRIANGLES,(GLsizei)6*iWord.at(i).letter_count , GL_UNSIGNED_INT, (GLvoid *)+0, iWord.at(i).base_vertex );
	}
	endDraw();
}

void init_state(){
	glUseProgram(mBuffer->Program);
	glEnable(GL_LINE_SMOOTH);
	glEnable(GL_PROGRAM_POINT_SIZE) ;
	/*
	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);
	glCullFace(GL_BACK);
	*/
	glEnable(GL_BLEND) ;
	glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
};

void initDraw(){
	glDisable(GL_BLEND) ;
	glUseProgram( mBuffer->Program );
	glBindVertexArray(mBuffer->VAO);
	glBindBuffer(GL_ARRAY_BUFFER, mBuffer->VBO);
	GLuint sz=mAttribs.type_size();
	for(GLuint i=0;i<sz;i++){
		glEnableVertexAttribArray(i);
	}
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mBuffer->IBO);
	glActiveTexture(GL_TEXTURE0+1);
	glBindTexture(GL_TEXTURE_RECTANGLE, mBuffer->TBO_1) ;
	glBindSampler(1,mBuffer->SAMPLER_1);
	glViewport( current_viewPort.LowerLeft_X,current_viewPort.LowerLeft_Y, current_viewPort.width,current_viewPort.height);
	glEnable(GL_BLEND) ;
};
				
				

				
bool set_texture( glSystem::program_base* image_Prog, unsigned char* img, GLint width, GLint height ){
	GLint texWidth=(GLint) width ;
	GLint texHeight=(GLint) height ;
	GLenum pixFormat=(GLenum)GL_RED ;//of pixel-data
	GLenum internalFormat=(GLenum)GL_RGBA ;
	GLint level=(GLint)0,border=(GLint)0;

	glUseProgram(image_Prog->mBuffer->Program)    ;
	image_Prog->update_sampler_rect_byName("font_texture",1);
	glGenSamplers(1,&image_Prog->mBuffer->SAMPLER_1);

	glBindSampler(1,image_Prog->mBuffer->SAMPLER_1);
	glSamplerParameteri(image_Prog->mBuffer->SAMPLER_1 , GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE ) ;
	glSamplerParameteri(image_Prog->mBuffer->SAMPLER_1 , GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE ) ;
	glSamplerParameteri(image_Prog->mBuffer->SAMPLER_1 , GL_TEXTURE_MIN_FILTER , GL_LINEAR ) ;
	glSamplerParameteri(image_Prog->mBuffer->SAMPLER_1 , GL_TEXTURE_MAG_FILTER , GL_LINEAR ) ;
			
	glGenTextures(1, &image_Prog->mBuffer->TBO_1);
	glActiveTexture(GL_TEXTURE0+1); 
	glBindTexture(GL_TEXTURE_RECTANGLE, image_Prog->mBuffer->TBO_1 );
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	
	glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAX_LEVEL , 0 ) ;
	//has become redundant:
	glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_S , GL_CLAMP_TO_EDGE ) ;//GL_CLAMP_TO_EDGE
	glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_WRAP_T , GL_CLAMP_TO_EDGE ) ;
	glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MIN_FILTER , GL_LINEAR ) ;//GL_NEAREST
	glTexParameteri(GL_TEXTURE_RECTANGLE , GL_TEXTURE_MAG_FILTER , GL_LINEAR ) ;
	
	glTexImage2D(GL_TEXTURE_RECTANGLE, level , internalFormat , texWidth , texHeight, border, pixFormat ,GL_UNSIGNED_BYTE, img);

	return true;
}

Problem solved.
It was rather subtle:
I’ve changed/updated glm::, and it turned out that glm::mat4 tmpMat prefers to be initialized like:
glm::mat4 tmpMat(1), and not just
glm::mat4 tmpMat

@Silence, you certainly deserve my personal "honor of most patient OpenGL therapist".
You may not know why, but I wouldn’t have got there without your input.

btw, the Code::Blocks default setting of 0 threads seems to pass the decision to Code::Blocks itself: it rebuilds the full project in 13 seconds, not the usual 1 min + 13 seconds.