Spectrograph and large texture

I am using OpenGL in Borland Developer Studio 2006 and I am trying to program a spectrograph (waterfall). That means: add one line where each pixel has a different color, then scroll the screen down by one pixel. These steps repeat forever.
I prepare a texture bitmap and associate it once with an OpenGL rectangle drawn with GL_QUADS. New data modifies the first line of this texture; I then scroll the texture down by glTranslatef(0, -(1.0f/(float)actGraph->Height), 0) and paint the GL_QUADS. Code for adding a new line:

	i = 0;
	actGraph = &Graphs[i];
	glBindTexture(GL_TEXTURE_2D, actGraph->TexName); // choose texture
	if (Data != NULL) {
		if (Mov == 0)
			glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, actGraph->Width, 1,
			                GL_LUMINANCE, GL_UNSIGNED_BYTE, Data);
		else
			glTexSubImage2D(GL_TEXTURE_2D, 0, 0, actGraph->Height - Mov, actGraph->Width, 1,
			                GL_LUMINANCE, GL_UNSIGNED_BYTE, Data);
	}

	glBegin(GL_QUADS);
	  glTexCoord2d(0.0, 0.0); glVertex2d(actGraph->x1, actGraph->y1);
	  glTexCoord2d(1.0, 0.0); glVertex2d(actGraph->x2, actGraph->y1);
	  glTexCoord2d(1.0, 1.0); glVertex2d(actGraph->x2, actGraph->y2);
	  glTexCoord2d(0.0, 1.0); glVertex2d(actGraph->x1, actGraph->y2);
	glEnd();

	glBindTexture(GL_TEXTURE_2D, 1);

The program works fine with a smaller texture size (500x300). But if I enlarge the waterfall to 500x900, painting slows down significantly (about 3x) and CPU time increases.

I supposed that I modify only one line of the texture with glTexSubImage2D (only there does data go from the CPU to the graphics card), and that painting with the glBegin(GL_QUADS) statement must be fast because it runs directly on the graphics card. Isn't it?
Is there any way to improve repainting of a large texture?

Please use [ code]/[ /code] (without the space after '[') around code snippets to make them easier to read.

Are you uploading the whole texture each time one line changes? You could do this more efficiently if you only update one line of the texture and adjust the texture coordinates of your quad accordingly (be sure to set wrap mode to GL_REPEAT).

glBegin(GL_QUADS) painting must be fast because it runs directly on the graphics card. Isn't it?

No, using glBegin()/glEnd() is generally the slowest way to draw (also known as immediate mode), because it requires a large number of function calls into the driver for each vertex. Using vertex buffer objects (VBO) is a way to upload data to the graphics card and draw large numbers of primitives with a single call. For your case it is very unlikely to make a significant difference, because you are only drawing a single quad and if you follow my suggestion above you’d have to modify texture coordinates each frame anyway.

I am updating only one line of the texture with new data, but with each new line I call glBegin(GL_QUADS) ... => in my opinion that repaints the whole texture.
So do you think adjusting the texture coordinates is more efficient than glTranslatef on the texture (which I am using)?
If I change the texture coordinates, I still have to do the slow glBegin(GL_QUADS) for it to take effect?? So it could be the same as glTranslatef?
Could it be faster if I use power-of-two texture dimensions?

glTexSubImage2D(GL_TEXTURE_2D,             // target
                0,                         // level
                0,                         // x offset
                actGraph->Height-Mov,      // y offset
                actGraph->Width,           // width
                1,                         // height
                GL_LUMINANCE,              // format
                GL_UNSIGNED_BYTE,          // type
                Data);                     // pixels

Ok, my bad, I now see you are only updating one line and using the texture matrix to adjust the coordinates - that should be at least as fast as changing them by hand.

Hmm, you could use 2 textures and alternate them, one that you update and the other you use for rendering. That would introduce one frame of latency, but that is perhaps not a problem for your application?

Whether power of 2 textures make a difference depends on your hardware, on any recent (and even not so recent) desktop graphics card it should not matter.