Shader Error


I have the following fragment shader:

varying vec3 oNormal;
varying vec3 oVertexP;

uniform sampler2D color_texture;

void main()
{
	float pi = 3.1415;
	float x, y, z;
	vec3 viewVec = normalize(oVertexP);
	x = viewVec.x;
	y = viewVec.y;
	z = viewVec.z;
	float u = (1.0 + atan(x, -z) / pi) / 2.0;
	float v = acos(y) / pi;
	vec4 color = texture2D(color_texture, vec2(u, v));
	gl_FragColor = vec4(pow(color * 5.0, 1.0/2.1), 1.0); // Tone mapping
	//gl_FragColor = vec4(pow(max(gl_Color, 0.0) * 5.0, 1.0/2.1), 1.0);
	//gl_FragColor = gl_Color;
}
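For reference, the lat-long lookup above maps a view direction to texture coordinates in [0, 1]. The same math can be sanity-checked on the CPU; a minimal C++ mirror of the shader's mapping (the `UV` struct and `dirToLatLong` name are mine, not from the shader):

```cpp
#include <cmath>

// CPU-side mirror of the shader's lat-long lookup, for sanity-checking
// the math outside of GLSL. Names here are illustrative only.
struct UV { float u, v; };

UV dirToLatLong(float x, float y, float z)
{
    const float pi = 3.1415f;              // same constant as the shader
    float len = std::sqrt(x * x + y * y + z * z);
    x /= len; y /= len; z /= len;          // normalize(oVertexP)
    UV uv;
    uv.u = (1.0f + std::atan2(x, -z) / pi) / 2.0f; // azimuth   -> [0, 1]
    uv.v = std::acos(y) / pi;                      // polar angle -> [0, 1]
    return uv;
}
```

The straight-ahead direction (0, 0, -1) should land near the middle of the map, around (0.5, 0.5).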

and it works perfectly under Windows, but now that I want to run my code under Linux, it gives me these errors:

Error: undefined function 'pow'
Error: undefined function 'pow'
Error: Function 'vec4' not found (check argument types)

linking with uncompiled shader

OpenGL Error: invalid value
OpenGL Error: invalid operation

What do you think has caused the problem?

Try specifying your GLSL version in the shader files:

#version 120 // GL 2.1
#version 130 // GL 3.0
#version 420 // GL 4.2
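If editing every shader file is awkward, the directive can also be prepended when the source is loaded from disk, before it is handed to glShaderSource. A sketch assuming C++11 (the `withVersion` helper name is mine):

```cpp
#include <string>

// Prepend a #version directive to GLSL source loaded from disk.
// The directive must be the very first line the compiler sees.
std::string withVersion(const std::string &body, int version)
{
    return "#version " + std::to_string(version) + "\n" + body;
}
```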

And tell us which GPU you have and whether you are using closed- or open-source drivers on Linux.

Are you using proprietary drivers or the Mesa GLSL compiler?

This was the problem I had; now it's fixed :slight_smile:

OK, now I have another problem!
I'm loading an HDR texture in EXR format with the following code:

bool loadTexture(const char *path)
{
	Imf::Rgba *pixelBuffer;
	GLuint width;
	GLuint height;
	try
	{
		Imf::RgbaInputFile in(path);
		Imath::Box2i win = in.dataWindow();
		width  = win.max.x - win.min.x + 1;
		height = win.max.y - win.min.y + 1;
		Imath::V2i dim(win.max.x - win.min.x + 1,
		               win.max.y - win.min.y + 1);
		pixelBuffer = new Imf::Rgba[dim.x * dim.y];
		int dx = win.min.x;
		int dy = win.min.y;
		int order = in.lineOrder(); // note: currently unused
		in.setFrameBuffer(pixelBuffer - dx - (dy * dim.x), 1, dim.x);
		in.readPixels(win.min.y, win.max.y);
	}
	catch (Iex::BaseExc &e)
	{
		std::cerr << e.what() << std::endl;
		return false;
	}
	//GLuint texture;
	glGenTextures(1, &texture);
	glBindTexture(GL_TEXTURE_2D /*GL_TEXTURE_RECTANGLE*/, texture);
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	std::cout << "width: "  << width  << std::endl;
	std::cout << "height: " << height << std::endl;
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_HALF_FLOAT, pixelBuffer);
	delete[] pixelBuffer; // the driver has copied the data at this point
	return true;
}
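As an aside on the `pixelBuffer - dx - (dy * dim.x)` expression above: OpenEXR addresses pixel (x, y) in data-window coordinates as base + x * xStride + y * yStride, so shifting the base pointer by the window origin makes pixel (win.min.x, win.min.y) land on pixelBuffer[0]. The index arithmetic in isolation (the helper name is mine):

```cpp
// Index into pixelBuffer for a pixel at data-window coordinates (x, y),
// after the base pointer has been shifted by -minX - minY * width.
int exrBufferIndex(int x, int y, int minX, int minY, int width)
{
    return (x - minX) + (y - minY) * width;
}
```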

and I get an "invalid value" error from glTexImage2D. This code works on my computer under Windows, but here at university I have to use Linux, and there my texture is not shown. My texture's width is 4284 and its height is 2142.

I'm new to Linux, but this is what I get for the OpenGL version by typing glxinfo:

server glx vendor string: SGI
server glx version string: 1.4
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
OpenGL vendor string: nouveau
OpenGL renderer string: Gallium 0.4 on NV92
OpenGL version string: 2.1 Mesa 7.11.2
OpenGL shading language version string: 1.20

and my graphics card is a GeForce 9800 GT.

You seem to be trying to create a floating-point texture on GL 2.1, which doesn't support that.

You need GL_ARB_texture_float (for the GL_RGBA16F internal format) if you want support on GL 2.1,

and you also need GL_ARB_half_float_pixel (for the GL_HALF_FLOAT pixel type).
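On GL 2.1 the available extensions come back from glGetString(GL_EXTENSIONS) as one space-separated string. A plain substring search can match prefixes of longer extension names, so it is safer to match whole tokens; a sketch (the `hasExtension` helper name is mine):

```cpp
#include <string>

// Whole-token search in the space-separated extension list that
// glGetString(GL_EXTENSIONS) returns on GL 2.1. Padding both the list
// and the name with spaces avoids false prefix matches.
bool hasExtension(const char *extList, const char *name)
{
    const std::string list  = " " + std::string(extList) + " ";
    const std::string token = " " + std::string(name) + " ";
    return list.find(token) != std::string::npos;
}
```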

How do I get GL 3.0 or above on Linux? I'm new to Linux. I updated Mesa to the latest version, but glxinfo still shows that the GL version is 2.1. What should I do?

Yeah, the latest Mesa does GL 3.1 and GLSL 1.3, but only on Sandy Bridge (Intel integrated graphics). For any other GPU there's only GL 2.1 and GLSL 1.20 - yet.

If you want to fully utilize your card, install the proprietary drivers by NVIDIA or AMD.
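To decide at runtime which path to take, the version can be parsed from the string glGetString(GL_VERSION) returns, which begins with "<major>.<minor>" (glxinfo above shows "2.1 Mesa 7.11.2"). A minimal sketch (the `parseGLVersion` name is mine):

```cpp
#include <cstdio>

// Parse the leading "<major>.<minor>" from a GL version string such as
// "2.1 Mesa 7.11.2". Returns false if the string does not start with
// two dot-separated integers.
bool parseGLVersion(const char *s, int *major, int *minor)
{
    return std::sscanf(s, "%d.%d", major, minor) == 2;
}
```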

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.