Mandelbrot shader - float accuracy problem

As a first project in GLSL I tried to make a simple Mandelbrot shader.

It runs surprisingly fast (about 5000 times faster than my CPU implementation), which is far more than I expected.

However, it seems that float is the only floating-point data type in GLSL, and because of its limited precision the image looks ‘pixelated’ if you zoom too far into the set. I’ve thought about different ways to fix this, but none of them seem practicable (scaling or repositioning the fractal is impossible due to its nature). I have also tried to use integers in some way, but haven’t had any success.

So, is there a way to enhance the precision of the calculation (maybe someone in this forum knows something about the fractal that I don’t), or even a way to use 64-bit floating-point variables? I believe HLSL has them, but I haven’t found anything similar in GLSL so far.

Vertex shader:

void main( void ) {
	gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
	gl_TexCoord[0]  = gl_MultiTexCoord0;
}

The real and imaginary borders of the rectangle to display are passed as texture coordinates.

Fragment shader:

uniform int MaxIterations; //Just some parameters passed by the program
uniform float Threshold;   //Squared escape radius, e.g. 4.0

void main( void ) {
	vec2 C = gl_TexCoord[ 0 ].xy; //The point c in the complex plane
	vec2 Z = C;                   //Start the orbit at z1 = c (z0 = 0 contributes nothing)
	vec2 ZSq = Z*Z;               //Component-wise squares, reused each iteration
	float Temp;
	
	int Iteration = 0;
	while( Iteration < MaxIterations && ZSq.x + ZSq.y < Threshold ) { //This loop is the main problem
		Temp = ZSq.x - ZSq.y + C.x; //These calculations yield the same results for neighbouring pixels
		Z.y  = 2.0*Z.x*Z.y + C.y;   //if the zoom is big enough, making the image look pixelated
		Z.x  = Temp;
		ZSq = Z*Z;
		Iteration++;
	}
	
	float Ratio = float( Iteration )/float( MaxIterations )*7.0; //Interpolate the color to make the image more interesting
	float Rise = mod( Ratio, 1.0 );
	float Fall = 1.0 - Rise;
	Ratio = floor( Ratio );
	
	     if( Ratio == 0.0 ) gl_FragColor = vec4( Rise, 0.0, 0.0, 1.0 );
	else if( Ratio == 1.0 ) gl_FragColor = vec4( 1.0, Rise, 0.0, 1.0 );
	else if( Ratio == 2.0 ) gl_FragColor = vec4( Fall, 1.0, 0.0, 1.0 );
	else if( Ratio == 3.0 ) gl_FragColor = vec4( 0.0, 1.0, Rise, 1.0 );
	else if( Ratio == 4.0 ) gl_FragColor = vec4( 0.0, Fall, 1.0, 1.0 );
	else if( Ratio == 5.0 ) gl_FragColor = vec4( Rise, 0.0, 1.0, 1.0 );
	else if( Ratio == 6.0 ) gl_FragColor = vec4( 1.0, Rise, 1.0, 1.0 );
	else                    gl_FragColor = vec4( 0.0, 0.0, 0.0, 1.0 ); //Points that never escaped (Ratio == 7.0)
}

I made a small Windows binary to show you the problem, which can be found here (source code is included, written in Blitz Max).
Zoom in/out using the left/right mouse button. Once you zoom in far enough, you will notice the ‘pixelation’.

I don’t think it’s possible, though I’m not completely certain.

GLSL and the OpenGL pipeline are designed solely with 3D rendering in mind. 3D rendering needs to be fast, so the features of the API and the language closely mirror the features of the hardware. My understanding is that graphics cards have 32-bit floating-point computation units in order to do the math really fast. There has never been much need for 64-bit computation, so that functionality was simply not built in. For 3D rendering, 32 bits is fine. Only now, with GPGPU, is doing general math on the GPU becoming more mainstream.

Basically, GPUs are specialized hardware, and not for this kind of math. If you can find a way to do it, let me know, but this is why I think it was left out.

Perhaps you can use some techniques mentioned in this paper:

“Accurate Sum and Dot Product” http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.2.1547

That is unfortunate, but it’s understandable now that I’ve read your explanation.

Seems like I’ll have to wait a bit longer for 64-bit support :)

Perhaps you can use some techniques mentioned in this paper:

“Accurate Sum and Dot Product”

Thanks for the link, but my problem is not the inaccuracy of the operations; it’s that a float doesn’t have as many significant digits as I need in my case.

So, in other words, floats aren’t precise enough.

The article I mentioned describes how to compute sums and dot products to up to K times the working precision, using only working-precision operations. By adjusting K based on the zoom level you could trade off speed for accuracy “on the fly”.

Of course, I haven’t tried implementing it, so there may be some unforeseen issue preventing this, but…