glReadPixels -- How to speed up?

I am using GLUT for window management. For my application I would like to do offscreen rendering, so I am currently using glReadPixels() to pull pixel data from the framebuffer. However, this kills my framerate: it drops from 1500 FPS to 7 FPS. Any suggestions on how to speed up glReadPixels()? I've tried disabling unnecessary states, but to no avail.
Following are my init and draw functions:

void init(void)
{
	glClearColor(0, 0, 0, 0);
	glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
	glPixelStorei(GL_PACK_ALIGNMENT, 8);
}

void drawStuff(void)
{
	glRotatef(rot, 0.0f, 1.0f, 0.0f);
	glutWireSphere(1, 20, 20);
}

void display(void)
{
	glReadPixels(0, 0, glWIDTH, glHEIGHT, GL_LUMINANCE, GL_FLOAT, bufferImage);
	FPS();
	glutSwapBuffers();
}

There's no way that glReadPixels() is responsible for such a drastic frame rate drop. Something else is going on.

Be absolutely sure that you actually need to pull the pixels off the board into system memory for your application. There are other render targets you can use for offscreen rendering: PBOs, pbuffers, and the newer FBOs (framebuffer objects - currently only supported on NV hardware, AFAIK).

If you determine that you do indeed need glReadPixels() functionality, then you're stuck with it. The only way to speed it up is to move to PCI Express, if you haven't done so already.
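One middle ground worth trying before giving up: reading into a pixel buffer object makes glReadPixels return immediately and lets the GPU do the transfer in the background; mapping the buffer a frame later avoids most of the stall. A rough sketch, assuming a context that exposes ARB_pixel_buffer_object, an extension loader (e.g. GLEW) for the buffer entry points, and two PBOs already created elsewhere with glGenBuffers/glBufferData:

```c
/* Asynchronous readback sketch -- needs a current GL context, so this is
 * illustrative only.  pbo[] and frame are assumed to be set up elsewhere. */
GLuint pbo[2];   /* two PBOs, ping-ponged across frames */
int frame = 0;

void readback(int w, int h, float *dst)
{
    int cur  = frame % 2;
    int prev = (frame + 1) % 2;

    /* Kick off this frame's transfer.  With a buffer bound to
     * GL_PIXEL_PACK_BUFFER, the last argument is an offset, not a pointer,
     * and the call returns without waiting for the copy. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glReadPixels(0, 0, w, h, GL_LUMINANCE, GL_FLOAT, 0);

    /* Map the previous frame's PBO -- its transfer has had a whole frame
     * to finish, so this map should not stall (much). */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    float *src = (float *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        /* assumes tightly packed rows (GL_PACK_ALIGNMENT of 1, or an
         * even width with 4-byte luminance floats) */
        memcpy(dst, src, (size_t)w * h * sizeof(float));
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    frame++;
}
```

The price is one frame of latency on the data, which is usually acceptable for an offline optimization loop.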

Thank you for your comment. I need to access the pixel values directly for use with an optimization program, and at the moment the only way I can see of doing this is with glReadPixels(). I can't access pbuffers directly from system calls, can I? Interestingly enough, when I run this same program on a different computer with an NVIDIA GPU, I get 85 FPS with and without the glReadPixels() call, which corresponds exactly to the refresh rate. Is there a place I can find out more about the differences between particular video cards? Also, does anyone know where I can find an extension that disables the FPS cap at the refresh rate?

W.r.t. the last bit:
it seems like you're using glutSwapBuffers(). If you're only rendering offscreen, why are you even swapping?

Even if you are using swapbuffers, vsync has to be enabled using an OS-dependent call. On Windows that would be wglSwapIntervalEXT (from the WGL_EXT_swap_control extension). If you never call it, that alone should be enough.
It is of course possible that there’s a driver setting that forces vsync. (E.g. on my Linux box, there’s an environment variable that does it)
On Windows, it would typically be a checkbox in the video driver advanced settings.
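To make the swap interval explicit rather than relying on the driver default, the extension can be queried and called at runtime. A Windows-only sketch, assuming a current WGL context (disable_vsync is a hypothetical helper; the extension is not guaranteed to be exposed, and a control-panel setting that forces vsync on will still win):

```c
#include <windows.h>   /* BOOL, APIENTRY, wglGetProcAddress */

typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int interval);

/* Returns nonzero if vsync was successfully turned off. */
int disable_vsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return 0;                 /* extension not exposed by this context */
    return wglSwapIntervalEXT(0); /* 0 = present immediately, no vblank wait */
}
```

On Linux the equivalent lives in GLX (e.g. the GLX_SGI_swap_control extension), with the same caveat about driver overrides.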