Problem about coordinate relations

I need to simulate a camera with OpenGL: given a 3D model and its coordinates in space, the camera focal length f, the size of one CCD cell (width × length), and the size of the CCD array (M × N), render an image as if it had been shot by that camera.
I’ve read many OpenGL reference documents but haven’t found how OpenGL’s frame buffer is related to spatial coordinates. Can anyone here help me? Thank you.

Nobody knows? Let me put it another, simpler way: given a 3D model, how do I generate an M*N image using OpenGL? Can anybody help?

I think the problem is that what you are asking is an algorithmic question, not an OpenGL question… There are literally hundreds of approaches.

What you are trying to do (if I understand) is relate the ‘pixels’ of a CCD to a 3D scene, right?

There are several ways to do this.

Trivially, using totally fixed-function, immediate-mode commands:

Draw an array of unit cubes arranged in an M*N grid, and colour each cube according to the corresponding CCD cell. That would be trivial, and you could then zoom your OpenGL viewpoint out so it goes from a big area of coloured squares / cubes into a picture…

You could get the data (for the ‘image’ the camera is taking) from a file you create.
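
For what it’s worth, a minimal sketch of that trivial approach (flat quads rather than cubes, just to keep it short) could look something like this; the ‘ccd’ array holding the M*N RGB values is only an assumption for illustration:

    #include <GL/gl.h>

    /* Sketch only: draw one flat-coloured quad per CCD cell in an M x N grid.
     * 'ccd' is a hypothetical array of M*N RGB triples read from your file. */
    void drawCCDGrid(int M, int N, const float *ccd)
    {
        glBegin(GL_QUADS);
        for (int j = 0; j < N; ++j) {
            for (int i = 0; i < M; ++i) {
                const float *c = &ccd[3 * (j * M + i)];
                glColor3f(c[0], c[1], c[2]);
                glVertex3f((float)i,       (float)j,       0.0f);
                glVertex3f((float)(i + 1), (float)j,       0.0f);
                glVertex3f((float)(i + 1), (float)(j + 1), 0.0f);
                glVertex3f((float)i,       (float)(j + 1), 0.0f);
            }
        }
        glEnd();
    }

Zooming the viewpoint out (or back) then turns the grid of coloured squares into the picture, as described above.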

Less trivially:
…get the colour data from a texture, which you could map onto the cubes using texture coordinates which span all the cubes.
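
In code that just means giving each cell texture coordinates covering its fraction of the grid; a sketch, assuming the texture has already been created and bound:

    #include <GL/gl.h>

    /* Sketch: same M x N grid as above, but textured; cell (i, j) samples the
     * slice [i/M, (i+1)/M] x [j/N, (j+1)/N] of the bound texture. */
    void drawTexturedGrid(int M, int N)
    {
        glEnable(GL_TEXTURE_2D);
        glBegin(GL_QUADS);
        for (int j = 0; j < N; ++j) {
            for (int i = 0; i < M; ++i) {
                glTexCoord2f((float)i / M,       (float)j / N);       glVertex3f((float)i,       (float)j,       0.0f);
                glTexCoord2f((float)(i + 1) / M, (float)j / N);       glVertex3f((float)(i + 1), (float)j,       0.0f);
                glTexCoord2f((float)(i + 1) / M, (float)(j + 1) / N); glVertex3f((float)(i + 1), (float)(j + 1), 0.0f);
                glTexCoord2f((float)i / M,       (float)(j + 1) / N); glVertex3f((float)i,       (float)(j + 1), 0.0f);
            }
        }
        glEnd();
    }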

Not trivial:
Using shaders and the like to texture your image in a pixelated way onto a single QUAD. There is an example of doing this, with regard to simulating big TV screens, on the LightHouse 3D tutorial site.
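
I don’t remember the exact code from that tutorial, but the core of it is a fragment shader that snaps every texture lookup to the centre of its cell. A rough sketch (the uniform names ‘sceneTex’ and ‘cells’ are invented here; ‘cells’ would be set to (M, N)):

    /* Sketch of a 'pixelating' fragment shader (GLSL 1.20), kept here as a C
     * string; compile and attach it to a program in the usual way. */
    static const char *pixelateFragSrc =
        "#version 120\n"
        "uniform sampler2D sceneTex;\n"
        "uniform vec2 cells;\n"
        "void main() {\n"
        "    vec2 uv   = gl_TexCoord[0].st;\n"
        "    vec2 cell = (floor(uv * cells) + 0.5) / cells; /* centre of the cell */\n"
        "    gl_FragColor = texture2D(sceneTex, cell);\n"
        "}\n";

Draw one QUAD covering the screen with ordinary texture coordinates, and the shader does the pixelation.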

What I would call ‘advanced’:
Use a texture input to a shader to drive a geometry shader to spawn unit Quads as individual CCD ‘pixels’.
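
If you go that route, the geometry-shader stage would look roughly like this (GLSL 1.50; it assumes the vertex shader emits one point per CCD cell together with a colour sampled from the texture, and every name here is made up for the sketch):

    /* Sketch of a geometry shader that expands each incoming point into a
     * small quad, one per CCD 'pixel'. */
    static const char *ccdGeomSrc =
        "#version 150\n"
        "layout(points) in;\n"
        "layout(triangle_strip, max_vertices = 4) out;\n"
        "in vec3 vColour[];\n"
        "out vec3 gColour;\n"
        "uniform float halfSize;   /* half the quad width, in clip space */\n"
        "void main() {\n"
        "    vec4 c = gl_in[0].gl_Position;\n"
        "    gColour = vColour[0];\n"
        "    gl_Position = c + vec4(-halfSize, -halfSize, 0.0, 0.0); EmitVertex();\n"
        "    gl_Position = c + vec4( halfSize, -halfSize, 0.0, 0.0); EmitVertex();\n"
        "    gl_Position = c + vec4(-halfSize,  halfSize, 0.0, 0.0); EmitVertex();\n"
        "    gl_Position = c + vec4( halfSize,  halfSize, 0.0, 0.0); EmitVertex();\n"
        "    EndPrimitive();\n"
        "}\n";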

As you have a good understanding of cameras, the OpenGL Projection and ModelView matrices should be easy for you to grasp; you’ll need these to set up your viewpoint and the scene.
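
For example, with a pinhole model the vertical field of view is 2·atan(sensorHeight / (2·f)), which maps straight onto gluPerspective, and glViewport gives you exactly M*N pixels. A sketch with made-up names, assuming f and the cell size are in the same units:

    #include <math.h>
    #include <GL/glu.h>

    /* Sketch: M x N viewport plus a projection matching the CCD geometry. */
    void setupCamera(int M, int N, double f, double cellW, double cellH)
    {
        double sensorW = M * cellW;
        double sensorH = N * cellH;
        double fovyDeg = 2.0 * atan(sensorH / (2.0 * f)) * 180.0 / 3.14159265358979;

        glViewport(0, 0, M, N);                      /* one framebuffer pixel per CCD cell */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(fovyDeg, sensorW / sensorH,   /* aspect = sensor width / height */
                       0.1, 1000.0);                 /* near/far: pick to suit your scene */
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,    /* eye position (example values) */
                  0.0, 0.0, 0.0,    /* point looked at               */
                  0.0, 1.0, 0.0);   /* up vector                     */
    }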

Other indicators, text, and camera bits could simply be drawn as models or geometric primitives.

With regard to the coordinate system, a good place to start is The Red Book. In fact, one of its chapters uses camera and viewpoint examples to explain the OpenGL coordinate system, etc.

Is that the kind of general guidance you are looking for?

Thanks for your reply, but you mistook my problem. I know how to project a 3D model onto a 2D plane in pure mathematics; I can render an image with only the CPU, but it’s too slow, especially for a complex model and a large image. So I’m trying to resort to the GPU with the help of OpenGL. My problem is: how does OpenGL discretize the image plane into pixels? Or, to put it another way, if I want a ‘pixel’ to represent a given area in 2D, how should I do that with OpenGL?
Note that I’m talking about an ‘image’, not ‘graphics’.

Rasterization rules are covered in the spec starting on page 90. The rules vary depending on the primitive being rendered.

Coordinate transformations start on page 40.
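
To give the short version of what the spec says there: after projection and the perspective divide you end up with normalized device coordinates in [-1, 1], and the viewport transform maps those onto window (pixel) coordinates. With glViewport(0, 0, M, N) that is roughly (a paraphrase, not the spec’s exact wording):

    /* Rough paraphrase of the viewport transform for glViewport(0, 0, M, N).
     * A point at NDC (x_ndc, y_ndc) lands at window coordinates (x_w, y_w);
     * rasterization assigns it to pixel (floor(x_w), floor(y_w)), and pixel
     * centres sit at half-integer window coordinates. */
    void ndcToWindow(double x_ndc, double y_ndc, int M, int N,
                     double *x_w, double *y_w)
    {
        *x_w = (x_ndc + 1.0) * 0.5 * M;   /* 0 .. M across the image           */
        *y_w = (y_ndc + 1.0) * 0.5 * N;   /* 0 .. N, origin at the bottom-left */
    }

So each pixel of an M*N viewport (or an off-screen framebuffer you read back with glReadPixels) corresponds to a fixed 2/M by 2/N patch of the NDC square, which is exactly the ‘pixel represents a given area’ mapping you are after.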