I have a set of points which define connected lines making a 2D maze, i.e. if you draw a line from each point to the next you end up back at the start, having drawn the maze.
I’ve created triangle strips from this and can draw a 3D view of the maze fine. What I want to do now is create a large 2D array (of bytes) representing the maze. The resolution of the array may be more or less than the size of the maze, e.g. if the maze is 100 square then I may want a 2D array at twice the “resolution”, i.e. 200x200. So one array location would correspond to the maze point 0.5,0.5, and another to the bottom corner 99,99.
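To pin down the mapping, here is a minimal sketch of one possible convention (my assumption: array index maps to maze coordinates by a straight scale factor, so index 1,1 lands on maze point 0.5,0.5 at 2x resolution; the function name and parameters are just illustrative):

```python
def array_to_maze(i, j, maze_size=100.0, array_size=200):
    """Map an array cell index (i, j) to a maze-space coordinate.

    Assumes a simple linear scale: index * (maze_size / array_size).
    At 2x resolution, index (1, 1) -> maze point (0.5, 0.5),
    and index (198, 198) -> the corner (99.0, 99.0).
    """
    scale = maze_size / array_size
    return (i * scale, j * scale)
```

Note that under this convention the last index (199) samples at 99.5, just inside the far edge; sampling at cell centers instead (offset by half a cell) is the other common choice, and it matters when you set up the projection later.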
Now I could do it all by hand, i.e. “draw” the maze myself by writing into the array, but what I’d like to do is set up my view and render to a texture which becomes the 2D array, or render to the screen and copy the pixels back into the array (if the array were bigger than the screen I would need multiple renders).
So can I set up my view (directly above the maze, looking straight down) so that the drawn data maps correctly onto the size of the array, then render to a texture using a shader to fill it? Is render-to-texture supported on older cards (I know I can’t rely on NPOT textures)? Would reading back the screen be better? Any other approaches? General pointers?
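For the multiple-renders case, one way to think about it is to tile the array over the screen and compute orthographic bounds (the values you would pass to something like `glOrtho`) per tile, then read each tile back with `glReadPixels` into the right sub-region of the array. A hedged sketch of just the tiling math, with illustrative names (`tile_ortho_bounds` is not a real GL call, and the linear index-to-maze scale is an assumption):

```python
def tile_ortho_bounds(maze_size, array_size, screen_size):
    """Yield one tuple per render pass needed to cover the whole array:
    (left, right, bottom, top, x0, y0, w, h)

    left/right/bottom/top are maze-space bounds for the orthographic
    projection of that pass; (x0, y0) is the destination offset in the
    array and (w, h) the pixel size to read back for that pass.
    Assumes array index i maps to maze coordinate i * maze_size / array_size.
    """
    scale = maze_size / array_size
    for y0 in range(0, array_size, screen_size):
        for x0 in range(0, array_size, screen_size):
            # Clamp the last row/column of tiles to the array edge.
            w = min(screen_size, array_size - x0)
            h = min(screen_size, array_size - y0)
            yield (x0 * scale, (x0 + w) * scale,
                   y0 * scale, (y0 + h) * scale,
                   x0, y0, w, h)
```

For each yielded tuple you would set an orthographic projection with those bounds, render the maze top-down, and read back a w-by-h block of pixels into the array at (x0, y0). A 200x200 array on a 128-pixel viewport, for example, needs four passes.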