Plotting a 3D surface from x,y,z data

Hello Everyone,
I’m a brand new OpenGL programmer and I have embarked on a project to plot real-time X,Y,Z data to generate a 3D waterfall. I am using VS 2010 C# with the SharpGL libraries.
So far I am successful at the following:
Setting up the environment
Creating a 3D graph with axes, labels, a “floor”, rotation, translation etc.
I am now dealing with plotting the data. As a first test, I have 10 arrays of 10 data points (Y data), with the X increment being 0.1 and the Z increment being 0.1. In other words I have 10 X and 10 Z values which are constant.
Now I can plot this as triangles using the following code (only the first and second rows are shown):

    private void DrawDataTriangles(OpenGL gl)
    {
        gl.Begin(OpenGL.GL_TRIANGLES);

        numberOfSlices = 1;
        numberOfSamples = 10;

        float xPosition = 0;
        float xIncrement = 1.0f / numberOfSamples;   // 1.0f avoids integer division

        for (int i = 0; i < numberOfSlices; i++)
        {
            // stop at numberOfSamples - 1: the loop body reads index j + 1
            for (int j = 0; j < numberOfSamples - 1; j++)
            {
                xPosition = -j / (float)numberOfSamples;

                //First triangle
                gl.Color(0.1, 0.2, 0.3);
                gl.Vertex(xPosition, amplitude_values_firstRow[j], 0);
                gl.Vertex(xPosition, amplitude_values_SecondRow[j], 0.2);
                gl.Vertex(xPosition - xIncrement, amplitude_values_firstRow[j + 1], 0);

                //Second triangle
                gl.Color(0.6, 0.2, 0.7);
                gl.Vertex(xPosition, amplitude_values_SecondRow[j], 0.2);
                gl.Vertex(xPosition - xIncrement, amplitude_values_firstRow[j + 1], 0);
                gl.Vertex(xPosition - xIncrement, amplitude_values_SecondRow[j + 1], 0.2);
            }
        }

        gl.End();
    }

This all works well except when I increase the number of samples. My real data set will have about 2048 Y values and there will be about 100 Z rows. If I try 20,000 samples, frame rates drop to 1.5 fps. So my first question is: what am I doing that I shouldn’t? I’m sure there are much better ways, but being new I have not discovered them yet. Any pointers in this specific case where I plot a static surface?

In the real situation I will be getting new rows (arrays) of Y data every 50 ms. What I think should be done is, instead of plotting everything all over again, to “push” the first two rows to the back of the scene and then just add the next row in front. How does one do this? Translate? What happens when I reach the back of the scene? Do I start deleting rows, or is there a way that OpenGL will do this for me?

Like I said, I’m a total beginner in OpenGL but some hints would be great.
Thank you , Tom

You’re using the legacy API (glBegin/glEnd). Use glDrawElements() instead, preferably with vertex data and indices in buffer objects.

You can’t reasonably avoid drawing the entire frame each time.

In terms of memory management, you should use a circular buffer, with a moderate amount of space in the gap so that you’re overwriting data which hasn’t been used for a few frames.

Hi,
Thank you. Ok on the legacy part.
So, changing to glDrawElements will provide the necessary performance improvement?
Thanks

[QUOTE=tomb18;1285140]Hi,
Thank you. Ok on the legacy part.
So, changing to glDrawElements will provide the necessary performance improvement?
Thanks[/QUOTE]

I suppose you mean z = f(x, y) by “3D graph”.
Pre-calculate the results for [0;100] x [0;100], put the results in a buffer object,
build a program object (shaders), and use it to render the grid (results).

That’s faster because you don’t need to call a function 20,000 times; only 1 function call is needed:
–> glDrawElements(…) or glDrawArrays(…) (the latter doesn’t need an “element buffer”)

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle
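To make the glDrawElements route concrete, here is a rough sketch (in C++, since the linked tutorials are C++; the logic translates directly to C#/SharpGL) of building the index list for a grid surface. `BuildGridIndices` is an illustrative name of my own, not from any library:

```cpp
#include <cstdint>
#include <vector>

// Build an index list that turns a grid of (rows x cols) vertices,
// stored row-major in one vertex buffer, into 2 triangles per cell.
// The result is what you would hand to glDrawElements(GL_TRIANGLES, ...).
std::vector<uint32_t> BuildGridIndices(uint32_t rows, uint32_t cols)
{
    std::vector<uint32_t> indices;
    indices.reserve((rows - 1) * (cols - 1) * 6);
    for (uint32_t r = 0; r + 1 < rows; ++r) {
        for (uint32_t c = 0; c + 1 < cols; ++c) {
            uint32_t i = r * cols + c;   // top-left vertex of this cell
            // first triangle of the cell
            indices.push_back(i);
            indices.push_back(i + cols); // same column, next row
            indices.push_back(i + 1);
            // second triangle of the cell
            indices.push_back(i + 1);
            indices.push_back(i + cols);
            indices.push_back(i + cols + 1);
        }
    }
    return indices;
}
```

The point is that the vertex positions are uploaded once; each new FFT row only changes amplitudes, while this index list never changes.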

Actually it’s not z=f(x,y).
I obtain signal intensity Y as a function of frequency, X, every z milliseconds (50 - 100). The Y data is obtained from a fast Fourier transform of IQ data from a software-defined radio.
So I want to plot this as I obtain it. If you search Google Images for “3d spectrogram” you will see some examples. Here’s one:
https://www.youtube.com/watch?v=hiflzRL7sUY
Currently I plot the 16384 intensity bins of the FFT vs frequency. This will give you a plot of signals throughout a range of frequencies. Imagine if your FM radio showed a graph where all the peaks represented FM radio stations at 97.7, 103.5 etc.; you could see them on the graph in real time.
There are no code examples of this on the internet. None. There are plenty of graphing packages that will do this, at a cost of $1000 and over. So, since I don’t know much about OpenGL or even graphics, this is a new experience for me.
So if you look at this in real time, you need to get two sets of data at two different times to get the first set of triangles. You then need to add the next set in front of all this so that the resulting spectrum moves away from you.

[QUOTE=tomb18;1285145]Actually it’s not z=f(x,y).
I obtain signal intensity Y as a function of frequency, X, every z milliseconds (50 - 100). The Y data is obtained from a fast Fourier transform of IQ data from a software-defined radio.
[/QUOTE]
IOW, it is z=f(x,y), where x=frequency, y=time, z=amplitude.

[QUOTE=tomb18;1285145]So if you look at this in real time, you need to get two sets of data at two different times to get the first set of triangles. You then need to add the next set in front of all this so that the resulting spectrum moves away from you.[/QUOTE]

Circular buffer. At each time interval you replace the oldest row with the latest data. At any point in time, the data in the buffer is split into two regions: one from the oldest row to the end of the buffer, the other from the start of the buffer to the newest row. These two regions need to be drawn separately.
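As a sketch of that two-region split (in C++; `Region` and `SplitRing` are illustrative names of my own, not from any API):

```cpp
#include <cstdint>
#include <utility>

// One contiguous run of rows in the ring buffer: start row + row count.
struct Region { uint32_t first; uint32_t count; };

// Given `oldest` = index of the row that will be overwritten next,
// the rows in time order form two contiguous regions of the buffer,
// each drawable with its own glDrawElements()/glDrawArrays() call.
std::pair<Region, Region> SplitRing(uint32_t oldest, uint32_t totalRows)
{
    Region a{ oldest, totalRows - oldest };  // oldest row .. end of buffer
    Region b{ 0, oldest };                   // start of buffer .. newest row
    return { a, b };
}
```

After storing each new row, advance with `oldest = (oldest + 1) % totalRows;`. When `oldest` is 0 the second region is empty and a single draw call suffices.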

If you’re new to OpenGL, I suggest you start by getting the basic concept working, i.e. drawing everything with two calls to glDrawElements(). Then you can look into using buffer objects and shaders to improve the performance.

Alternatively, put the spectrogram on hold for now and look for a tutorial (or a book) which deals exclusively with modern OpenGL (no glBegin/glEnd or fixed-function pipeline; anything which uses functions not listed in the OpenGL 4 reference pages is using legacy features).

Hi,
Thanks, I’m starting on the glDrawElements.

Thanks, it’s a big landscape so these hints are great.
Tom

here is an example of how to get started:
https://sites.google.com/site/john87connor/graph-plotting-1/graph-plotting

plotting 2D grids or 2D surfaces in 3D is easier with an “element buffer” + glDrawElements(…)

Hi,
I’ve made some progress. As a starting point I’m now using glDrawArrays, where I now have one GL call instead of thousands…
I am however getting hung up on assigning colors to the triangles. One of the issues is that I am doing this in C# and not C++, so it’s not always 100% clear. As far as I understand, I need to create a shader program and use files for the shaders and vertices. This is not possible since I will be getting data in real time. The colors are an issue too. There will be a different color for each vertex depending on the amplitude of the signal. I tried using

        colorBufferArray.Bind(gl);
        gl.DrawArrays(OpenGL.GL_COLOR, 0, 4096 * 6);
        colorBufferArray.Unbind(gl);   

and everything is still white. Of course this doesn’t use shaders and the color values are generated each time a new data set arrives.
Any suggestions?
Thanks

take a look at this example:
https://sites.google.com/site/john87connor/graph-plotting-1/4-3d-graph-points

for “modern OpenGL” you have to:

  1. put your data into buffer objects (VBO) (or texture objects)
  2. tell OpenGL how to read those buffers, therefore you need vertex array objects (VAO)
  3. to draw the bunch of data, you need a program object (which is made up of at least 2 shaders: vertex / fragment)

that’s the basics. It doesn’t matter whether you draw simple static triangles or complex graphs in 3D

virtually all the math is done in the vertex shader; it puts each point in its correct place on the screen
the fragment shader basically gives each pixel covered by your graph a color

if you call glDrawArrays(…), OpenGL starts reading your VBO and sends the data, according to the settings of your VAO, to the program object


the example above divides the application into 3 main phases:

  1. Initialize()
  2. Render()
  3. CleanUp()

your C# application, too, has to do some initializing, some cleanup when the app terminates, and some updates / drawing while the app is running

initialize:
– build shaders / program object, VAO + VBO + allocate memory for 1 set of signal data (let’s say array<vec3, 100>)

while running:
– each frame, render the buffers content
– each second, put new signal data into the buffer object

terminate:
– release shaders / program object, VAO + VBO


now your goal is (I assume) to show the current signal + the previous (let’s say 9) signals to see how it behaves
so in total you have to draw 10 signals, one beside the next (3rd dimension)
that means you have to allocate 10x array<vec3, 100>; each second you overwrite the oldest signal with the new incoming one (as GClements said: a “ring buffer”)


regarding color:
you can process the color directly in the vertex shader, and send the result to the fragment shader


if you don’t use shaders, the “default” program (0) only renders without colors (black/white) in screen space
that’s of course not the solution to your problem; start with a simple app that uses 1 program + 1 VAO + 1 VBO

This is great!
Based on your description and an example for SharpGL “Modern GL” I now have a basic modern GL app that draws a colored square on the screen.
Next step is understanding each and every part of the code before continuing. I can then move forward with the 3D as per your example.
Sometimes I wish I had learned C++ instead of C#; there don’t seem to be as many examples out there for C#.
Thanks again, I’ll probably be back again (not probably, definitely I’m sure!)
Best regards, Tom

If you’re using glVertexPointer() for the vertex positions, you’d use glColorPointer() for the colours. Both arrays need the same number of elements (one vector = 3 values for each vertex).

You don’t need to use files. It’s common for shader source code to be read in from a text file (because that makes editing easier, compared to using a string literal), but that’s not necessary.

This is meaningless. You don’t draw colours; you draw triangles/quads/etc. To specify colours for vertices, use glColorPointer() and glEnableClientState(GL_COLOR_ARRAY).

With modern OpenGL, colours are just another vertex attribute. You’d replace both glVertexPointer() and glColorPointer() with glVertexAttribPointer(), and glEnableClientState() with glEnableVertexAttribArray(). However: if colour is based upon height, there’s no need to specify both the position and colour for each vertex; you can just specify the position and determine the colour from the Z coordinate (either using an arithmetic expression or using a 1-D texture as a palette).
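For the arithmetic-expression option, the mapping might look like the following. This is a C++ sketch of the calculation you would put in the vertex shader; the blue-to-red ramp, the names, and the amplitude range are my own illustrative choices:

```cpp
#include <algorithm>

struct Rgb { float r, g, b; };

// CPU-side equivalent of deriving a colour from the height value:
// normalise the amplitude into [0,1] and ramp from blue (low) to red (high).
// minA / maxA are whatever amplitude range your FFT output spans.
Rgb ColorFromAmplitude(float a, float minA, float maxA)
{
    float t = (a - minA) / (maxA - minA);
    t = std::clamp(t, 0.0f, 1.0f);   // C++17; guard against out-of-range data
    return { t, 0.2f, 1.0f - t };    // low amplitude = blue, high = red
}
```

In GLSL the same expression lives in the vertex shader, with the result passed to the fragment shader as an interpolated output, so the colour buffer disappears entirely.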

Hi,
Ok, I’m starting to get somewhere, but the real difficulty I have is that I’m coding in C#, not C++.
Initially I started using SharpGL. The problem with SharpGL is that documentation is pretty well non-existent. Lots of examples, but if they are not explained this gets you nowhere. Not to mention that the examples are very loose as to what they call VAO and VBO. So I started looking elsewhere and I found opengl4csharp. The nice thing is that there are a number of tutorials on YouTube that pretty well explain how to do everything, and it is quite close to the C++ examples I have seen. They are great. But, and it’s a big but, it uses FreeGlut, and there seems to be a compatibility issue on Windows if you try to use a 64-bit version of it. I must have both 32-bit and 64-bit apps. So I cannot use this one. So I am back to SharpGL.
So I am trying to use sharpGL but follow the tutorials for opengl4csharp and the differences are significant.
But now, thanks to the help here, I get the idea, and am now moving on to glDrawElements…
Thanks, Tom

I have now figured out the whole VBO, VAO, shaders etc. I have a basic program that plots a pyramid, and I can rotate it with the mouse. I have started adding axes to my graph and I have run into a snag with rotation of the axes and the pyramid. It seems that both rotate independently around their own centers.
I do the following:

program[“model_matrix”].SetValue(Matrix4.CreateRotationY(yangle) * Matrix4.CreateRotationX(xangle));

Draw my pyramid using glDrawElements
Draw my axes using glDrawElements as well

Why should they rotate independently? How would I make the center of my x,y,z graph the center of rotation?
Thanks

[QUOTE=tomb18;1285186]How would I make the center of my x,y,z graph the center of rotation?
Thanks[/QUOTE]

I assume you want to rotate the camera around your (static) scene; in other words: your static coordinate system + graph

there are 3 different kinds of 4x4 matrices:

  1. “Model-To-World” matrix: sets an object to a global position with a certain orientation / scale

  2. “View” matrix: sets your camera to a global position with a certain orientation

  3. “Projection” matrix: sets your view frustum

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

a more complex / detailed description can be found here:
http://www.learnopengl.com/#!Getting-started/Coordinate-Systems

or google for “viewing pipeline” for more information

this is how you combine them into the final “MVP”

mat4 MVP = Projection * View * Model;

the last step is to just multiply your model’s vertices by that MVP matrix


to answer your question:

use “mat4(1)”, the “identity matrix”, as your model matrix (that means no translation / rotation at all)
just translate / rotate your camera (“view matrix”) if you want to virtually “move” around your scene
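To see why putting the rotation only in the view matrix makes the axes and the pyramid turn together: rotating the camera by +angle around the scene origin produces the same image as rotating every vertex of every object by -angle about that same origin, so nothing rotates about its own center. A minimal C++ sketch of a Y-axis rotation applied to a point (the `Vec3` type is illustrative; a real app would use GLM or SharpGL’s Matrix4):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a point about the world Y axis (right-handed convention).
// Applying this to every vertex with a fixed angle is equivalent to
// orbiting the camera around the origin by the opposite angle.
Vec3 RotateY(Vec3 v, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}
```

Because the rotation is about the shared world origin, the graph’s center stays the center of rotation for everything drawn with model = identity.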

Got it. I’ve come pretty far! I have 2 vertex and 2 fragment shaders; I’m getting along quite well.
A couple more questions have come up.
The axis lines need antialiasing. I understand that the most general approach is to use MSAA. How is this done in modern OpenGL? I haven’t found any examples.

http://www.learnopengl.com/#!Advanced-OpenGL/Anti-Aliasing

this tutorial uses “glfw” to create a window (+ a default framebuffer)
I assume you don’t use C# + glfw, so you have to figure out how to tell your window manager to use multiple samples per pixel

or you use your own “framebuffer object” (FBO) (which doesn’t contain a framebuffer but manages so-called attachments)
to render your scene offscreen, you have to attach a multisampled texture (or renderbuffer) to your FBO, and bind it
then you call glBlitFramebuffer(…) to copy the (multisampled) rendered image into your window’s framebuffer (= on screen)

https://sites.google.com/site/john87connor/framebuffer/tutorial-10-6-framebuffer-multisample

Assuming that you are indeed using C#, you may want to consider taking a step back or two and learning C++. I do both and I love both. But most of the tutorials and such that you see for OGL are written in C++ and personally I think C++ is a more natural choice for OGL.

I think C# is a wonderful step towards learning C++. I learned it backwards starting with C and then discovering that C# is so similar that combined with a little knowledge of Visual Basic .Net that I had, I already “knew” C# and didn’t have to “learn” it. I just woke up one morning and started writing C# code. Needless to say, over the years I got better at C# and it really helped me to think “object oriented”, which was very different from what I had learned up to that point having done more traditional inline coding my whole life even though I had learned the basic principles of Object Oriented Programming (OOP) in a couple different languages at that point. C# really changed my whole mindset about OOP. So, when I came back to C++ (I was taught standard ANSI C in college but had been using Visual C++ all along at home), OOP in C++ made far better sense. So, I think having a background in C# (or maybe Java or some extremely OOP language) makes you ready to learn C++ and do well with it.

If you’ve done a lot of C#, then I’m sure you’re familiar with List<>. The STL has std::vector<>, which is basically the same thing. The STL has std::string to make C++ strings easier than nul-terminated char arrays, and std::map<>, which I believe is similar to .Net’s Dictionary<>. After you read a book on C++, the STL will give you back a lot of what you lost going from C#.Net to C++, and it’s now pretty much an official part of C++. Other libraries like GLM, GLFW, GLEW, and FreeImage will help things along substantially as well, even if they are not a standard part of C++ (although they are pretty standard for OGL).

The differences between C++ and C#: C++ does not have access to the .Net library in unmanaged code (you can do CLR programming, but that’s another subject), and you have to learn to write unmanaged code, which means being responsible for your own memory allocation and de-allocation, and for pointers. Get a good dedicated instructional book on C++ pointers; it’s such a difficult subject that you really need to study it in depth and focus on it for a bit to really “get it”. Such a book should probably get into memory allocation and deallocation as well. And there are other libraries that do some of what the .Net library did. You’ll want to look at GLM and learn the basics of the STL.

You would probably want to spend a little dedicated time on this by maybe reading a book on C++, STL, and pointers (probably in that order), but you could probably learn as you go since you already are comfortable in C#.

Anyway, it’s something to consider. You can always go back to C# if you like. Worst case, you would then be able to read C++ OGL code examples better even if you decided C++ was not what you want to do.

I started 3D game programming with C# and XNA and still love it. I’m a huge fan of C#. I still use it when I think the project is best done in C#, for example I wrote a C# program to read my binary model files for my 3D models which is basically serialized data from my C++ model class in binary form (almost completely un-human readable). I was able to whip up a file browser in C# in no time to help me read my files for debugging purposes on my C++ project. I also have been known to prototype projects in C# and XNA such as when I built my own model exporter in Python from Blender. And I wrote a C#/XNA program to play back humanoid armature animation data from Blender and display it as an animated stick figure while learning how to do skinned animation from scratch for my C++ basic game engine.

But as much as I love C#, I prefer to do OGL in C++. I think C++ will open up a lot of additional options for OGL and if nothing else, you’ll find most examples and tutorials for OGL written in C++. And you can always go back to C#. I go back and forth as much as I find it helpful to do so.

[QUOTE=john_connor;1285205]http://www.learnopengl.com/#!Advanced-OpenGL/Anti-Aliasing

this tutorial uses “glfw” to create a window (+ a default framebuffer)
I assume you don’t use C# + glfw, so you have to figure out how to tell your window manager to use multiple samples per pixel

or you use your own “framebuffer object” (FBO) (which doesn’t contain a framebuffer but manages so-called attachments)
to render your scene offscreen, you have to attach a multisampled texture (or renderbuffer) to your FBO, and bind it
then you call glBlitFramebuffer(…) to copy the (multisampled) rendered image into your window’s framebuffer (= on screen)

https://sites.google.com/site/john87connor/framebuffer/tutorial-10-6-framebuffer-multisample[/QUOTE]

Hmmm, I followed the first link and found a tutorial on enabling multisampling in C# and FreeGlut, and it doesn’t seem to do anything to the lines I draw with glDrawElements.
This is what I put in place:

        // enable blending
        Gl.Enable(EnableCap.Blend);
        Gl.Enable(EnableCap.ProgramPointSize);
        Gl.Enable(EnableCap.Multisample);
        Gl.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.One);

I’m sure I’m doing something wrong here.
Thanks

blending and multisampling are 2 different things
just enabling multisampling actually does nothing; you have to use a multisampled framebuffer attachment. If you take a look at the 2nd link above:

step 0: (initialize)
– building a multisampled framebuffer
– glEnable(GL_MULTISAMPLE)

step 1: (render)
– glBindFramebuffer(GL_FRAMEBUFFER, myframebuffer);
– all subsequent commands will render stuff NOT into the window, but into my offscreen framebuffer

step 2: (render)
– stuff gets rendered

step 3: (render)
– glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
– all subsequent draw-commands will render stuff into the window again
– glBindFramebuffer(GL_READ_FRAMEBUFFER, myframebuffer);
– all subsequent read-commands will read stuff from my offscreen framebuffer
– glBlitFramebuffer(…)
– that is a “draw” command, it copies the content of myframebuffer into the window (which is framebuffer = 0)

steps 1, 2, 3 are (almost) always the same when using FBOs, whether you use multisampling or not
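Steps 0–3 above, condensed into a C++ sketch with raw GL calls. The function names are from the core spec, but this is only a skeleton: it needs a live GL context and a loader (GLEW here), the width/height/samples values are placeholders, and a real scene would also want a depth renderbuffer attached:

```cpp
#include <GL/glew.h>   // or any other GL function loader

GLuint fbo = 0, colorRb = 0;

// step 0: build a multisampled framebuffer (once, at initialization)
void InitMsaaFbo(int width, int height, int samples)
{
    glGenRenderbuffers(1, &colorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                     GL_RGBA8, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb);

    glEnable(GL_MULTISAMPLE);
}

// steps 1-3: render offscreen, then resolve into the window
void RenderFrame(int width, int height)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);            // step 1: go offscreen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... step 2: draw the scene here ...

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);         // step 3: resolve
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```

SharpGL exposes the same entry points under `OpenGL.GenFramebuffersEXT`-style wrappers, so the call sequence carries over unchanged.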

if you want to not just copy your FBO content into the window, but sample from the texture attachment while doing some postprocessing, you’d have to use another read function in your fragment shader:
instead of “texture(mysampler, …)” you have to use “texelFetch(mysampler, …, sample)”, where sample is a number up to your texture’s sample count per texel

https://www.opengl.org/sdk/docs/man/html/texelFetch.xhtml