Android Question [SOLVED] I need help with OpenGL drawline measurements

Mrjoey

Hello, I'm using OpenGL 1.7. I'm trying to draw lines, but the vertex values are Floats, and a line drawn at x = 0, y = 1 lands somewhere different than it would on a Canvas: in OpenGL, (0,0) is the center of the screen, while on a Canvas it is the top left. I tried adjusting the camera view but I'm stuck. I see gl.GL_SHORT but no integer option, and I don't know how to use it. Is there any way to fix the measurements? My Activity.Width is 720, so I want (0,0) in OpenGL to be the top left and to draw a line from 0 to 720 in integer pixel values. Here's my code:
B4X:
Sub glsv_Draw(gl As GL1)
    ' As this is running on a separate thread exceptions will cause the application to force close
    ' and not report the exception as would happen on the main thread so we use a Try Catch block
    ' to trap any errors
   
        'The view wants to be drawn using the supplied GL1
       
        gl.glClear(Bit.OR(gl.GL_COLOR_BUFFER_BIT, gl.GL_DEPTH_BUFFER_BIT))
        gl.glMatrixMode(gl.GL_MODELVIEW)
        gl.glLoadIdentity
        gl.gluLookAt(0,0,5,0,0,0, 0,1,0)
        'gl.glTranslatef(-int2float(160*GetDeviceLayoutValues.Scale*1000) ,0,-int2float(SeekBar1.Value))
        'gl.glOrthof(-int2float(Panel1.Width*2),int2float(Panel1.Width) ,-20,20,-1,3)
        'gl.glOrthof(-int2float(SeekBar1.Value),int2float(SeekBar1.Value),-20 ,20,-1,3)
        'gl.glRotatef((eyeangle/(2*3.14159))*360,0,1,0)
        'gl.glRotatef((eyeangle/(2*3.14159))*360,1,0,0)
   
        gl.glEnableClientState(gl.GL_VERTEX_ARRAY)
        gl.glVertexPointerf(2, verts)
        'gl.glColor4f(1,1,1,1)
        gl.glDrawArrays(gl.GL_LINE_STRIP,0,(vn/2))
       
End Sub
Sub glsv_SurfaceChanged(gl As GL1, width As Int, height As Int)
   
        'Called when the surface has changed size.
        gl.glViewport(0, 0, width, height)
        gl.glMatrixMode(gl.GL_PROJECTION)
        gl.glLoadIdentity
        Dim ratio As Float
        ratio = width/height
        gl.gluPerspective(45, ratio,1,100)
End Sub
Sub glsv_SurfaceCreated(gl As GL1)
    'Called when the surface is created or recreated.
    'Log("Created")
    'Try
        gl.glEnable(gl.GL_DEPTH_TEST)
        gl.glDepthFunc(gl.GL_LESS)
        gl.glEnableClientState(gl.GL_VERTEX_ARRAY)
        Dim lightAmbient(0),lightDiffuse(0),lightPos(0) As Float
        lightAmbient=Array As Float(0.2,0.2,0.2,1)
        lightDiffuse=Array As Float(1,1,1,1)
        lightPos=Array As Float(1,1,1,1)
        gl.glEnable(gl.GL_LIGHTING)
        gl.glEnable(gl.GL_LIGHT0)
        gl.glLightfv(gl.GL_LIGHT0,gl.GL_AMBIENT,lightAmbient,0)
        gl.glLightfv(gl.GL_LIGHT0,gl.GL_DIFFUSE,lightDiffuse,0)
        gl.glLightfv(gl.GL_LIGHT0,gl.GL_POSITION,lightPos, 0)
        Dim matAmbient(0),matDiffuse(0) As Float
        matAmbient=Array As Float(1,1,1,1)
        matDiffuse=Array As Float(1,1,1,1)
        gl.glMaterialfv(gl.GL_FRONT_AND_BACK,gl.GL_AMBIENT,matAmbient,0)
        gl.glMaterialfv(gl.GL_FRONT_AND_BACK,gl.GL_DIFFUSE,matDiffuse,0)
       
    'Catch
        ' catch and report any exceptions on the rendering thread to the main thread
        ' set the glsv.DebugFlags to DEBUG_CHECK_GL_ERROR to raise an exception immediately on a GL error
        'GLException(gl)   
    'End Try
End Sub

I created a Sub that converts integer values to Float (it divides by 65536, i.e. a 16.16 fixed-point conversion):
B4X:
Sub int2float(In As Int) As Float
    'Divide by 2^16: converts a 16.16 fixed-point value to a Float
    Return In / 65536
End Sub

Can you help me please? Thanks in advance.
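
For reference, the usual fix is to replace the perspective projection with an orthographic one that maps GL units 1:1 to pixels. A minimal sketch using the same GL1 calls as the code above (glOrthof's arguments are left, right, bottom, top, near, far):
B4X:
Sub glsv_SurfaceChanged(gl As GL1, width As Int, height As Int)
    gl.glViewport(0, 0, width, height)
    gl.glMatrixMode(gl.GL_PROJECTION)
    gl.glLoadIdentity
    'Put (0,0) at the top left and (width, height) at the bottom right,
    'so vertex values are plain pixel counts (passed as Floats).
    gl.glOrthof(0, width, height, 0, -1, 1)
    gl.glMatrixMode(gl.GL_MODELVIEW)
    gl.glLoadIdentity
End Sub

With this projection the gluLookAt call in glsv_Draw is no longer needed, and a line from (0,0) to (720,0) runs along the top edge of a 720-pixel-wide screen; no fixed-point conversion is required, because a Float holds pixel values exactly.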
 

Informatix

Thank you for that, it looks like an excellent performer. But I will be plotting lines using incoming data samples to generate the coordinates, in a sort of graphing application. There will be possibly thousands of points spread over perhaps a dozen polylines. Other solutions that people have kindly put up either use the canvas or don't use VBOs, and from reading questions from users it appears they get bogged down when the data points start to get large. Therefore my plan was to get the data into GPU memory and use the shaders to zoom and pan after my data stream is finished. Perhaps I am taking the wrong approach, and any advice you can offer is very much appreciated. Is it feasible to draw arbitrary lines into a texture? My intensive reading on the subject made me think not, but documentation is always for something someone else needs, and I always want to do something nobody ever wrote about. :)
1) Did you see another application on Android able to display what you expect? Android devices are not very powerful compared to a PC, so your expectations may be a bit too high.
2) Do you accept a delay between the reception of data and their display? If yes, you can plot your graph to a Pixmap then create a texture from this Pixmap for display. You will pan and zoom a texture, which is faster than always redrawing the same points and lines.
3) Shaders have to be very optimized on an Android device to be fast, so if you can do something without using them, it's better.
4) My experience with OpenGL is rather limited. You probably know better than I do how to use VBOs.
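
A minimal sketch of point 2, assuming lGdx wrapper names close to the underlying libGDX Pixmap/Texture API (the initializer and method names below are assumptions, so check the library's documentation):
B4X:
'Hypothetical lGdx names: plot once to a Pixmap, then upload it once as a Texture.
Dim pm As lgPixmap
pm.Initialize(1024, 512, pm.FORMAT_RGBA8888)   'assumed initializer
pm.SetColorRGBA(0, 1, 0, 1)                    'assumed color setter
For i = 1 To NumPoints - 1
    pm.DrawLine(x(i - 1), y(i - 1), x(i), y(i))
Next
Dim tex As lgTexture
tex.InitializeWithPixmap(pm)                   'one upload to GPU memory
pm.dispose
'Afterwards, pan and zoom by moving the camera while the sprite batcher draws tex.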
 

Informatix

I experimented with the library and found it to be very good. I did ask the developer if ShapeRenderer used VBOs, and his answer was no (I already knew that from looking at the source, but he confirmed it).

That's one of the reasons why I recommended using the Sprite batcher instead of the Shape renderer: it uses VBOs if they are available.

Also, I've been thinking about using SpriteRenderer to make 'pretend' lines. Seems it would require some interesting manipulations to control the image when zooming and panning, no? Is this just a limitation of my imagination?

I don't understand the question. What's a "pretend line"?
 

John D.

1) Did you see another application on Android able to display what you expect? Android devices are not very powerful compared to a PC, so your expectations may be a bit too high.
Thank you for your reply, Informatix.

There are some libraries out there designed for what I am doing, but they use the canvas, which is optimized for something different. According to questions in their forums, they apparently bog down when the dataset gets to a few hundred KB, and the trail ends there. There are many closed-source Android applications that do much more complicated line drawing than this. I think this is simply a case where I am the first to go to the trouble of doing this task in OpenGL (others who have done the really complicated things aren't publishing their code). Certainly the GPU (and its shaders) cycles through all the data on every draw, so if the vertices are held in GPU memory I expect it should be no more load than processing bitmaps, but time will certainly tell (because OpenGL is an interface specification, not a hardware or implementation spec, I have to try it to find out). Perhaps I am mistaken in this, but I have punished myself pretty hard reading about how it works. Maybe when I get my example working I'll have something meaningful to contribute back to the community!

2) Do you accept a delay between the reception of data and their display?
I am limiting the speed of data streaming in. My intention is to let the GPU render at whatever speed it will. Let me describe my plan from a wider perspective; please tell me if it sounds reasonable to you:

I am monitoring a repeating process on a machine. I will be reading a stream of data representing, say, 4 or 5 channels. All the data will be formatted on the sending device, so I will receive it as a serial stream of ints or floats, the idea being to avoid doing type conversion on the Android device (but I must scale the values for display). I will have an arbitrarily assigned maximum number of samples for each 'cycle' (process time * sample rate), so there will be a fixed number of vertices for each line, which will determine the size of my array(s).

At the beginning of the cycle, I will create the VBO arrays and ensure that the Surface takes over the whole screen (and otherwise try to remove distractions for the processor). As data comes in, if new values are available when OnDraw gets called, I will scale and append data to the arrays using glBufferSubData and update glDrawArrays with the index where the end of my data is at the moment. (I will certainly optimize using DrawElements and an index array to avoid duplicating the X value, but later; different subject.) So my lines should advance across the screen from left to right as the cycle progresses. (I've made test programs that can do this up to this point, not scaled up yet though.)

At the end of the machine cycle, I intend to let the user examine the dataset visually by zooming in to areas of the resulting graph image and panning around (for troubleshooting, etc.). (Zoom and pan is my next task; I'm sure I can do it with OpenGL ES 1. I failed badly trying to do it in ES 2.) My idea is that this approach puts the data into the native memory of the GPU, which is advertised to be much faster than having the GPU access main memory. The only work the shaders have to do will be involved in rendering -- there will be no data manipulation with custom shaders (all I need them for is zoom and pan). For the GPU, this should be no more workload than displaying a bitmap, if I am not mistaken. Once I get these basic functions working, obviously there are some 'nice to have' enhancements I will work in.

The data stream will be received every 1/10th second. Time will be kept on the sending device, and it will buffer data too. Data will consist of 4 or 5 'Y' values (plus a timestamp to be used for 'X'). This should be a modest requirement. To keep it asynchronous, I will keep track of what is received and what has been pushed to the GPU buffer, so if rendering hangs for some number of 'sample receive' cycles while the data is incoming, no problem, on the next OnDraw it's only a few more data points to update. (The raw data will be kept in arrays in main memory for other purposes so all this will be done by keeping track of a few index values.)

Does this sound reasonable?
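
A minimal sketch of the append-and-draw step described above, assuming the GL wrapper exposes the standard OpenGL ES 1.1 buffer-object calls under their usual names (an assumption, not something confirmed by the GL1 code earlier in this thread):
B4X:
'Once per machine cycle: allocate a fixed-size VBO (maxVerts vertices, x and y as Floats).
gl.glGenBuffers(1, bufferIds, 0)                       'assumed wrapper signature
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, bufferIds(0))
gl.glBufferData(gl.GL_ARRAY_BUFFER, maxVerts * 2 * 4, Null, gl.GL_DYNAMIC_DRAW)

'On each draw, if newVerts arrived since the last frame, append at the current end...
gl.glBufferSubData(gl.GL_ARRAY_BUFFER, vertCount * 2 * 4, newVerts.Length * 4, newVerts)
vertCount = vertCount + newVerts.Length / 2
'...then draw only up to the data received so far.
gl.glDrawArrays(gl.GL_LINE_STRIP, 0, vertCount)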

you can plot your graph to a Pixmap then create a texture from this Pixmap for display.

I investigated this approach, but it seemed to me that if I wanted to display the graph 'live', then I would have to keep updating the image and reading the entire image into the GPU memory on every draw cycle. Please let me know if I am mistaken in this. Wouldn't this bog the display down? Seems to me the strategy in side-scrolling games is to get those textures loaded to the GPU in the beginning, and then try not to swap them out too often. The VBO approach should drastically reduce the data input to the GPU memory if I am not mistaken.

3) Shaders have to be very optimized on an Android device to be fast, so if you can do something without using them, it's better.

That's what I'm trying to do (but they are in there, and the default shaders get used anyway; except there are no defaults in OpenGL ES 2).

4) My experience with OpenGL is rather limited. You probably know better than I do how to use VBOs.
You are too modest. As for me, I have a different career; programming is a tool I use to Get Things Done on the way to realizing a machine design. LOL. But I am used to grinding away at a task until I either make things work the way I want or else determine that it is impossible. So hopefully my success or failure at this might be useful to an expert like yourself. I am not experienced at Java or B4A, and there are thousands of little things I'm learning just to get basic things done in this environment, so your previous contributions here have already helped me immensely. (I try not to show it, but I hate Java. That's why I came to B4A in the first place. Naturally, because of that, I will not be able to avoid learning it despite my best attempts. And all the useful OpenGL information is in C, and uses other libraries besides, often for iOS, so it's C with other libraries (like GLUT) to Java to B4A in order for me to figure anything out. Haha! I always end up plowing new ground for some reason, always using a different microcontroller or whatever, new architecture, new language, new IDE, different communications protocols; perhaps I just like to punish myself. I must be crazy. I never could have predicted I'd find myself in Android-land, LOL.)

What's a "pretend line"?
I envisioned putting sprites in sequence to represent a line. Probably I missed the point. :)

Thank you for your thoughts, sir.

--John--
 

Mrjoey

Guys, watch these videos; very helpful and very educational. The presenter talks a little fast, but I'm sure you will learn how to use VBOs. I didn't expect it to be so easy. Here's the link: Hope it will help
 

Informatix

Does this sound reasonable?

Yes, it is reasonable. Look at the attached example. I plot 1 million data points (at a rate of 10 per second = about 27 hours of data) in real time. Move your finger to the left or to the right to zoom in/out. The bottleneck here is not the time spent drawing the data but the time spent reading them (on my Nexus 7, reading 1 million values from my list at the maximum zoom level takes up to 2 seconds). Is your problem also the time to read them from memory? That's not totally clear to me.
By the way, in this example I used a texture with the sprite batcher to draw the lines.
 

Attachments

  • DataPlotter.zip
    6.7 KB

John D.

Wow, thank you for doing that! I didn't expect you to go to all that effort.

I can't find colorpatch.png, though.

Reading through the code ... it is stunning how much libGDX simplifies things.

--John--
 

John D.

This is an awesome demo, Informatix.

B4X:
Renderer.Color = Renderer.Color.GREEN
Renderer.DrawTex2(texColorPatch, posX - lGdx.Graphics.Width / 2, -lGdx.Graphics.Height / 2, 1, PeekValue)
Renderer.Color = Renderer.Color.WHITE
Renderer.DrawTex2(texColorPatch, posX - lGdx.Graphics.Width / 2, -lGdx.Graphics.Height / 2, 1, AverageValue)
Renderer.Color = Renderer.Color.CYAN
Renderer.DrawTex2(texColorPatch, posX - lGdx.Graphics.Width / 2, -lGdx.Graphics.Height / 2, 1, MinValue)

I wonder why my device only displays the CYAN one (the last DrawTex2); is it the same on yours?
Edit: Never mind, figured it out.
 

John D.

Is your problem also the time to read them from memory? That's not totally clear to me.
I see we are viewing the issue from different angles... :)

In this example I'm not sure if the blocking is occurring because the GPU is reading the entire array on every draw call, but if so, that is exactly what a VBO is designed to address. My readings on the subject have led me to believe that a) OpenGL cannot deal with a List or a dynamically sized array in its own memory space, so it has to read the entire array on every frame; and b) these copy operations are 'expensive'.

Edit: This example explicitly copies the data in on every frame.

Also, your earlier question about my reference to a "pretend line" is now clear to me:

Say we have 3 data points, each assigned a pair of coordinates; let's call them A, B, and C. The task is to draw a line from A to B and another from B to C, to render a display like an oscilloscope (although the visualization in your example is certainly another way to get it done). So it seemed a natural approach to put my points in a vertex buffer object and let OpenGL draw the lines, then pan and zoom with the 'camera'.
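
As a minimal sketch with the GL1 calls from the first post (ax through cy are hypothetical coordinate variables), the A-B-C case is just a three-vertex line strip:
B4X:
'Two connected segments, A-B and B-C, from one vertex array:
Dim verts() As Float = Array As Float(ax, ay, bx, by, cx, cy)   'x,y pairs
gl.glEnableClientState(gl.GL_VERTEX_ARRAY)
gl.glVertexPointerf(2, verts)             '2 Floats per vertex
gl.glDrawArrays(gl.GL_LINE_STRIP, 0, 3)   '3 vertices -> 2 segments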

Your example shows that for the amount of data I'm going to be displaying, speed is not as much of an issue as I thought it would be.

Thank you for putting your efforts into this, I'll do some more experimenting and will report back anything I learn that might be of value.

--John--
 

John D.

Using this method my device draws 100,000 points in about 200 milliseconds per frame.
 

Informatix

In this example I'm not sure if the blocking is occurring because the GPU is reading the entire array on every draw call, but if so, that is exactly what a VBO is designed to address. My readings on the subject have led me to believe that a) OpenGL cannot deal with a List or a dynamically sized array in its own memory space, so it has to read the entire array on every frame; and b) these copy operations are 'expensive'.

Edit: This example explicitly copies the data in on every frame.

My example is not optimized, and that's obvious when you draw the data at the maximum zoom level: it has to read one million values from a list each rendering cycle, and the loop itself introduces a slowdown. To optimize this code, there are a few things to do: use something other than the B4A List object to hold the data (maybe the Array type of libGDX is faster at storing and retrieving the data; I didn't try), and compute the data to plot when you change the zoom level, store the result in a buffer, and read only this buffered data when you draw, so your loop length will be constant: 3 * 100%x.
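
A minimal sketch of that caching idea (the names are illustrative, not taken from the attached example): rebuild a screen-sized buffer only when the zoom level changes, then read only the buffer on each draw.
B4X:
'Hypothetical names; rawData holds the full sample array in main memory.
Dim cachedY() As Float

Sub RebuildCache(samplesPerColumn As Float, firstSample As Int)
    Dim w As Int = 100%x              'one cached point per screen column
    Dim cachedY(w) As Float
    For col = 0 To w - 1
        cachedY(col) = rawData(firstSample + col * samplesPerColumn)
    Next
End Sub

'The render loop then iterates over only w points, whatever the zoom level.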
 