I have created a scatter plot using Core Plot. My graph, however, needs to be refreshed dynamically (points are constantly being added and removed), and I need the plot to be fluid and appear to slide across the graph. Instead I seem to be getting a choppy line that adds several values at once, freezes, and then adds several more values. What could be causing this behaviour?
- (void)updateDataWithVal:(double)percentageUsage
{
    // Keep a rolling window of the ten most recent samples:
    // drop the oldest value, then insert the new one at the front.
    if ([self.graphData count] >= 10)
    {
        [self.graphData removeLastObject];
    }
    [self.graphData insertObject:[NSNumber numberWithDouble:percentageUsage] atIndex:0];
    [self.graph reloadData];
}
Above is the method that is called every time I want the graph to change. The problem isn't with the data being updated. I debugged the method and confirmed that the data is updated at a steady rate (one point is added and one removed from the data array every second). The problem is with the graph actually changing. What could be causing the graph to freeze and then add several points at once (every 6-7 seconds) instead of updating continuously every second like the data does?
I doubt this is being caused by adding too many points in a short interval. Only one point is removed and added per second, and my graph has only one plot.
My graph is running on OS X, not iOS. All code is in Objective-C.
As requested, I can convert my comments into an answer so that this can be closed out.
Core Plot graphs are heavily reliant on display elements, so any updates to them must be performed on the main thread. Otherwise, you will see odd rendering behavior such as inconsistent updates and visual artifacts, and your application will most likely crash at some point.
I have to do the same thing that you describe within one of my Mac applications. For this, I use a background GCD queue to handle the data acquisition and processing to avoid blocking the main thread. However, every time that I need to insert the results into the graph and have it update, I use dispatch_async() to wrap the appropriate code in a block to be performed on the main thread. This should protect you against rendering oddities like what you see here.
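A minimal sketch of that pattern, reusing the updateDataWithVal: method from the question (the queue label and the currentPercentageUsage helper are illustrative, not part of the original code):

// Illustrative: a serial background queue for acquiring and processing samples.
dispatch_queue_t acquisitionQueue = dispatch_queue_create("com.example.graph-data", DISPATCH_QUEUE_SERIAL);

dispatch_async(acquisitionQueue, ^{
    // Hypothetical helper that produces the next sample off the main thread.
    double percentageUsage = [self currentPercentageUsage];

    // Anything that touches the Core Plot graph goes back to the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateDataWithVal:percentageUsage];
    });
});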
I have a LabVIEW program where I am collecting data at 2 Hz. I have 8 channels of data that I need to plot on a waveform chart. However, because the program needs to run for long periods of time, I run into memory issues from storing all of the data on the chart. I would like the chart's update frequency to be a user input, but I cannot figure out how to do it. I tried passing the data in through a loop, but it would never execute.
To paint a clearer picture: I want to plot only every other data point, or even fewer; I don't need all the data points on the plot.
You can use a master/slave setup and have an event-triggered update. If you need to, you can create a global variable file to store your data; when you trigger the update, it will be read from there.
I tend to always separate the UI in LabVIEW into its own thread this way, and it works well for what you're describing.
Update: For anyone who stumbles upon this, it seems that SceneKit has a threshold for the maximum number of objects it can render. Using [SCNNode flattenedClone] is a great way to increase the number of objects it can handle. As @Hal suggested, I'll file a bug report with Apple describing the performance issues discussed below.
I'm somewhat new to iOS and I'm currently working on my first OS X project for a class. I'm essentially creating random geometric graphs (random points in space connected to one another if the distance between them is ≤ a given radius) and I'm using SceneKit to display the results. I already know I'm pushing SceneKit to its limits, but if the number of objects I'm trying to graph is too large, the whole thing just crashes and I don't know how to interpret the results.
My SceneKit scene consists of the default camera, 2 lighting nodes, approximately 5,000 SCNSpheres each within an SCNNode (the vertices of the graph), and then about 50,000 connections built as custom geometry with the SCNGeometryPrimitiveTypeLine primitive type, each also within an SCNNode. All of these nodes are then added to one large node which is added to my scene.
The code works for smaller numbers of spheres and connections.
When I run my app with these specifications, everything seems to work fine, then 5-10 seconds after executing the following lines:
dispatch_async(dispatch_get_main_queue(), ^{
    [self.graphSceneView.scene.rootNode addChildNode:graphNodes];
});
the app crashes with the terse output shown in the attached screenshot.
Given that I'm sort of new to Xcode and used to more verbose output upon crashing, I'm in a bit over my head. What can I do to get more information about this crash?
That's terse output for sure. You can attack it by simplifying until you don't see the crash anymore.
First, do you ever see anything on screen?
Second, your call to
dispatch_async(dispatch_get_main_queue(), ^{
    [self.graphSceneView.scene.rootNode addChildNode:graphNodes];
});
still runs on the main queue, so I would expect it to make no difference in perceived speed or responsiveness. So take addChildNode: out of the GCD block and invoke it directly. Does that make a difference? At the least, you'll see your crash immediately, and might get a better stack trace.
Third, creating your own geometry from primitives like SCNGeometryPrimitiveTypeLine is trickier than using the SCNGeometry subclasses. Memory mismanagement in that step could trigger mysterious crashes. What happens if you remove those connection lines? What happens if you replace them with long, skinny SCNBox instances? You might end up using SCNBox by choice anyway, because it's easier to style in SceneKit than a primitive line.
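If it helps to compare against a minimal, known-good construction, a single line connection is typically built along these lines (the endpoint coordinates are illustrative):

// Two endpoints of one connection (illustrative coordinates).
SCNVector3 vertices[] = { SCNVector3Make(0, 0, 0), SCNVector3Make(1, 2, 3) };
int indices[] = { 0, 1 };

SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:vertices count:2];
NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
SCNGeometryElement *lineElement =
    [SCNGeometryElement geometryElementWithData:indexData
                                  primitiveType:SCNGeometryPrimitiveTypeLine
                                 primitiveCount:1
                                  bytesPerIndex:sizeof(int)];

SCNGeometry *lineGeometry = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[lineElement]];
SCNNode *connectionNode = [SCNNode nodeWithGeometry:lineGeometry];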
Fourth, take a look at @rickster's answer to this question about optimization: SceneKit on OS X with thousands of objects. It sounds like your project would benefit from node flattening (flattenedClone), and possibly from the use of SCNLevelOfDetail. But these suggestions fall into the category of premature optimization, the root of all evil.
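A sketch of the flattening step that the linked answer describes, using the graphNodes container from the question:

// Build everything under one container node first...
SCNNode *graphNodes = [SCNNode node];
// ... add the sphere and connection nodes to graphNodes here ...

// ...then collapse the hierarchy into a single node so SceneKit can render it
// with far fewer draw calls, and add that flattened node to the scene instead.
SCNNode *flattenedGraph = [graphNodes flattenedClone];
[self.graphSceneView.scene.rootNode addChildNode:flattenedGraph];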
It would be interesting to hear what results from toggling between the Metal and OpenGL renderers. That's a setting on the SCNView in IB ("preferred renderer" I think), and an optional entry in Info.plist.
I’m writing an interactive application using wxPython and matplotlib. I have a single graph embedded in the window as a FigureCanvasWxAgg instance. I’m using this graph to display some large data sets, between 64,000 and 512,000 data points, so matplotlib’s rendering takes a while. New data can arrive every 1–2 seconds so rendering speed is important to me.
Right now I have an update_graph_display method that does all of the work of updating the graph. It handles updating the actual data as well as things like changing the y axis scale from linear to logarithmic in response to a user action. All in all, this method calls quite a few methods on my axes instance: set_xlim, set_ylabel, plot, annotate, and a handful of others.
The update_graph_display method is wrapped in a decorator that forces it to run on the main thread in order to prevent the UI from being modified from multiple threads simultaneously. The problem is that all of this graph computation and drawing takes a while, and since all of this work happens on the main thread, the application is unresponsive for noticeable periods of time.
To what extent can the computation of the graph contents be done on some other thread? Can I call set_xlim, plot, and friends on a background thread, deferring just the final canvas.draw() call to the main thread? Or are there some axes methods which will themselves force the graph to redraw itself?
I will reproduce @tcaswell's comment:
No Axes methods should force a re-draw (and if you find any that do please report it as a bug), but I don't know enough about threading to tell you they will be safe. You might get some traction using blitting and/or re-using artists as much as possible (via set_data calls), but you will have to write the logic to manage that your self. Take a look at how the animation code works.
I have tweeted an image illustrating the problem with Flex ColumnSeries on a PlotChart when trying to overlay one on top of another.
Essentially, it can display one series fine, and two or more are OK on initialization, but after a bit of manipulation (in the user session) the columns lose their sense of where zero is and begin to float (these series have no minField, so zero is their starting point). FWIW: the axis for these columns is on the right, but that can change given the type of data displayed.
The app this is for allows users to turn multiple series of multiple plotting styles on and off, change visual parameters, and even the order in which the series stack on top of each other -- just to give you an idea of what's going on.
Due to how dynamic this all is, I am doing most of the code in ActionScript.
So the questions are:
Is this fixable? Googling around has provided no insights, regardless of inquiry.
Is there a refresh function or equivalent within PlotChart/CartesianCharts that may help?
Might this be a problem not with the chart canvas, but with the axis the series points to, or with the series itself?
If it has not been made clear already: I am lost on this. The issue, which I have known about for about a year now, was first discovered in a beta version of the app I am working on, but it took a while for it to surface in an average user session. As the complexity of the app has grown (by client demand), the issue takes much less time to surface.
The issue also occurs on all versions of Flex I have used: 4.5, 4.6, 4.9... etc.
Please help, or offer pointers. Thanks!
This is what happens:
The drawGL function is called at the exact end of the frame thanks to a usleep, as suggested. This already maintains a steady framerate.
The actual presentation of the renderbuffer takes place in drawGL(). Measuring the time it takes to do this gives me fluctuating execution times, resulting in a stutter in my animation. This timer uses mach_absolute_time, so it's extremely accurate.
At the end of my frame, I measure timeDifference. Yes, it's 1 millisecond on average, but it deviates a lot, ranging from 0.8 to 1.2 milliseconds, with peaks of more than 2 milliseconds.
Example:
// Every fraction of a second I call tick.
- (void)tick
{
    [self drawGL];
}

- (void)drawGL
{
    // startTime using mach_absolute_time;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    // endTime using mach_absolute_time;
    // timeDifference = endTime - startTime;
}
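For reference, the timeDifference above is in Mach time units rather than milliseconds; the usual conversion looks something like this (the helper name is illustrative):

#include <mach/mach_time.h>

// Illustrative helper: convert a mach_absolute_time() delta to milliseconds.
static double MachTicksToMilliseconds(uint64_t ticks)
{
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
        mach_timebase_info(&timebase);
    }
    // Mach ticks -> nanoseconds -> milliseconds
    return (double)ticks * timebase.numer / timebase.denom / 1.0e6;
}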
My understanding is that once the framebuffer has been created, presenting the renderbuffer should always take roughly the same amount of time, regardless of the complexity of the frame. Is this true? And if not, how can I prevent this?
By the way, this is an example from an iPhone app, so we're talking about OpenGL ES here, though I don't think it's a platform-specific problem. If it is, then what is going on? Shouldn't this not be happening? And again, if so, how can I prevent it from happening?
The deviations you encounter may be caused by many factors, including the OS scheduler kicking in and giving the CPU to another process, or similar issues. In fact, a normal human won't notice the difference between 1 ms and 2 ms render times. Motion pictures run at 25 fps, which means each frame is shown for roughly 40 ms, and that looks fluid to the human eye.
As for the animation stuttering, you should examine how you maintain a constant animation speed. The most common approach I've seen looks roughly like this:
while(loop)
{
    lastFrameTime; // time it took for the last frame to render
    timeSinceLastUpdate += lastFrameTime;

    if (timeSinceLastUpdate > (1 second / DESIRED_UPDATES_PER_SECOND))
    {
        updateAnimation(timeSinceLastUpdate);
        timeSinceLastUpdate = 0;
    }

    // do the drawing
    presentScene();
}
Or you could just pass lastFrameTime to updateAnimation every frame and interpolate between animation states. The result will be even more fluid.
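In the same pseudocode style, that variant looks roughly like this:

while(loop)
{
    lastFrameTime; // time it took for the last frame to render

    // Advance the animation by exactly the elapsed time every frame and
    // interpolate between animation states inside updateAnimation.
    updateAnimation(lastFrameTime);

    presentScene();
}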
If you're already using something like the above, maybe you should look for culprits in other parts of your render loop. In Direct3D the costly operations were draw-primitive calls and render-state changes, so you might want to check the OpenGL analogues of those.
My favorite OpenGL expression of all times: "implementation specific". I think it applies here very well.
A quick search for mach_absolute_time results in this article: Link
It looks like the precision of that timer on an iPhone is only 166.67 ns (and maybe worse).
While that may explain the large difference, it doesn't explain why there is a difference at all.
The three main reasons are probably:
Different execution paths during renderbuffer presentation. A lot can happen in 1ms and just because you call the same functions with the same parameters doesn't mean the exact same instructions are executed. This is especially true if other hardware is involved.
Interrupts/other processes, there is always something else going on that distracts the CPU. As far as I know the iPhone OS is not a real-time OS and so there's no guarantee that any operation will complete within a certain time limit (and even a real-time OS will have time variations).
Any other OpenGL calls that are still being processed by the GPU might delay presentRenderbuffer. That's the easiest one to test: just call glFinish() before taking the start time.
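A quick way to test that last point, keeping the same drawGL structure as in the question (a sketch only):

- (void)drawGL
{
    // Drain any GL work still queued on the GPU so the measurement below
    // covers only the presentation itself.
    glFinish();

    // startTime using mach_absolute_time;
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    // endTime using mach_absolute_time;
    // timeDifference = endTime - startTime;
}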
It is best not to rely on a high constant frame rate for a number of reasons, the most important being that the OS may do something in the background that slows things down. It is better to sample a timer and work out how much time has passed each frame; this should ensure smooth animation.
Is it possible that the timer is not accurate at the sub-millisecond level, even though it is returning decimal values between 0.8 and 2.0?