I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it to a graph in real time.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, mine loads the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I decimate it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec when fully zoomed out and a bit more detail if I zoom in to a specific range.
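For reference, the decimation itself stays very cheap if it's done before the points ever reach the series. A minimal sketch (plain C++ for illustration; decimate is a hypothetical helper, and real code might keep min/max pairs to preserve peaks):

    #include <cstddef>
    #include <vector>

    // Thin `samples` down to roughly `target` evenly spaced points.
    std::vector<double> decimate(const std::vector<double>& samples, std::size_t target)
    {
        if (target == 0 || samples.size() <= target)
            return samples;                            // nothing to thin out
        std::vector<double> out;
        out.reserve(target);
        const double stride = static_cast<double>(samples.size()) / target;
        for (std::size_t i = 0; i < target; ++i)
            out.push_back(samples[static_cast<std::size_t>(i * stride)]);
        return out;
    }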
With the aid of ReSharper, I've noticed that the application is calling the IsValidPoint method an enormous number of times (something like 400,000,000 calls across my six plots), which is taking a lot of time.
I think the problem is that, when I add new points to a series, it validates every point in the series instead of only the newly added values.
It also spends a lot of time in the MeasureText/DrawText methods.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. The text shown doesn't change either.
Thank you in advance for any advice you can give me. Have a good day!
I am currently using OpenCV's built-in patch-based histogram back projection (cv::calcBackProjectPatch()) to identify regions of a target material in an image. With an image resolution of 640 x 480 and a window size of 10 x 10, processing a single image requires ~1200 ms. While the results are great, this is far too slow for a real-time application (which should have a processing time of no more than ~100 ms).
I have already tried reducing the window size and switching from CV_COMP_CORREL to CV_COMP_INTERSECT to speed up the processing, but have not seen any appreciable speed up. This may be explained by the OpenCV documentation (emphasis mine):
"Each new image is measured and then converted into an image array over a chosen ROI. Histograms are taken from this image in an area covered by a 'patch' with an anchor at center, as shown in the picture below. The histogram is normalized using the parameter norm_factor so that it may be compared with hist. The calculated histogram is compared to the model histogram hist using the function cvCompareHist() with the comparison method=method. The resulting output is placed at the location corresponding to the patch anchor in the probability image dst. This process is repeated as the patch is slid over the ROI. *Iterative histogram update by subtracting trailing pixels covered by the patch and adding newly covered pixels to the histogram can save a lot of operations, though it is not implemented yet.*"
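To make the iterative update the docs describe concrete, here is a rough sketch (C++ for illustration; compareHistograms is a placeholder for whatever comparison runs at each patch position). The point is that each one-pixel slide costs O(patch height) instead of O(patch area):

    #include <cstdint>
    #include <vector>

    // Slide a pw x ph patch across one row of an 8-bit grayscale image,
    // updating a 256-bin histogram incrementally. Assumes rowTop + ph <= image height.
    void slideAcrossRow(const std::vector<uint8_t>& img, int w,
                        int pw, int ph, int rowTop)
    {
        std::vector<int> hist(256, 0);
        for (int y = rowTop; y < rowTop + ph; ++y)     // build leftmost patch once
            for (int x = 0; x < pw; ++x)
                ++hist[img[y * w + x]];
        // compareHistograms(hist);                    // patch anchored at x = 0
        for (int x = 1; x + pw <= w; ++x) {
            for (int y = rowTop; y < rowTop + ph; ++y) {
                --hist[img[y * w + (x - 1)]];          // column leaving the patch
                ++hist[img[y * w + (x + pw - 1)]];     // column entering the patch
            }
            // compareHistograms(hist);                // patch anchored at x
        }
    }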
This leaves me with a few questions:
Is there another library that supports iterative histogram updates?
How significant a speed-up should I expect from using an iterative update?
Are there any other techniques for speeding up this type of operation?
As mentioned, integral histograms will definitely improve speed in OpenCV.
Please take a look at a sample implementation at the following link:
http://smsoftdev-solutions.blogspot.com/2009/08/integral-histogram-for-fast-calculation.html
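For a sense of why integral histograms help, here is a minimal sketch (my own C++ illustration, not the code from that link): build one integral image per bin, after which the histogram of any window is recovered in O(bins), independent of window size. Note the memory cost of (w+1) x (h+1) x bins counters.

    #include <cstdint>
    #include <vector>

    struct IntegralHistogram {
        int w, h, bins;
        std::vector<int> sums;  // (h + 1) x (w + 1) x bins prefix counts

        IntegralHistogram(const std::vector<uint8_t>& img, int w_, int h_, int bins_)
            : w(w_), h(h_), bins(bins_),
              sums(static_cast<size_t>(w_ + 1) * (h_ + 1) * bins_, 0)
        {
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    int bin = img[y * w + x] * bins / 256;  // uniform binning
                    for (int b = 0; b < bins; ++b)
                        at(x + 1, y + 1, b) = at(x, y + 1, b) + at(x + 1, y, b)
                                            - at(x, y, b) + (b == bin ? 1 : 0);
                }
        }
        int& at(int x, int y, int b) {
            return sums[(static_cast<size_t>(y) * (w + 1) + x) * bins + b];
        }
        // Count of pixels in bin b inside the window [x0, x1) x [y0, y1).
        int count(int x0, int y0, int x1, int y1, int b) {
            return at(x1, y1, b) - at(x0, y1, b) - at(x1, y0, b) + at(x0, y0, b);
        }
    };

With 640 x 480 and a modest bin count this fits easily in memory, and a 10 x 10 window histogram then costs exactly the same as a 100 x 100 one.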
I have recently been developing an isometric rendering system to map out 3D grids in JavaScript. All of the items on the grid are cubes of equal dimensions; the only difference between them is a texture representing the value at that coordinate. My application requires large grids to be rendered, even though only a small portion is visible in the viewport at once.
Because I am using Canvas, which is slow to draw thousands of shapes per frame, I set my script to loop through each block but only draw its faces if they are 1) adjacent to an empty grid space and 2) inside the viewport. This system works fine for smaller grids, but as my application will need considerably larger ones (1000+ x 1000+ x 128), I will need to add some performance improvements for the final product.
Does anyone who has worked with rendering systems know of any way I can further optimize my engine? One thing I suspect may be effective is avoiding looping through every grid value, even the ones that aren't drawn. However, I do not know the most efficient way to decide whether to visit a grid value or not (I am currently going through EVERY value, then calculating whether it should be drawn).
If I have been too vague, please tell me and I will be happy to elaborate. Thank you for your time and expertise; I am a student and any help will greatly aid my learning.
Some pointers for you: you might want to have a look at classic culling algorithms using structures like octrees (or quadtrees in your case), among others.
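To make the quadtree idea concrete, here is a minimal sketch (C++ for brevity; the logic ports directly to JavaScript, and the depth/size limits are arbitrary placeholders). Items are inserted once with their bounds, and each frame you query with the viewport rectangle so only nearby items are visited:

    #include <memory>
    #include <vector>

    struct Rect {
        int x, y, w, h;
        bool intersects(const Rect& o) const {
            return x < o.x + o.w && o.x < x + w &&
                   y < o.y + o.h && o.y < y + h;
        }
    };

    struct QuadTree {
        Rect bounds;
        std::vector<Rect> items;             // items stored at this node
        std::unique_ptr<QuadTree> child[4];

        explicit QuadTree(Rect b) : bounds(b) {}

        void insert(const Rect& r, int depth = 0) {
            if (depth >= 8 || bounds.w <= 32) { items.push_back(r); return; }
            const int hw = bounds.w / 2, hh = bounds.h / 2;
            const Rect q[4] = { {bounds.x, bounds.y, hw, hh},
                                {bounds.x + hw, bounds.y, hw, hh},
                                {bounds.x, bounds.y + hh, hw, hh},
                                {bounds.x + hw, bounds.y + hh, hw, hh} };
            for (int i = 0; i < 4; ++i)
                if (q[i].intersects(r)) {    // spanning items land in several children
                    if (!child[i]) child[i] = std::make_unique<QuadTree>(q[i]);
                    child[i]->insert(r, depth + 1);
                }
        }

        // Visit only the branches overlapping the viewport.
        void query(const Rect& view, std::vector<Rect>& out) const {
            if (!bounds.intersects(view)) return;
            for (const Rect& r : items)
                if (r.intersects(view)) out.push_back(r);
            for (const auto& c : child)
                if (c) c->query(view, out);
        }
    };

Since spanning items are duplicated into each overlapping child, a query can return the same item more than once; for pure culling that is usually harmless, or you can dedupe with a frame stamp.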
Summary
New to iPhone programming, I'm having trouble picking the right optimization strategy to filter a set of view components in a scrollview with huge content. In what area would my app gain the most performance?
Introduction
My current iPad app-in-progress lets users explore fairly large binary tree structures. The trees contain between 30 and 900 nodes, and when drawn inside a scrollview (with limited zoom) they look like this.
The nodes' contents are stored in a SQLite backed Core Data model. It's a binary tree and if a node has children, there are always exactly two. The x and y positions are part of the model, as are the dimensions of the node connections, shown as dotted lines.
Optimization
Only about 50 nodes fit on the screen at any given time. With the largest trees containing up to 900 nodes, it's not possible to put everything in a scrollview-controlled, zooming UIView; that's a recipe for crashes. So I have to do per-frame filtering of the nodes.
And that's where my troubles start. I don't have the experience to make a well-founded decision between the possible filtering options, and in addition I probably don't know about that really fast special magic buried deep in Objective-C or Cocoa Touch. Because the backing store is close to 200 MB in size (some 90,000 nodes in hundreds of trees), it's very time-consuming to test every single tree on the iPad device. Which is why I'd like to ask you guys for advice.
For all my attempts, I'm putting a filter method in scrollViewDidScroll: and scrollViewDidZoom:. I'm also blocking the main thread with the filter, because I can't show the content without the nodes anyway. But maybe someone has an idea in that area?
Because all the positioning is present in the Core Data model, I might use an NSFetchRequest to do the filtering. Is that really fast, though? I have the impression it's not a very optimized approach.
From what I've tried, the faulted managed objects seem to fit in memory at once, but it might be tricky for the larger trees once their contents start firing faults. Is it a good idea to loop over the NSSet of nodes and see what items should be on screen?
Are there other tricks to gain performance? Do you see ways I could use multithreading to compute the display set faster, even though the model's context was created on the main thread?
Thanks for your advice,
EP.
Ironically, your binary tree could be divided using binary space partitioning (BSP) done in 2D, which would make rendering very fast and keep the number of checks close to the minimum necessary.
As the title says, I'm fleshing out a design for a 2D platformer engine. It's still in the design stage, but I'm worried that I'll run into issues with the renderer, and I want to avoid them if they're likely to be a concern.
I'm using SDL for my base library, and the game will be set up to use a single large array of Uint16 to hold the tiles. These index into a second array of "tile definitions" that are used by all parts of the engine, from collision handling to the graphics routine, which is my biggest concern.
The graphics engine is designed to run at a 640x480 resolution, with 32x32 tiles. There are 21x16 tiles drawn per layer per frame (to handle the extra tile that shows up when scrolling), and there are up to four layers that can be drawn. Layers are simply separate tile arrays, but the tile definition array is common to all four layers.
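In code, the layout I have in mind looks roughly like this (a sketch only; sizes and field names are placeholders, since the definition struct isn't finalized):

    #include <SDL.h>

    enum { MAP_W = 256, MAP_H = 256, NUM_LAYERS = 4 };  // placeholder sizes

    // One shared "tile definition", used by collision and graphics alike.
    struct TileDef {
        bool solid;           // collision flag (placeholder field)
        SDL_Surface* image;   // 32x32 tile graphic
    };

    // Each layer is a separate index array into the common definition table.
    Uint16 layers[NUM_LAYERS][MAP_H][MAP_W];
    TileDef tileDefs[1024];   // placeholder capacity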
What I'm worried about is that I want to be able to take advantage of transparencies and animated tiles with this engine, and as I'm not too familiar with renderer designs, I'm worried that my current solution is going to be too inefficient to work well.
My target FPS is a flat 60 frames per second, and with all four layers being drawn, I'm looking at 21x16x4x60 = 80,640 separate 32x32px tiles needing to be drawn every second, plus however many odd-sized blits are needed for sprites, and this seems just a little excessive. So, is there a better way to approach rendering the tilemap setup I have? I'm looking at the possibility of using hardware acceleration to draw the tilemaps, if it will improve performance much. I'd also like the game to run well on slightly older computers.
If I'm looking for too much, then I don't think that reducing the engine's capabilities is out of the question.
I think the thing that will be an issue is the sheer number of draw calls, rather than the total "fill rate" of all the pixels you are drawing. Remember: that is over 80,000 calls per second that you must make. I think your biggest improvement will come from batching these together somehow.
One strategy to reduce the fill rate of the tiles and layers would be to composite static areas together. For example, if you know an area doesn't need updating, it can be cached. A lot depends on whether the layers are scrolled independently (parallax style).
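A sketch of that caching idea with the SDL 1.2 API (buildLayerCache is a hypothetical helper; here the cache covers just the visible 21 x 16 region and is rebuilt only when one of its tiles changes):

    #include <SDL.h>

    SDL_Surface* buildLayerCache(SDL_Surface* screen, SDL_Surface* tileImages[],
                                 const Uint16* map, int mw, int mh)
    {
        SDL_Surface* cache = SDL_CreateRGBSurface(SDL_HWSURFACE, mw * 32, mh * 32,
                                                  screen->format->BitsPerPixel,
                                                  0, 0, 0, 0);
        for (int y = 0; y < mh; ++y)
            for (int x = 0; x < mw; ++x) {
                SDL_Rect dst = { Sint16(x * 32), Sint16(y * 32), 32, 32 };
                SDL_BlitSurface(tileImages[map[y * mw + x]], NULL, cache, &dst);
            }
        return cache;  // per frame: one blit of `cache` instead of 336 tile blits
    }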
Also, have a look on Google for "dirty rectangles" and see if any of those schemes fit your needs.
Personally, I would just try it and see. This probably won't affect your overall game design, and if you have good separation between logic and presentation, you can optimise the tile drawing till the cows come home.
Make sure to use alpha transparency only on tiles that actually use alpha, and skip drawing blank tiles. Make sure the tile surface color depth matches the screen color depth when possible (not really an option for tiles with an alpha channel), and store tiles in video memory so SDL will use hardware acceleration when it can. Color-key transparency will be faster than a full alpha channel for simple tiles where partial transparency, or blending antialiased edges with the background, isn't necessary.
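For reference, that setup looks roughly like this in SDL 1.2 (prepareTile is a hypothetical helper; magenta as the key color is just an assumption):

    #include <SDL.h>

    SDL_Surface* prepareTile(SDL_Surface* raw, bool hasAlphaChannel)
    {
        if (hasAlphaChannel)
            return SDL_DisplayFormatAlpha(raw);      // keep per-pixel alpha
        SDL_Surface* tile = SDL_DisplayFormat(raw);  // match the screen depth
        // Treat magenta as transparent; RLE acceleration skips keyed runs fast.
        SDL_SetColorKey(tile, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                        SDL_MapRGB(tile->format, 255, 0, 255));
        return tile;   // caller still owns (and should free) `raw`
    }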
On a 500 MHz system you'll get about 6.8 CPU cycles per pixel per layer, or 27 per screen pixel, which (I believe) isn't going to be enough if you have full alpha channels on every tile of every layer, but should be fine if you take shortcuts like those mentioned where possible.
I agree with Kombuwa. If this is just a simple tile-based 2D game, you really ought to lower the standards a bit, as this is not Crysis. 30 FPS is very smooth (look at Command & Conquer 3, which is limited to 30 FPS). Even so, I once wrote a remote desktop viewer that ran at 14 FPS (1900 x 1200) using GDI+ and it was still pretty smooth. I think that for your 2D game you'll probably be okay, especially using SDL.
Could you buffer each complete layer into a surface the size of its view plus an additional tile on all four edges (if you have vertical scrolling), then, when scrolling, reuse that buffer to create a new one minus the first column and draw only the new end column?
This would avoid a lot of needless redrawing.
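A sketch of that with two buffers in SDL 1.2 (scrollLayer and drawNewColumns are hypothetical; dx is the rightward scroll in pixels, assumed non-negative):

    #include <SDL.h>

    void scrollLayer(SDL_Surface*& front, SDL_Surface*& back, int dx)
    {
        SDL_Rect src = { Sint16(dx), 0, Uint16(front->w - dx), Uint16(front->h) };
        SDL_Rect dst = { 0, 0, 0, 0 };
        SDL_BlitSurface(front, &src, back, &dst);  // reuse everything still visible
        // drawNewColumns(back, dx);               // render only the exposed strip
        SDL_Surface* tmp = front; front = back; back = tmp;  // swap for next frame
    }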
Additionally, if you want 60 FPS, you can look up ways to implement frame-skip methods for slower systems, skipping every other or every third draw phase.
I think you will be pleasantly surprised by how many of these tiles you can draw a second. Modern graphics hardware can fill a 1600x1200 framebuffer numerous times per frame at 60 fps, so your 640x480 framebuffer will be no problem. Try it and see what you get.
You should definitely take advantage of hardware acceleration. This will give you 1000x performance for very little effort on your part.
If you do find you need to optimise, then the simplest way is to only redraw the areas of the screen that have changed since the last frame. Sounds like you would need to know about any animating tiles, and any tiles that have changed state each frame. Depending on the game, this can be anywhere from no benefit at all, to a massive saving - it really depends on how much of the screen changes each frame.
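In SDL 1.2 the final step of such a scheme is a single SDL_UpdateRects call. A sketch (presentDirty is a hypothetical helper; the per-rect tile redraw is left as a placeholder):

    #include <SDL.h>
    #include <vector>

    void presentDirty(SDL_Surface* screen, std::vector<SDL_Rect>& dirty)
    {
        // ... redraw the tiles covered by each rect into `screen` here ...
        if (!dirty.empty())
            SDL_UpdateRects(screen, int(dirty.size()), dirty.data());
        dirty.clear();  // start collecting rects for the next frame
    }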
You might consider merging neighbouring tiles with the same texture into a larger polygon with texture tiling (sort of a build process).
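As a sketch of how that build process could look (my own illustration; Span and mergeRuns are hypothetical names), horizontal runs of identical tiles can be collapsed into spans that each become one textured quad at render time:

    #include <cstdint>
    #include <vector>

    struct Span { int x, y, run; uint16_t tile; };

    // Build step: collapse horizontal runs of identical tiles into spans.
    // Each span can then be drawn as one quad with the texture repeated
    // `run` times (e.g. GL_REPEAT), instead of `run` separate tile blits.
    std::vector<Span> mergeRuns(const uint16_t* map, int w, int h)
    {
        std::vector<Span> spans;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ) {
                const int start = x;
                const uint16_t t = map[y * w + x];
                while (x < w && map[y * w + x] == t) ++x;
                spans.push_back({start, y, x - start, t});
            }
        return spans;
    }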
What about decreasing the frame rate to 30 FPS? I think it would be good enough for a 2D game.