We have a map configuration where, at a certain zoom level, we have to plot N features, where N ranges from the hundreds to the thousands. On Internet Explorer, however, if the feature count exceeds 2500, we run into memory issues.
So, is it possible to plot the features on the map progressively, in a manner that avoids these memory issues?
Here are a few options that can improve performance and/or memory usage:
Use clustering (see the Cluster example)
Use an image vector layer (see the Image Vector example)
Use 'postcompose' to draw directly to the canvas, avoiding the overhead of features
Don't use a spatial index on your vector source (useSpatialIndex: false)
Render features on the server using a map server such as GeoServer
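The clustering option can be sketched independently of any mapping library: bucket nearby features into grid cells and render one aggregate feature per cell, so the renderer handles tens of objects instead of thousands. A minimal plain-JavaScript sketch (the feature shape and function name here are illustrative, not the OpenLayers API — in OpenLayers itself you would wrap your source in a cluster source as shown in the official Cluster example):

```javascript
// Grid-based clustering sketch (illustrative, not the OpenLayers API).
// Groups point features into cells of `cellSize` map units and returns one
// aggregate "cluster feature" per non-empty cell, so a layer renders far
// fewer objects than the raw feature count.
function clusterFeatures(features, cellSize) {
  var cells = {};
  features.forEach(function (f) {
    var key = Math.floor(f.x / cellSize) + ':' + Math.floor(f.y / cellSize);
    (cells[key] = cells[key] || []).push(f);
  });
  return Object.keys(cells).map(function (key) {
    var members = cells[key];
    var cx = 0, cy = 0;
    members.forEach(function (f) { cx += f.x; cy += f.y; });
    // One rendered feature per cell: centroid position plus member count.
    return { x: cx / members.length, y: cy / members.length, count: members.length };
  });
}
```

With a few thousand input points this typically collapses to well under a hundred rendered objects, which is what keeps memory usage flat on old browsers.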
I have a collection of network points stored as nodes and edges for use in networkx, but I would like to implement a more advanced visualization tool in a pyqtgraph widget so that I can use it in a GUI designed in PyQt5.
The problem with using matplotlib to visualize the nx network is that the colors are not displayed, gravity cannot be manipulated, etc.
[Figure 1: the network rendered with matplotlib]
The optimal case would be if Cytoscape or Gephi had a backend to allow integration into these sorts of GUIs, since I am able to get things spaced out very well for visualization with those tools after constructing the network in nx.
Both figures show the same data; the only difference is that figure 2 was visualized using Gephi, allowing the nodes to be repelled a little more, making them readable.
Is there a way to:
Adjust repulsion in networkx when NOT manually dictating node placement?
Visualize a network in a pyqtGraph widget? (preferably interactive)
I want users to be able to generate high quality images for publication from their cytoscape.js graphs (in fact, this is the major use case of my application). Since there's no export from cytoscape.js to a vector format, my documentation tells users to just zoom to a high level of resolution before exporting to png, and then my code passes in the current zoom level as the scale parameter to cy.png(), along with full:true. However, at sufficiently high values for scale, no image gets generated. This maximum scale value seems to be different for different graphs, so I assume it's based on some maximum physical dimensions. How can I determine what the maximum scale value should be for any given graph, so my code won't exceed it?
Also, if I have deleted or moved elements such that the graph takes up less physical space than it used to, cy.png() seems to include that blank area in the resulting image even though it no longer contains any elements -- is there any way to avoid that?
By default, cy.png() and cy.jpg() will export the current viewport to an image. To export the fitted graph, use cy.png({ full: true }) or cy.jpg({ full: true }).
The browser enforces limits on the overall size of exported files. The limit depends on the OS and the browser vendor; Cytoscape.js cannot influence it, nor can it calculate it. For large resolutions/scales, the quality difference between PNG and JPG is negligible, so JPG tends to be the better option for large exports.
Aside: while bitmap formats scale with resolution, vector formats (like SVG) scale with the number of graph elements. For large graphs, it may be impossible to export SVG (even if such a feature were implemented).
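Since the exact limit cannot be queried, one pragmatic workaround is to cap the requested scale using the graph's bounding box and a conservative pixel budget that you tune per target browser. A sketch (the default budget below is an illustrative guess, not a specified limit; `cy.elements().boundingBox()` and `cy.png()` are the standard Cytoscape.js calls):

```javascript
// Returns the largest export scale that keeps the rasterized image under a
// pixel budget, since browsers silently fail above some canvas size.
// `maxPixels` is an assumed, conservative budget -- the real ceiling varies
// by browser and OS and has to be tuned (or probed) per environment.
function safeExportScale(bbWidth, bbHeight, desiredScale, maxPixels) {
  maxPixels = maxPixels || 4096 * 4096; // illustrative default, not a spec value
  var maxScale = Math.sqrt(maxPixels / (bbWidth * bbHeight));
  return Math.min(desiredScale, maxScale);
}

// Usage with Cytoscape.js:
// var bb = cy.elements().boundingBox();          // extent of current elements only,
//                                                // so deleted/moved elements add no blank area
// var scale = safeExportScale(bb.w, bb.h, userScale);
// var png = cy.png({ full: true, scale: scale });
```

Because the bounding box is recomputed from the current elements, capping against it also sidesteps exports that are far larger than the graph actually is.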
I am currently using OpenCV's built-in patch-based histogram back projection (cv::calcBackProjectPatch()) to identify regions of a target material in an image. With an image resolution of 640 x 480 and a window size of 10 x 10, processing a single image requires ~1200 ms. While the results are great, this is far too slow for a real-time application (which should have a processing time of no more than ~100 ms).
I have already tried reducing the window size and switching from CV_COMP_CORREL to CV_COMP_INTERSECT to speed up the processing, but have not seen any appreciable speed up. This may be explained by the OpenCV documentation (emphasis mine):
Each new image is measured and then converted into an image array over a chosen ROI. Histograms are taken from this image in an area covered by a "patch" with an anchor at its center, as shown in the picture below. The histogram is normalized using the parameter norm_factor so that it may be compared with hist. The calculated histogram is compared to the model histogram hist using the function cvCompareHist() with the comparison method=method. The resulting output is placed at the location corresponding to the patch anchor in the probability image dst. This process is repeated as the patch is slid over the ROI. *Iterative histogram update by subtracting trailing pixels covered by the patch and adding newly covered pixels to the histogram can save a lot of operations, though it is not implemented yet.*
This leaves me with a few questions:
Is there another library that supports iterative histogram updates?
How significant of a speed-up should I expect from using an iterative update?
Are there any other techniques for speeding up this type of operation?
As the OpenCV documentation excerpt suggests, integral histograms will definitely improve speed. Please take a look at a sample implementation at the following link:
http://smsoftdev-solutions.blogspot.com/2009/08/integral-histogram-for-fast-calculation.html
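The integral-histogram idea can be sketched independently of OpenCV: precompute, per histogram bin, a 2-D cumulative (summed-area) table; the histogram of any rectangular patch then costs O(bins) lookups regardless of patch size, instead of re-scanning every pixel. A plain sketch (not the OpenCV API; the image is assumed to be pre-quantized into bin indices):

```javascript
// Build one summed-area table per bin. `image` is a 2-D array of bin
// indices (e.g. quantized hue values in [0, bins)).
// ih[b][y][x] = count of bin b in the rectangle [0..y) x [0..x).
function buildIntegralHistogram(image, bins) {
  var h = image.length, w = image[0].length;
  var ih = [];
  for (var b = 0; b < bins; b++) {
    ih[b] = [];
    for (var y = 0; y <= h; y++) ih[b][y] = new Array(w + 1).fill(0);
  }
  for (var y = 1; y <= h; y++) {
    for (var x = 1; x <= w; x++) {
      var bin = image[y - 1][x - 1];
      for (var b = 0; b < bins; b++) {
        ih[b][y][x] = ih[b][y - 1][x] + ih[b][y][x - 1]
                    - ih[b][y - 1][x - 1] + (b === bin ? 1 : 0);
      }
    }
  }
  return ih;
}

// Histogram of the patch with top-left (x0, y0) and size w x h,
// via four table lookups per bin -- independent of the patch size.
function patchHistogram(ih, x0, y0, w, h) {
  return ih.map(function (t) {
    return t[y0 + h][x0 + w] - t[y0][x0 + w] - t[y0 + h][x0] + t[y0][x0];
  });
}
```

Building the tables is a single O(width x height x bins) pass; after that, every sliding-window position is cheap, which is where the speed-up over re-computing each patch histogram comes from.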
We currently have a dynamically updated network graph with around 1,500 nodes and 2,000 edges. It's ever-growing. Our current layout engine uses Prefuse - the force directed layout in particular - and it takes about 10 minutes with a hefty server to get a nice, stable layout.
I've looked a little at Graphviz's sfdp algorithm, but haven't tested it yet...
Are there faster alternatives I should look at?
I don't care about the visual appearance of the nodes and edges - we process that separately - just putting x, y on the nodes.
We do need to be able to tinker with the layout properties for specific parts of the graph, for instance, applying special tighter or looser springs for certain nodes.
Thanks in advance, and please comment if you need more specific information to answer!
EDIT: I'm particularly looking for speed comparisons between the layout engine options. Benchmarks, specific examples, or just personal experience would suffice!
I wrote a JavaScript-based graph-drawing library, VivaGraph.js.
It calculates the layout and renders a graph with 2K+ vertices and 8.5K edges in ~10-15 seconds. If you don't need the rendering part, it should be even faster.
Here is a video demonstrating it in action: WebGL Graph Rendering With VivaGraphJS.
Online demo is available here. WebGL is required to view the demo but is not needed to calculate graph layouts. The library also works under node.js and thus could be used as a service.
Example of API usage (layout only):
var graph = Viva.Graph.graph(),
    layout = Viva.Graph.Layout.forceDirected(graph);

graph.addLink(1, 2);
layout.run(50); // runs 50 iterations of graph layout

// print results:
graph.forEachNode(function(node) { console.log(node.position); });
Hope this helps :)
I would have a look at OGDF, specifically http://www.ogdf.net/doku.php/tech:howto:frcl
I have not used OGDF, but I do know that Fast Multipole Multilevel is a good performant algorithm and when you're dealing with the types of runtimes involved with force directed layout with the number of nodes you want, that matters a lot.
Why, among other reasons, that algorithm is awesome: the fast multipole method. The fast multipole method approximates the dense pairwise-interaction computation (the n-body part of the force calculation), reducing its runtime from O(n²) to roughly O(n) at the cost of a small approximation error. Ideally, you'd have code from something like this: http://mgarland.org/files/papers/layoutgpu.pdf but I can't find it anywhere; maybe a CUDA solution isn't up your alley anyway.
Good luck.
The Gephi Toolkit might be what you need: some of its layouts are very fast yet of good quality: http://gephi.org/toolkit/
30 seconds to 2 minutes is enough to lay out such a graph, depending on your machine.
You can use the ForceAtlas layout, or the Yifan Hu Multilevel layout.
For very large graphs (50K+ nodes and 500K+ links), the OpenOrd layout will be the better choice.
In a commercial scenario, you might also want to look at the family of yFiles graph layout and visualization libraries.
Even the JavaScript version of it can perform layouts for thousands of nodes and edges using different arrangement styles. The "organic" layout style is an implementation of a force-directed layout algorithm, similar in nature to the one used in Neo4j's browser application. But there are many more layout algorithms available that can give better visualizations for certain types of graph structures and diagrams. Depending on the settings and the structure of the problem, some of the algorithms take only seconds, while more complex implementations can also bring your JavaScript engine to its knees. The Java- and .NET-based variants still perform quite a bit better, as of today, but the JavaScript engines are catching up.
You can play with these algorithms and settings in this online demo.
Disclaimer: I work for yWorks, which is the maker of these libraries, but I do not represent my employer on SO.
I would take a look at http://neo4j.org/. It's open source, which is beneficial in your case, since you can customize it to your needs. The GitHub account can be found here.
The question
Is there a known benchmark or theoretical substantiation on the optimal (rendering speed wise) image size?
A little background
The problem is as follows: I have a collection of very large images, thousands of pixels wide in each dimension. They should be presented to the user and manipulated somehow. In order to improve the performance of my web app, I need to slice them into tiles. And here is where my question arises: what should the dimensions of these slices be?
You can only find out by testing; every browser will have different performance parameters, and your user base may have anything from a mobile phone to a 16-core Xeon desktop. The larger determining factor may actually be the network performance in loading new tiles, which is completely dependent upon how you are hosting and who your users are.
As the others already said, you can save a lot of research by duplicating the sizes already used by similar projects: Google Maps, Bing Maps, any other mapping system, not forgetting some of the gigapixel projects like gigapan.
It's hard to give a definitive dimension, but I successfully used 256x256 tiles.
This is also the size used by Microsoft Deep Zoom technology.
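Whatever tile size you settle on, the slicing itself is just an integer grid computation. A small sketch (the function name is illustrative; 256 is only the common convention used by Google Maps, Bing Maps, and Deep Zoom, not a hard requirement):

```javascript
// Compute the tile grid for an image of the given size. Edge tiles are
// clipped, so every pixel is covered exactly once.
function tileGrid(imageWidth, imageHeight, tileSize) {
  tileSize = tileSize || 256; // common convention, not a requirement
  var tiles = [];
  for (var y = 0; y < imageHeight; y += tileSize) {
    for (var x = 0; x < imageWidth; x += tileSize) {
      tiles.push({
        x: x,
        y: y,
        w: Math.min(tileSize, imageWidth - x),
        h: Math.min(tileSize, imageHeight - y)
      });
    }
  }
  return tiles;
}
```

Running this for each zoom level of a downscaled image pyramid gives you the same layout that the mapping and gigapixel projects mentioned above use.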
In absence of any other suggestions, I'd just use whatever Google Maps is using. I'd imagine they would have done such tests.