react-native-maps marker clustering or performance improvement

I use react-native-maps and custom markers.
The number of markers is over 1,000.
Many articles say that a react-native-maps clustering library such as react-native-maps-super-cluster helps to optimize performance.
However, I found that the clustering function itself causes delays in app performance.
So I would like to improve performance with other options (e.g. displaying only the markers in the current viewport?).
Could you help me?

tracksViewChanges={false}
tracksViewChanges: Sets whether this marker should track view changes. It's recommended to turn it off whenever it's possible to improve custom marker performance.
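For example, here is a minimal sketch (assuming a plain array of marker data and a placeholder custom marker view) that combines both ideas: tracksViewChanges={false} on each custom marker, plus rendering only the markers inside the current viewport via onRegionChangeComplete:

import React, { useState } from 'react';
import { View } from 'react-native';
import MapView, { Marker } from 'react-native-maps';

// allMarkers is assumed to be your own array of { id, latitude, longitude } items
export default function SparseMarkerMap({ allMarkers }) {
  const [visibleMarkers, setVisibleMarkers] = useState([]);

  // Keep only the markers inside the currently visible region
  const onRegionChangeComplete = (region) => {
    const latMin = region.latitude - region.latitudeDelta / 2;
    const latMax = region.latitude + region.latitudeDelta / 2;
    const lngMin = region.longitude - region.longitudeDelta / 2;
    const lngMax = region.longitude + region.longitudeDelta / 2;
    setVisibleMarkers(
      allMarkers.filter(
        (m) =>
          m.latitude >= latMin && m.latitude <= latMax &&
          m.longitude >= lngMin && m.longitude <= lngMax
      )
    );
  };

  return (
    <MapView style={{ flex: 1 }} onRegionChangeComplete={onRegionChangeComplete}>
      {visibleMarkers.map((m) => (
        <Marker
          key={m.id}
          coordinate={{ latitude: m.latitude, longitude: m.longitude }}
          tracksViewChanges={false} // stop re-rendering the custom view on every frame
        >
          {/* placeholder custom marker view */}
          <View style={{ width: 24, height: 24, borderRadius: 12, backgroundColor: 'tomato' }} />
        </Marker>
      ))}
    </MapView>
  );
}

If the initial render of 1,000+ markers is still too slow, combining viewport filtering with clustering only at low zoom levels is another option.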

Related

Choosing game model design

I need help designing a game where characters have:
universal actions (sit, jump, etc.), which are the same across all characters; roughly 50 animations
unique attack patterns (different attacks); roughly 6 animations per character
item usage attacks (the same across all characters); roughly 4 animations per item, which could scale to 500+
What would be the best way to design this? I use Blender for animations, and I just started a week ago.
I'm thinking of either using one model for everything and limiting the actions, or creating multiple models and importing those separately. Any help is appreciated!
Edit: I'm also considering optimization, since I don't want lag to occur; I'm making an MMO-like game.
There is an initial release (MIT License) of the GodotAnimationRetargeting module that I referenced in the comments. Update: there is a GDScript version now.
Usually in Godot you have an AnimationPlayer with the animations tied to a given model, which means you would have to add them for all the models. However, this module allows you to apply animations from one AnimationPlayer to another model. You can also apply them partially (e.g. only the rotation, position, or scaling of bones).
That should help you apply a common set of animations to different models.
Being a module, it requires compiling Godot with it. See Compiling in the Godot docs.

How to improve performance of keplergl?

Is there a way to improve the performance of kepler.gl by deactivating some features that are not required in the use case?
Ideas:
Remove the side bar (not just hiding it, but not loading it at all)
Disable object interactivity (hover/click effects) at high zoom levels
Only load objects in the current viewport (plus some space around it)
One simple solution:
Use the grid-based aggregation for higher zoom levels.
This improved the performance when displaying the data set linked in the question from "almost frozen" at the highest zoom level to "slightly laggy".
I'm not sure, however, how to subscribe to viewport zoom level changes.
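As a hedged sketch of the zoom-subscription part (assuming the standard kepler.gl Redux setup with the instance mounted under the id 'map'; the selector path is an assumption about where kepler.gl keeps its map state in the store):

import React from 'react';
import { useSelector } from 'react-redux';

// Assumption: kepler.gl keeps its state under state.keplerGl[<instanceId>].mapState
function useKeplerZoom(instanceId = 'map') {
  return useSelector((state) =>
    state.keplerGl[instanceId] ? state.keplerGl[instanceId].mapState.zoom : undefined
  );
}

// React to zoom changes, e.g. to switch layers to grid aggregation
// or to disable interactivity above a zoom threshold
function ZoomWatcher() {
  const zoom = useKeplerZoom();
  React.useEffect(() => {
    if (zoom !== undefined) {
      console.log('kepler.gl zoom is now', zoom);
      // dispatch your own layer/interaction config changes here
    }
  }, [zoom]);
  return null;
}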

How to add millions of points into a Mapbox layer

I am new to Mapbox, and I have run into a problem I would really like help with.
I am creating a population density map of a city. There are 53,000+ polygons for this city, and I used ArcGIS to generate random points in every polygon, which creates 4 million points in total, and the GeoJSON file is over 600 MB. I want to make an MBTiles file with Mapbox's TileMill.
I tried generating a layer with 1/20 of the points, about 200,000, which can be added to TileMill, but that is not what I want.
When I try to add the 4-million-point layer to TileMill, it crashes...
How should I reduce the size of the 4 million points?
Or is there a better way to handle this kind of "millions of points" situation?
Any suggestions from developers experienced with this scale of population density data would be greatly appreciated. Thank you very much.
Kind of a late answer, but if you need to deal with vector points at that scale, you might want to consider using Mapbox vector tiles (protocol buffers) with mapbox-gl.
Workflow:
get Mapbox Studio and create a project
upload your data into a new project and host it in the cloud (Mapbox) or on your own server
implement a mapbox-gl-js project and bring in your layer of 4 million vector points
drink a cold beer
Please note that Mapbox GL uses WebGL, which is really bleeding-edge stuff; if you need to support older browsers then go with tmcw's answer.
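For step 3, a minimal Mapbox GL JS sketch of bringing in such an uploaded tileset (the access token, tileset id, and source-layer name are placeholders you would replace with your own):

mapboxgl.accessToken = 'YOUR_ACCESS_TOKEN'; // placeholder

const map = new mapboxgl.Map({
  container: 'map',
  style: 'mapbox://styles/mapbox/light-v11',
  center: [-122.4, 37.8],
  zoom: 10
});

map.on('load', () => {
  // Vector tile source created from the uploaded point data (placeholder tileset id)
  map.addSource('population-points', {
    type: 'vector',
    url: 'mapbox://youraccount.your-tileset-id'
  });

  // Draw the points as tiny circles; 'source-layer' must match the layer
  // name inside the tileset
  map.addLayer({
    id: 'population-points-layer',
    type: 'circle',
    source: 'population-points',
    'source-layer': 'points',
    paint: {
      'circle-radius': 1.5,
      'circle-color': '#e55e5e'
    }
  });
});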
"When I try to add the 4-million-point layer to TileMill, it crashes"
TileMill is designed for this and will not crash if your data is properly indexed and formatted. The reason why this isn't working usually boils down to "your data isn't indexed". If you want to use a shapefile, use shapeindex to index it; otherwise, import your data into PostGIS and make sure the table has a correct index.

What's the fastest force-directed network graph engine for large data sets? [duplicate]

We currently have a dynamically updated network graph with around 1,500 nodes and 2,000 edges. It's ever-growing. Our current layout engine uses Prefuse (the force-directed layout in particular), and it takes about 10 minutes with a hefty server to get a nice, stable layout.
I've looked a little at Graphviz's sfdp algorithm, but haven't tested it yet...
Are there faster alternatives I should look at?
I don't care about the visual appearance of the nodes and edges - we process that separately - just putting x, y on the nodes.
We do need to be able to tinker with the layout properties for specific parts of the graph, for instance, applying special tighter or looser springs for certain nodes.
Thanks in advance, and please comment if you need more specific information to answer!
EDIT: I'm particularly looking for speed comparisons between the layout engine options. Benchmarks, specific examples, or just personal experience would suffice!
I wrote a JavaScript-based graph drawing library VivaGraph.js.
It calculates the layout and renders a graph with 2K+ vertices and 8.5K edges in ~10-15 seconds. If you don't need the rendering part, it should be even faster.
Here is a video demonstrating it in action: WebGL Graph Rendering With VivaGraphJS.
An online demo is available here. WebGL is required to view the demo but is not needed to calculate graph layouts. The library also works under node.js, so it could be used as a service.
Example of API usage (layout only):
var graph = Viva.Graph.graph(),
    layout = Viva.Graph.Layout.forceDirected(graph);

graph.addLink(1, 2);
layout.run(50); // runs 50 iterations of graph layout

// print results:
graph.forEachNode(function(node) { console.log(node.position); });
Hope this helps :)
I would have a look at OGDF, specifically http://www.ogdf.net/doku.php/tech:howto:frcl
I have not used OGDF, but I do know that FMMM (Fast Multipole Multilevel Method) is a well-performing algorithm, and when you're dealing with the runtimes involved in force-directed layout at the number of nodes you want, that matters a lot.
Why, among other reasons, that algorithm is awesome: the fast multipole method. The fast multipole method approximates the all-pairs (n-body) force computation, reducing its O(n^2) cost to roughly O(n log n) with only a small approximation error. Ideally, you'd have code from something like this: http://mgarland.org/files/papers/layoutgpu.pdf but I can't find it anywhere; maybe a CUDA solution isn't up your alley anyway.
Good luck.
The Gephi Toolkit might be what you need; some layouts are very fast yet with good quality: http://gephi.org/toolkit/
30 seconds to 2 minutes is enough to lay out such a graph, depending on your machine.
You can use the ForceAtlas layout or the Yifan Hu Multilevel layout.
For very large graphs (50K+ nodes and 500K+ links), the OpenOrd layout will be more suitable.
In a commercial scenario, you might also want to look at the family of yFiles graph layout and visualization libraries.
Even the JavaScript version of it can perform layouts for thousands of nodes and edges using different arrangement styles. The "organic" layout style is an implementation of a force-directed layout algorithm similar in nature to the one used in Neo4j's browser application. But there are a lot more layout algorithms available that can give better visualizations for certain types of graph structures and diagrams. Depending on the settings and structure of the problem, some of the algorithms take only seconds, while more complex implementations can also bring your JavaScript engine to its knees. The Java and .NET-based variants still perform quite a bit better, as of today, but the JavaScript engines are catching up.
You can play with these algorithms and settings in this online demo.
Disclaimer: I work for yWorks, which is the maker of these libraries, but I do not represent my employer on SO.
I would take a look at http://neo4j.org/. It's open source, which is beneficial in your case, as you can customize it to your needs. The GitHub account can be found here.

Optimizing Actionscript performance

I am setting out on a visualization project that will generate 1,000+ sprites from dynamic data. The toolkit I am using (Flare) requires some optimization. I am trying to figure out optimization techniques for Flash. How can I make Flash run fast when there are so many sprites on the stage? Or is there an optimization technique that doesn't involve generating so many sprites?
One good approach is to freeze animations that are not visible to the user. The complication is that you need to remember the state from which each animation has to resume, or derive it from the current state of the whole application. Since you generate so many sprites, make sure you group them logically; this will make the freezing logic much easier to implement.