Apache Ignite: Controlling data distribution

I want to ask a question:
How does Apache Ignite distribute data?
How can I control the distribution in Apache Ignite?
For example, I want to distribute more data to some nodes (because they have more memory and can hold more data), and less data to other nodes.
Thank you!!

If you want to do this for one cache, you can implement your own version of the affinity function (https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function), but this is not recommended because it will not scale. If you just want to control which nodes a new cache is mapped to, you can try nodeFilter in CacheConfiguration.
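For illustration, here is a minimal sketch of the nodeFilter approach, assuming the larger nodes are tagged at startup with a user attribute (the attribute name "node.role" and the cache name are made up for this example):

    import java.util.Collections;

    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    // On the "big" nodes, set a user attribute when the node is started.
    IgniteConfiguration nodeCfg = new IgniteConfiguration()
        .setUserAttributes(Collections.singletonMap("node.role", "big-memory"));

    // The cache is then deployed only on nodes that carry the attribute.
    CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("bigCache");
    cacheCfg.setNodeFilter(node -> "big-memory".equals(node.attribute("node.role")));

The same attribute-based filter can be reused by several caches that should live only on the larger nodes.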

Related

Apache Ignite persist to disk

Is there any easy way in Ignite to persist to disk after the Ignite servers are up and running and filled with data?
I have seen https://apacheignite.readme.io/docs/distributed-persistent-store#section-usage but it seems you need to supply the XML property at startup of your Ignite topology in order to persist to disk.
There's no easy way I can think of. You would need to start new nodes with a persistent data region and somehow transfer the data into newly created persistent caches on those nodes. The easiest way would be to create them as part of a new cluster.
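For reference, a persistent data region has to be configured in DataStorageConfiguration before the node starts, which is why it cannot simply be switched on for a cluster that is already running in memory. A minimal sketch, assuming a recent Ignite 2.x (the region name is made up):

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    storageCfg.setDefaultDataRegionConfiguration(
        new DataRegionConfiguration()
            .setName("persistent-region")     // hypothetical region name
            .setPersistenceEnabled(true));    // must be set before the node starts

    IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(storageCfg);

    Ignite ignite = Ignition.start(cfg);
    ignite.cluster().state(ClusterState.ACTIVE); // persistent clusters start inactive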

What are the options to bulk/batch load data into Apache Geode(Gemfire)?

We need to load millions of key/values into Apache Geode and we'd like to know what some of the available options are. Our values happen to be in the 256 KB range.
There are several options depending on your application requirements/SLAs or whether you need to perform conversion or other transformations, etc.
Out-of-the-box, Apache Geode provides the Cache & Region Snapshot Service. This is useful when you want to migrate data from one existing Apache Geode cluster to another, for instance. It is not so useful if your data is coming from an external source, like an RDBMS.
Another option is to lazily load the data based on need. This can be accomplished by implementing the CacheLoader interface and registering the CacheLoader with a Region. Obviously, you could create a CacheLoader implementation that intelligently loads a block of data based on some rules/criteria, in addition to loading and returning the single value of interest for the current request.
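As a rough sketch of that lazy-loading approach (the class name and the backing-store lookup are hypothetical), a CacheLoader only has to implement load():

    import org.apache.geode.cache.CacheLoader;
    import org.apache.geode.cache.CacheLoaderException;
    import org.apache.geode.cache.LoaderHelper;

    // Loads a missing value from an external system (RDBMS, REST service, ...) on a cache miss.
    public class ExternalStoreLoader implements CacheLoader<String, byte[]> {
        @Override
        public byte[] load(LoaderHelper<String, byte[]> helper) throws CacheLoaderException {
            String key = helper.getKey();
            return fetchFromBackingStore(key); // hypothetical lookup in the external source
        }

        private byte[] fetchFromBackingStore(String key) {
            return new byte[0]; // placeholder
        }

        @Override
        public void close() {
            // release any connections to the backing store here
        }
    }

Registering it on a region would then look something like regionFactory.setCacheLoader(new ExternalStoreLoader()).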
A lot of times, users create an external, custom conversion process or tool to extract, transform and bulk load (ETL) a bunch of data into Apache Geode. This is typical in complex use cases or requirements. However, it is highly advisable to use a framework/tool like the following.
Spring XD (now Spring Cloud Data Flow on Pivotal's Cloud Foundry (PCF)) is a great ETL tool and pipeline for creating stream-based applications. Spring XD / SCDF provides many different options for "sources" and "sinks" (e.g. GemFire Server). In addition to sources & sinks, you can even "tap" the stream to process the data with "Processors". So whether you are doing real-time streaming or batch-oriented data operations (e.g. bulk loads), Spring XD is a great option.
I am sure Google might provide other answers on how to perform ETL with a KeyValue store like Apache Geode.
Hope this helps get you going.
Cheers,
John
We have very limited options to load Gemfire regions:
1) Spring Batch:
Create a Gemfire writer to load data and remove data (see the sketch below)
Create a batch configuration and load it
2) Apache Spark
https://www.linkedin.com/pulse/fast-data-access-using-gemfire-apache-spark-part-vaquar-khan-/
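Whichever tool drives the load (Spring Batch, Spark, or a plain client), the write path into a region usually comes down to batched putAll calls. A rough client-side sketch, with a made-up locator address, region name and batch size:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;
    import org.apache.geode.cache.client.ClientRegionShortcut;

    public class BulkLoader {
        public static void main(String[] args) {
            ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 10334)    // hypothetical locator
                .create();

            Region<String, byte[]> region = cache
                .<String, byte[]>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("bulkRegion");                    // hypothetical region name

            Map<String, byte[]> batch = new HashMap<>();
            for (int i = 0; i < 1_000_000; i++) {
                batch.put("key-" + i, new byte[256 * 1024]); // ~256 KB values, as in the question
                if (batch.size() == 100) {                   // flush in small batches to bound memory
                    region.putAll(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                region.putAll(batch);
            }
            cache.close();
        }
    }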

Can we copy Apache Ignite Cluster to another Ignite cluster?

I want to back up an entire Ignite cluster so that the backup cluster can be used if the original (active) cluster goes down. Is there any approach for this?
If you need two separate clusters with replication across data centers, it would be better to look at GridGain, which supports Datacenter Replication.
Unfortunately, Ignite does not support DR.
With Apache Ignite you can logically divide your cluster into two zones to guarantee that every zone contains a full copy of the data. However, there is no way to choose the primary node for a partition manually. See AffinityFunction and the affinityBackupFilter() method of the standard implementations.
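As a rough illustration of the two-zone idea (assuming a recent Ignite 2.x; the attribute name "ZONE" and the cache name are made up), RendezvousAffinityFunction can be given a backup filter that forces primary and backup copies into different zones:

    // On every node, tag the node with the zone it belongs to.
    IgniteConfiguration nodeCfg = new IgniteConfiguration()
        .setUserAttributes(Collections.singletonMap("ZONE", "zone-a")); // "zone-b" in the other zone

    // Backups of a partition are placed on nodes whose ZONE attribute differs from the primary's.
    RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
    aff.setAffinityBackupFilter(new ClusterNodeAttributeAffinityBackupFilter("ZONE"));

    CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("zonedCache");
    cacheCfg.setBackups(1);
    cacheCfg.setAffinity(aff);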
As answered above, a ready-made solution is only available in the paid version. Open source Apache Ignite provides the ability to take a cluster-wide snapshot. You can add a cron job in your Ignite cluster to take this snapshot, and another job to copy the snapshot data to object storage such as S3.
On the other side, you download this data node by node into the work directories of the respective nodes, following the manual restore procedure, and start the cluster. It should activate automatically once all baseline nodes have started successfully, and your cluster is ready to use.
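For reference, a cluster-wide snapshot can also be triggered programmatically, assuming Ignite 2.11 or newer (the snapshot name is made up):

    Ignite ignite = Ignition.ignite(); // assumes a node is already started in this JVM

    // Triggers a cluster-wide snapshot; the future completes once every
    // baseline node has written its part of the snapshot.
    ignite.snapshot().createSnapshot("nightly_backup").get();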

How to do dynamic scaling using pg_shard

I am doing database scaling using PostgreSQL.
Currently I am using pg_shard for scaling and am able to do sharding and replication. I have tested the example mentioned in the README file of pg_shard.
But I need to dynamically scale the cluster as new machines are added or old ones are retired. I am using Google Cloud VMs to set up the database, so once one VM is filled with data I want to set up a new instance with the same configuration.
That is, if the current machine size is 4 GB and it runs out of memory, it should create one more VM with 4 GB and the next entries should go there.
I have gone through http://slideplayer.com/slide/4896815/ and after reading it I understood that this is possible, but the steps are not mentioned anywhere.
How to achieve this using pg_shard?
I got the answer myself.
We can use CitusDB for this.
CitusDB is installed with an extension called "shard_rebalancer", which helps you to move the shards around when new nodes are added to the cluster. For this, you need to follow the installation instructions for CitusDB.
In the CitusDB documentation, you can find the related information for the shard rebalancer functions (i.e., rebalance_table_shards and replicate_table_shards).
In simpler words, you must follow these steps:
Add CitusDB node(s) to the cluster
Add the IPs (or host names) to pg_worker_list.conf
Reload the master node configuration, so that the master becomes aware of the new worker node(s)
Run "SELECT rebalance_table_shards('tablename')" on the master node.

Accumulo -- Adding a new node

I'm trying to learn Accumulo, but I have a couple of questions that I couldn't find direct answers to:
First, can we add a new server to an existing Accumulo system without any downtime? If yes, will the new node have its share of the data arranged by the master? Since it has failure recovery, I believe that will be automatic.
Can we define the number of replicas, or is the whole data set shared with some failure-recovery mechanism by itself? How can I learn the details of the replication and data distribution process?
Thanks a lot :)
Yes, you can dynamically add/remove worker nodes at any time. They just need to have the same configuration options available to them so that they can join the cluster (shared secret, zookeeper quorum, etc... basically, the same accumulo-site.xml that you are using).
By default, the "master" process will assign tablets to each "tablet server" processes so that each host will be serving roughly the same amount of data.
Not sure I understand your second question, but Accumulo generally uses HDFS for its backing store, which handles replication and data recovery at the "file" level.