ChronicleMap Recovery with a multi-process application

We are evaluating ChronicleMap, and our application runs in cluster mode with anywhere from 5 to 45 nodes. The plan is to have the ChronicleMap persisted in a shared NFS folder so that all the nodes can read/write it.
With that said, there is a fair chance that individual nodes could go down for various reasons in the middle of a read/write operation. I have some questions:
If node-1 goes down during a write operation, can another healthy node-2 in the cluster still continue to read/write to the files?
Let's say we implement some logic to detect a server crash and call .recoverPersistedTo() on restart. Will this cause any issues while other healthy nodes in the cluster are reading/writing to the files? The reason I ask is that the documentation says:
“You must ensure that no other process is accessing the Chronicle Map
store when calling .recoverPersistedTo()”
I have read that using .recoverPersistedTo() in place of .createPersistedTo() is not good practice, but what are the downsides?
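For context, we open the map roughly like this (a simplified sketch of our usage; the key/value types, sizes and file path are just placeholders):

import java.io.File;
import java.io.IOException;

import net.openhft.chronicle.map.ChronicleMap;

public class MapOpener {
    public static void main(String[] args) throws IOException {
        File store = new File("/shared/maps/orders.dat"); // placeholder path in the shared folder

        // Normal open: creates the persisted file if absent, otherwise maps the existing store.
        ChronicleMap<String, String> map = ChronicleMap
                .of(String.class, String.class)
                .name("orders")
                .entries(1_000_000)
                .averageKey("order-12345678")
                .averageValue("some serialized order payload")
                .createPersistedTo(store);

        // After a suspected crash, and ONLY when no other process has the file mapped,
        // the idea would be to reopen with the same builder configuration via:
        // .recoverPersistedTo(store, true);

        map.close();
    }
}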

First of all, we (Chronicle) don't support putting Chronicle Map files on NFS (as we use memory mapping and NFS is known to cause problems with it). Additionally, trying to use recovery on NFS will cause data corruption as there's no adequate file locking on NFS, and recovery tries to lock the file to prevent simultaneous recovery by multiple processes. In general, open source Chronicle Map is supposed to be used by multiple processes on the same host.
The solution to your problem is the commercial Chronicle Map Enterprise, which supports map replication between nodes; please contact sales@chronicle.software for details.

Related

Synchronize multiple instances of Spring Cache with a Redis lock

I'm building a Spring Boot application that uses Spring Cache with a Redis backing store and needs to synchronize the updates made to the cache.
The caching is not done on the fly, but by a scheduled process that updates the cache periodically.
The algorithm I came up with is:
periodically the instances will check if the Redis cache is older than some predetermined time
if that's the case, the instance will try to acquire a lock on some Redis key
if the instance successfully locks the key, it will then proceed with the update
if some other instance already locked the key, move on
all instances can still read the cache
Everything is more or less already built; all I need is to implement the locking/releasing mechanism.
Spring Cache is using Lettuce to interact with Redis. What is the best way to get a connection to Redis and manage the locking mechanism?
As you may already be aware, Spring's Cache Abstraction provides simple coordination amongst multiple threads in a single Spring [Boot] application process using the sync attribute on the @Cacheable annotation (see the ref doc).
NOTE: Despite the comment ("... use the sync attribute to instruct the underlying cache provider to lock the cache entry while the value is being computed. As a result, only one thread is busy computing the value, while the others are blocked until the entry is updated in the cache.") in the documentation, the locking mechanics are handled by the core framework itself and, in most cases, not by the provider. Anyway...
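For reference, the attribute in question looks roughly like this (a minimal, hypothetical example; the service, cache name and types are made up):

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // sync = true: within this single application instance, only one thread computes
    // the value for a given key; other threads block until the entry is in the cache.
    @Cacheable(cacheNames = "reports", sync = true)
    public Report buildReport(String reportId) {
        return expensiveComputation(reportId);
    }

    private Report expensiveComputation(String reportId) {
        return new Report();
    }

    static class Report {
    }
}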
However, this "coordination" is only per-process and will not work for multiple Spring [Boot] application instances, or (OS) JVM processes. In this case, you need some form of distributed locking across your multiple Spring [Boot] application instances to coordinate access to shared cache entries stored in the single Redis server (cluster) shared by your Spring [Boot] application instances.
I am no Redis expert (I am still learning), but I am familiar with similar NoSQL stores (Apache Geode/VMware GemFire, Hazelcast, etc) and distributed locking mechanisms. I see that distributed locking is possible to achieve with Redis as well. In a quick search, I found "Distributed Locking" in Redis, and specifically, "Building a lock in Redis". This is probably the best way to go.
In addition, if you want to make this distributed locking automatically/transparently available through Spring's Cache Abstraction, then, as one idea, you could create a custom AOP Aspect and weave this Aspect together with the framework-provided Caching Aspect (Interceptor), being conscious of ordering.
Alternatively, you could implement wrapper implementations for the Spring Cache and CacheManager SPI interfaces that implement distributed locking on top of the core Redis Cache and CacheManager provider implementations provided by Spring Boot/Spring Data Redis.
Of course, there are multiple ways to go about this. Just tossing out more ideas, but have a look at the distributed locking information in the book.
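If you follow the "Building a lock in Redis" approach mentioned above, a minimal sketch with the Lettuce client might look like the following (the lock key, TTL and Redis URI are assumptions; production code would typically release the lock atomically with a small Lua script, as the Redis documentation describes):

import java.util.UUID;

import io.lettuce.core.RedisClient;
import io.lettuce.core.SetArgs;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class RedisLockSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // assumed Redis location
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            RedisCommands<String, String> redis = connection.sync();

            String lockKey = "cache-refresh-lock";        // hypothetical lock key
            String token = UUID.randomUUID().toString();  // identifies this lock owner

            // SET key value NX PX <ttl>: only one instance succeeds; the TTL guarantees
            // the lock is eventually released even if the holder crashes mid-refresh.
            String result = redis.set(lockKey, token, SetArgs.Builder.nx().px(30_000));
            if ("OK".equals(result)) {
                try {
                    // ... refresh the cache here ...
                } finally {
                    // Release only if we still own the lock; a Lua script would make this
                    // check-and-delete atomic.
                    if (token.equals(redis.get(lockKey))) {
                        redis.del(lockKey);
                    }
                }
            }
            // else: another instance holds the lock; skip this refresh cycle and keep reading the cache.
        } finally {
            client.shutdown();
        }
    }
}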

Ceph Object Gateway: what is the best backup strategy?

I have a Ceph cluster managed by Rook with a single RGW store over it. We are trying to figure out the best backup strategy for this store. We are considering the following options: using rclone to back up objects via the S3 interface; using s3fs-fuse (we haven't tested it yet, but s3fs-fuse is known to be not reliable enough); and using NFS-Ganesha to re-export the RGW store as an NFS share.
We are going to have quite a lot of RGW users and quite a lot of buckets, so all three solutions do not scale well for us. Another possibility is to perform snapshots of RADOS pools backing the RGW store and to backup these snapshots, but the RTO will be much higher in that case. Another problem with snapshots is that it does not seem possible to perform them consistently across all RGW-backing pools. We never delete objects from the RGW store, so this problem does not seem to be that big if we start snapshotting from the metadata pool - all the data it refers to will remain in place even if we create a snapshot on the data pool a bit later. It won’t be super consistent but it should not be broken either. It’s not entirely clear how to restore single objects in a timely manner using this snapshotting scheme (to be honest, it’s not entirely clear how to restore using this scheme at all), but it seems to be worth trying.
What other options do we have? Am I missing something?
We're planning to implement Ceph in 2021.
We don't expect a large number of users and buckets, initially.
While waiting for https://tracker.ceph.com/projects/ceph/wiki/Rgw_-_Snapshots, I successfully tested this solution to address the Object Store protection by taking advantage of multisite configuration + sync policy (https://docs.ceph.com/en/latest/radosgw/multisite-sync-policy/) in the "Octopus" version.
Assuming you have all zones in the Prod site Zone Sync'd to the DRS:
create a Zone in the DRS, e.g. "backupZone", not Zone Sync'd from or to any of the other Prod or DRS zones;
the endpoints for this backupZone are on 2 or more DRS cluster nodes;
using rclone (https://rclone.org/s3/), write a bash script: for each of the "bucket"s in the DRS zones, create a version-enabled "bucket"-p in the backupZone and schedule a sync, e.g. twice a day, from "bucket" to "bucket"-p;
protect access to the backupZone endpoints so that no ordinary user (or integration) can reach them; they should only be accessible from the other nodes in the cluster (obviously) and from the server running the rclone-based script;
when there is a failure, just recover all the objects from the *-p buckets, once again using rclone, to the original buckets or to a filesystem.
This protects from the following failures:
Infra:
Bucket or pool failure;
Object pervasive corruption;
Loss of a site
Human error:
Deletion of versions or objects;
Removal of buckets
Elimination of entire Pools
Notes:
Only the latest version of each object is sync'd to the protected (*-p) bucket, but if the script runs several times you have the latest states of the objects through time;
when an object is deleted in the prod bucket, rclone just flags the object with a DeleteMarker upon sync;
this does not scale!! As the number of buckets increases, the time to sync becomes untenable.

How to set up an Akka.NET cluster when I do not really need persistence?

I have a fairly simple Akka.NET system that tracks in-memory state, but contains only derived data. So any actor can, on startup, load its up-to-date state from a backend database and then start receiving messages and keep its state from there. So I can just let actors fail and restart the process whenever I want. It will rebuild itself.
But... I would like to run across multiple nodes (mostly for the memory requirements) and I'd like to increase/decrease the number of nodes according to demand. Also for releasing a new version without downtime.
What would be the most lightweight (in terms of Persistence) setup of clustering to achieve this? Can you run Clustering without Persistence?
This is not a single question, so let me answer them one by one:
So I can just let actors fail and restart the process whenever I want - yes, but keep in mind that a hard reset of the process is a lot more expensive than a graceful shutdown. In distributed systems, if your node is going down, it's better for it to communicate that to the rest of the nodes beforehand than to require them to detect the dead node - that detection is part of node failure detection and can take some time (even close to a minute).
I'd like to increase/decrease the number of nodes according to demand - this is standard behavior for a cluster. In the case of Akka.NET, depending on which feature set you are going to use, you may sometimes need to specify an upper bound on the cluster size.
Also for releasing a new version without downtime. - most of the cluster features can be scoped to a particular set of nodes using so-called roles. Each node can have its own set of roles, which can be used to describe what services it provides and to detect whether other nodes have the required capabilities. For that reason you can use roles for things like versioning.
Can you run Clustering without Persistence? - yes, and this is a default configuration (in Akka, cluster nodes don't need to use any form of persistent backend to work).

Zookeeper vs In-memory-data-grid vs Redis

I've found different ZooKeeper definitions across multiple resources. Maybe some of them are taken out of context, but please take a look at them:
A canonical example of Zookeeper usage is distributed-memory computation...
ZooKeeper is an open source Apache™ project that provides a centralized infrastructure and services that enable synchronization across a cluster.
Apache ZooKeeper is an open source file application program interface (API) that allows distributed processes in large systems to synchronize with each other so that all clients making requests receive consistent data.
I've worked with Redis and Hazelcast, so it would be easier for me to understand ZooKeeper by comparing it with them.
Could you please compare Zookeeper with in-memory-data-grids and Redis?
If it's about distributed-memory computation, how does ZooKeeper differ from in-memory-data-grids?
If it's about synchronization across a cluster, then how does it differ from all the other in-memory storages? The same in-memory-data-grids also provide cluster-wide locks. Redis also has some kind of transactions.
If it's only about consistent in-memory data, then there are other alternatives. IMDGs allow you to achieve the same, don't they?
https://zookeeper.apache.org/doc/current/zookeeperOver.html
By default, Zookeeper replicates all your data to every node and lets clients watch the data for changes. Changes are sent very quickly (within a bounded amount of time) to clients. You can also create "ephemeral nodes", which are deleted within a specified time if a client disconnects. ZooKeeper is highly optimized for reads, while writes are very slow (since they generally are sent to every client as soon as the write takes place). Finally, the maximum size of a "file" (znode) in Zookeeper is 1MB, but typically they'll be single strings.
Taken together, this means that ZooKeeper is not meant to store much data, and is definitely not a cache. Instead, it's for managing heartbeats/knowing what servers are online, storing/updating configuration, and possibly message passing (though if you have large numbers of messages or high throughput demands, something like RabbitMQ will be much better for that task).
Basically, ZooKeeper (and Curator, which is built on it) helps in handling the mechanics of clustering -- heartbeats, distributing updates/configuration, distributed locks, etc.
It's not really comparable to Redis, but for the specific questions...
It doesn't support any computation, and for most data sets it won't be able to store the data with any performance.
It's replicated to all nodes in the cluster (there's nothing like Redis clustering where the data can be distributed). All messages are processed atomically in full and are sequenced, so there are no real transactions. It can be USED to implement cluster-wide locks for your services (it's very good at that, in fact), and there are a lot of locking primitives on the znodes themselves to control which nodes access them.
Sure, but ZooKeeper fills a niche. It's a tool for making distributed applications play nice with multiple instances, not for storing/sharing large amounts of data. Compared to using an IMDG for this purpose, ZooKeeper will be faster, manages heartbeats and synchronization in a predictable way (with a lot of APIs to make this part easy), and has a "push" paradigm instead of "pull", so nodes are notified very quickly of changes.
The quotation from the linked question...
A canonical example of Zookeeper usage is distributed-memory computation
... is, IMO, a bit misleading. You would use it to orchestrate the computation, not provide the data. For example, let's say you had to process rows 1-100 of a table. You might put 10 ZK nodes up, with names like "1-10", "11-20", "21-30", etc. Client applications would be notified of this change automatically by ZK, and the first one would grab "1-10" and set an ephemeral node clients/192.168.77.66/processing/rows_1_10.
The next application would see this and go for the next group to process. The actual data to compute would be stored elsewhere (i.e. Redis, a SQL database, etc.). If a node failed partway through the computation, another node could see this (after 30-60 seconds) and pick up the job again.
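A minimal sketch of that "claim" step using Apache Curator (the ensemble address and paths simply follow the example above and are illustrative):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class ClaimWorkSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework zk = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        zk.start();

        // Claim the "1-10" batch by creating an ephemeral znode. If this client disconnects,
        // ZooKeeper removes the node and another worker can re-claim the batch.
        zk.create()
          .creatingParentsIfNeeded()
          .withMode(CreateMode.EPHEMERAL)
          .forPath("/clients/192.168.77.66/processing/rows_1_10");

        // ... fetch rows 1-10 from the real data store (Redis, a SQL database, ...) and process them ...

        zk.close();
    }
}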
I'd say the canonical example of ZooKeeper is leader election, though. Let's say you have 3 nodes -- one is master and the other 2 are slaves. If the master goes down, a slave node must become the new leader. This type of thing is perfect for ZK.
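A hedged sketch of leader election with Curator's LeaderLatch recipe (the latch path and participant id are made up):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LeaderElectionSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Each participant registers under the latch path; Curator uses ephemeral sequential
        // znodes, so leadership moves automatically when the current leader disconnects.
        try (LeaderLatch latch = new LeaderLatch(client, "/myapp/leader", "node-1")) {
            latch.start();
            latch.await();                // blocks until this process becomes the leader
            if (latch.hasLeadership()) {
                // ... do leader-only work here ...
            }
        }                                 // closing the latch gives up leadership
        client.close();
    }
}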
Consistency Guarantees
ZooKeeper is a high performance, scalable service. Both reads and write operations are designed to be fast, though reads are faster than writes. The reason for this is that in the case of reads, ZooKeeper can serve older data, which in turn is due to ZooKeeper's consistency guarantees:
Sequential Consistency
Updates from a client will be applied in the order that they were sent.
Atomicity
Updates either succeed or fail -- there are no partial results.
Single System Image
A client will see the same view of the service regardless of the server that it connects to.
Reliability
Once an update has been applied, it will persist from that time forward until a client overwrites the update. This guarantee has two corollaries:
If a client gets a successful return code, the update will have been applied. On some failures (communication errors, timeouts, etc) the client will not know if the update has applied or not. We take steps to minimize the failures, but the only guarantee is only present with successful return codes. (This is called the monotonicity condition in Paxos.)
Any updates that are seen by the client, through a read request or successful update, will never be rolled back when recovering from server failures.
Timeliness
The clients view of the system is guaranteed to be up-to-date within a certain time bound. (On the order of tens of seconds.) Either system changes will be seen by a client within this bound, or the client will detect a service outage.

Redis active-active replication

I am using Redis version 2.8.3. I want to build a Redis cluster, but in this cluster there should be multiple masters. This means I need multiple nodes that have write access and whose writes are applied to all other nodes.
I could build a cluster with a master and multiple slaves. I just configured the slaves' redis.conf files and added this:
slaveof myMasterIp myMasterPort
That's all. Then I tried to write something into the DB via the master. It was replicated to all slaves and I really liked it.
But when I tried to write via a slave, it told me that slaves have no right to write. After that I just set the slave's read-only setting in its redis.conf file to false. Hence, I could write something into the DB.
But I realized that it is not replicated to my master, so it is not replicated to any of the other slaves either.
This means I could not build an active-active cluster.
I tried to find out whether Redis has active-active cluster capability, but I could not find an exact answer about it.
Is it possible to build an active-active cluster with Redis?
If it is, how can I do it?
Thank you!
Redis v2.8.3 does not support multi-master setups. The real question, however, is why do you want to set one up? Put differently, what challenge/problem are you trying to solve?
It looks like the challenge you're trying to solve is how to reduce the network load (more on that below) by eliminating over-the-net reads. Since Redis isn't multi-master (yet), the only way to do it is by setting up each app server with a master and a slave (to the other master) - i.e. grand total of 4 Redis instances (and twice the RAM).
The simple scenario is when each app updates only a mutually-exclusive subset of the database's keys. In that scenario this kind of setup may actually be beneficial (at least in the short term). If, however, both apps can touch all keys or if even just one key is "shared" for writes between the apps, then you'll need to bake locking/conflict resolution/etc... logic into your apps to consolidate local master and slave differences (and that may be a bit of an overkill). In either case, however, you'll end up with too many (i.e. more than 1) Redises, which means more admin effort at the very least.
Also note that by colocating the app and database on the same server you're setting yourself up for near-certain scalability failure. What will happen when you need more compute resources for your apps or Redis? How will you add yet another app server to the mix?
Which brings me back to the actual problem you are trying to solve - network load. Why exactly is that an issue? Are your apps so throughput-heavy or is the network so thin that you are willing to go to such lengths? Or maybe latency is the issue that you want to resolve? Be that as it may, I recommend that you consider a time-proven design instead, namely separating Redis from the apps and putting it on its own resources. True, the network will hit you in the face and you'll have to work around/with it (which is what everybody else does). On the other hand, you'll have more flexibility and control over your much simpler setup, and that, in my book, is a huge gain.
Redis Enterprise has had this feature for quite a while, but if you are looking for an open-source solution, KeyDB is a fork with active-active support (called Active Replica).
Setting it up is just a little more work than standard replication:
Both servers must have "active-replica yes" in their respective configuration files
On server B execute the command "replicaof [A address] [A port]"
Server B will drop its database and load server A's dataset
On server A execute the command "replicaof [B address] [B port]"
Server A will drop its database and load server B's dataset (including the data it just transferred in the prior step)
Both servers will now propagate writes to each other. You can test this by writing to a key on Server A and ensuring it is visible on B and vice versa.
https://github.com/JohnSully/KeyDB/wiki/KeyDB-(Redis-Fork):-Active-Replica-Support
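For completeness, one way to script the verification from the last step (a hedged sketch using the Lettuce Java client; the server hostnames are assumptions):

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class ActiveReplicaCheck {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical addresses of the two KeyDB servers configured as active replicas.
        RedisClient a = RedisClient.create("redis://server-a:6379");
        RedisClient b = RedisClient.create("redis://server-b:6379");
        RedisCommands<String, String> onA = a.connect().sync();
        RedisCommands<String, String> onB = b.connect().sync();

        onA.set("written-on-a", "hello");   // write through server A
        onB.set("written-on-b", "world");   // write through server B
        Thread.sleep(500);                  // give replication a moment to propagate

        System.out.println("A sees written-on-b = " + onA.get("written-on-b")); // expect "world"
        System.out.println("B sees written-on-a = " + onB.get("written-on-a")); // expect "hello"

        a.shutdown();
        b.shutdown();
    }
}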