Infinispan - locking in distributed cache with Hot Rod, is it possible?

I have to use a distributed cache and I would like to use Infinispan 5.3 for that.
I examined the different connection modes and picked Hot Rod for the client-server communication. I also need to lock a specific key in the cache and unlock it later after processing (the locking and unlocking happen in different classes in my application...).
I have read many documents, articles and forum entries on the issue, but I haven't found a solution so far. If I interpreted what I read correctly, it is not possible to lock a key manually over Hot Rod. I tried to handle the transactions manually, but I am not sure how to do that. Perhaps it is simply not possible in Infinispan 5.3...?
Or can you suggest a different connection mode (instead of Hot Rod) that provides client-server communication and solves the locking problem?
Thanks,
V.

Remote transactions (and locking) via HotRod are not supported in Infinispan 5.3.
See ISPN-375 and ISPN-848.
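For reference, explicit key locking does exist in Infinispan's embedded (library) mode, where a transactional cache configured with pessimistic locking can lock a key for the duration of a transaction; it just isn't reachable over Hot Rod. A minimal embedded-mode sketch, assuming such a cache configuration (the config file and cache name below are illustrative):

```java
// Embedded (library) mode only: explicit locking needs a transactional cache
// with pessimistic locking and is NOT available through the Hot Rod client.
import javax.transaction.TransactionManager;
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class ExplicitLockSketch {
    public static void main(String[] args) throws Exception {
        // "infinispan.xml" and "pessimisticCache" are illustrative names
        DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
        Cache<String, String> cache = cacheManager.getCache("pessimisticCache");
        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

        tm.begin();
        try {
            // The lock is held until the surrounding transaction commits or rolls back
            cache.getAdvancedCache().lock("someKey");
            cache.put("someKey", "newValue");
            tm.commit();
        } catch (Exception e) {
            tm.rollback();
            throw e;
        } finally {
            cacheManager.stop();
        }
    }
}
```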

Related

Synchronize multiple instances of Spring Cache with a Redis lock

I'm building a Spring Boot application that uses Spring Cache with a Redis backing store and needs to synchronize the updates made to the cache.
The caching is not done on the fly, but by a scheduled process that updates the cache periodically.
The algorithm I came up with is:
periodically the instances will check if the Redis cache is older than some predetermined time
if that's the case, the instance will try to acquire a lock on some Redis key
if the instance successfully locks the key, it will then proceed with the update
if some other instance already locked the key, move on
all instances can still read the cache
Everything is more or less already built, all I need is to implement the locking/releasing mechanism.
Spring Cache is using Lettuce to interact with Redis; what is the best way to get a connection to Redis and manage the locking mechanism?
As you may already be aware, Spring's Cache Abstraction provides simple coordination amongst multiple Threads in a single Spring [Boot] application process using the sync attribute on the @Cacheable annotation (see ref doc).
NOTE: Despite the comment ("... use the sync attribute to instruct the underlying cache provider to lock the cache entry while the value is being computed. As a result, only one thread is busy computing the value, while the others are blocked until the entry is updated in the cache.") in the documentation, the locking mechanics are handled by the core framework itself, and in most cases, not the provider. Anyway...
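As a quick illustration of that per-process behavior, a minimal sketch (the service, cache name, and lookup are made up):

```java
// Within a single JVM, sync = true makes concurrent callers for the same key
// block while one thread computes and caches the value.
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class PriceService {

    @Cacheable(cacheNames = "prices", sync = true)
    public String lookupPrice(String productId) {
        return expensiveLookup(productId);
    }

    private String expensiveLookup(String productId) {
        // stand-in for a slow computation or remote call
        return "price-for-" + productId;
    }
}
```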
However, this "coordination" is only per-process and will not work for multiple Spring [Boot] application instances, or (OS) JVM processes. In this case, you need some form of distributed locking across your multiple Spring [Boot] application instances to coordinate access to shared cache entries stored in the single Redis server (cluster) shared by your Spring [Boot] application instances.
I am no Redis expert (I am still learning), but I am familiar with similar NoSQL stores (Apache Geode/VMware GemFire, Hazelcast, etc) and distributed locking mechanisms. I see that distributed locking is possible to achieve with Redis as well. In a quick search, I found "Distributed Locking" in Redis, and specifically, "Building a lock in Redis". This is probably the best way to go.
In addition, if you want to make this distributed locking automatically/transparently available through Spring's Cache Abstraction, then you could possibly create a custom AOP Aspect and weave this Aspect together with the framework-provided Caching Aspect (Interceptor), being conscious of ordering, as one idea.
Alternatively, you could implement wrapper implementations for the Spring Cache and CacheManager SPI interfaces that implement distributed locking on top of the core Redis Cache and CacheManager provider implementations provided by Spring Boot/Spring Data Redis.
Of course, there are multiple ways to go about this. Just tossing out more ideas, but have a look at the distributed locking information in the book.
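As a concrete starting point, here is a minimal sketch of the SET NX PX pattern described in the "Building a lock in Redis" material, using Lettuce directly; the class, key names, and TTL are illustrative, and this is not a hardened implementation (no retries, no lock extension):

```java
// Acquire a lock key with SET NX PX and release it only if we still own it.
import java.util.UUID;
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.SetArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class RedisCacheRefreshLock {

    // Delete the lock key only when it still holds our token (safe release).
    private static final String RELEASE_SCRIPT =
        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
        "  return redis.call('del', KEYS[1]) " +
        "else return 0 end";

    private final RedisCommands<String, String> commands;

    public RedisCacheRefreshLock(RedisClient client) {
        this.commands = client.connect().sync();
    }

    /** Try to acquire the lock; returns a token to release with, or null if another instance holds it. */
    public String tryLock(String lockKey, long ttlMillis) {
        String token = UUID.randomUUID().toString();
        String result = commands.set(lockKey, token, SetArgs.Builder.nx().px(ttlMillis));
        return "OK".equals(result) ? token : null;
    }

    /** Release the lock only if the token still matches (i.e. we still own it). */
    public void unlock(String lockKey, String token) {
        commands.eval(RELEASE_SCRIPT, ScriptOutputType.INTEGER, new String[]{lockKey}, token);
    }
}
```

The scheduled refresh job in each instance would then call tryLock before rebuilding the cache and unlock afterwards, simply skipping the rebuild when tryLock returns null.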

How to achieve multi-tenancy in Redis?

Since I am fairly new to Redis, I am trying to explore options and see how I can achieve multi-tenancy with Redis.
I read some documentation on the Redis Labs official page and it looks like Redis cluster mode supports multi-tenancy out of the box with Redis Enterprise.
I am wondering if such a multi-tenancy solution is available in Sentinel mode as well?
I may be completely confused about the multi-tenancy that Redis Enterprise provides. Maybe it works in Sentinel mode too, but nothing seems very clear to me.
Can someone shed some light on multi-tenancy in Redis and which mode supports it?
If you are going to use Redis Cluster, then only one DB is supported.
Redis Cluster does not support multiple databases like the stand alone version of Redis. There is just database 0 and the SELECT command is not allowed.
If you are not going to use cluster mode, then you may take a look at the message posted by the creator of Redis about multiple databases (years ago):
I understand how this can be useful, but unfortunately I consider Redis multiple database errors my worst decision in Redis design at all... without any kind of real gain, it makes the internals a lot more complex. The reality is that databases don't scale well for a number of reason, like active expire of keys and VM. If the DB selection can be performed with a string I can see this feature being used as a scalable O(1) dictionary layer, that instead it is not. With DB numbers, with a default of a few DBs, we are communication better what this feature is and how can be used I think. I hope that at some point we can drop the multiple DBs support at all, but I think it is probably too late as there is a number of people relying on this feature for their work.
Salvatore's message
Redis cluster documentation
What I may suggest is prefixing. We are using this method in a SaaS application, and all the different data types are prefixed with the related customer name. We handle some of the operations at the application layer.
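A minimal sketch of that prefixing approach, assuming one Redis database shared by all tenants (the tenant id and key layout are illustrative):

```java
// Namespace every key with the tenant/customer identifier so multiple
// tenants can safely share a single Redis database or cluster.
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class TenantPrefixedCache {

    private final RedisCommands<String, String> commands;
    private final String tenantId;

    public TenantPrefixedCache(RedisClient client, String tenantId) {
        this.commands = client.connect().sync();
        this.tenantId = tenantId;
    }

    private String key(String rawKey) {
        return tenantId + ":" + rawKey;   // e.g. "acme:session:42"
    }

    public void put(String rawKey, String value) {
        commands.set(key(rawKey), value);
    }

    public String get(String rawKey) {
        return commands.get(key(rawKey));
    }
}
```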
If you want to go with a single instance / multiple databases, then you need to manage them in your codebase using the SELECT command. There may be some libraries to manage them. One critical thing to note is that:
All databases are still persisted in the same RedisDB / Append Only file.
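For completeness, a minimal sketch of the single-instance / multiple-database approach using SELECT (the database-number-per-tenant mapping is illustrative, and the persistence caveat above still applies):

```java
// One logical database per tenant on a single standalone Redis instance;
// not usable in cluster mode, where only database 0 exists.
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

public class SelectPerTenant {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        commands.select(1);                   // tenant A lives in database 1
        commands.set("config:theme", "dark");

        commands.select(2);                   // tenant B lives in database 2
        commands.set("config:theme", "light");

        connection.close();
        client.shutdown();
    }
}
```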

Highly available and load balanced ActiveMQ cluster

Please be aware that I am a relative newbie to ActiveMQ.
I am currently working with a small cluster of ActiveMQ (version 5.15.x) nodes (< 5). I recently experimented with setting up the configuration to use "Shared File System Master Slave" with KahaDB, in order to provide high availability to the cluster.
Having done so and seeing how it operates, I'm now considering whether this configuration provides the level of throughput required for both consumers/producers, as only one broker's ports are available at one time.
My question is basically two part. First, does it make sense to configure the cluster as highly available AND load balanced (through Network of Brokers)? Second, is the above even technically viable, or do I need to review my design consideration to favor one aspect over the other?
I had some discussions with the ActiveMQ maintainers in IRC on this topic a few months back.
It seems that they would recommend using ActiveMQ Artemis instead of ActiveMQ 5.
Artemis has a HA solution:
https://activemq.apache.org/artemis/docs/latest/clusters.html
https://activemq.apache.org/artemis/docs/latest/ha.html
The idea is to use Data Replication to allow for failover, etc:
When using replication, the live and the backup servers do not share the same data directories, all data synchronization is done over the network. Therefore all (persistent) data received by the live server will be duplicated to the backup.
And, I think you want to have at least 3 nodes (or some odd number) to avoid issues with split brain during network partitions.
It seems like Artemis can mostly be used as a drop-in replacement for ActiveMQ; it can still speak the OpenWire protocol, etc.
However, I haven't actually tried this yet, so YMMV.

Does StackExchange.Redis support MONITOR?

I recently migrated from Booksleeve to StackExchange.Redis.
For monitoring purposes, I need to use the MONITOR command.
In the wiki I read
From the IServer instance, the Server commands are available
But I can't find any method concerning MONITOR in IServer; after a quick search in the repository, it seems this command is not mapped even though RedisCommand.MONITOR is defined.
So, is the MONITOR command supported by StackExchange.Redis?
Support for monitor is not provided, for multiple reasons:
invoking monitor is a path of no return; a monitor connection can never be anything except a monitor connection - it certainly doesn't play nicely with the multiplexer (although I guess a separate connection could be used)
monitor is not something that is generally encouraged - it has impact; and when you do use it, it would be a good idea to run it as close to the server as possible (typically in a terminal to the server itself)
it should typically be used for short durations
But more importantly, perhaps, I simply haven't seen a suitable use-case or had a request for it. If there is some scenario where monitor makes sense, I'm happy to consider adding some kind of support. What is it that you want to do with it here?
Note that caveat on the monitor page you link to:
In this particular case, running a single MONITOR client can reduce the throughput by more than 50%. Running more MONITOR clients will reduce throughput even more.

How to set send-buffer-size and receive-buffer-size in Infinispan Hot Rod client and server

I was planning to use an out-of-process distributed caching solution and I am trying the Infinispan Hot Rod protocol for this purpose. It performs quite well compared to other caching solutions, but I feel it is spending more time on network communication than expected.
We have a 1000 Mbps Ethernet network and the round-trip time between client and server is around 200 ms, but the Infinispan Hot Rod protocol takes around 7 seconds to transfer an object of 30 MB from server to client. I feel that I need to do some TCP tuning to reduce this time; can someone please suggest how I can tune TCP to get the best performance? While googling I found that send-buffer-size and receive-buffer-size can help in this case, but I don't know how and where to set these properties. Can someone please help me in this regard?
Any help in this regard is highly appreciated.
Thanks,
Abhinav
By default, the Hot Rod client and server enable TCP no-delay, which is good for small objects. For bigger objects, such as in your case, you might want to disable it so that the client/server can buffer and then send instead. For the client, when you construct the RemoteCacheManager, try passing infinispan.client.hotrod.tcp_no_delay=false; the server needs a similar configuration option too. How the server is configured depends on your Infinispan version. If using the latest Infinispan 6.0.0 version, you'll have to go to the standalone.xml file and change the endpoint subsystem configuration so that the hotrod-connector has the tcp-nodelay attribute set to false. Send/receive buffers only apply when TCP no-delay is disabled. These are also configurable via similar methods, but I'd only do that if you're not happy with the result once TCP no-delay has been disabled.
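For the client side, a minimal sketch of passing that property when constructing the RemoteCacheManager (the server address, cache, and payload below are illustrative):

```java
// Disable TCP no-delay on the Hot Rod client so large values are buffered
// before being sent, as suggested above.
import java.util.Properties;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodClientTuning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("infinispan.client.hotrod.server_list", "cacheserver:11222");
        props.put("infinispan.client.hotrod.tcp_no_delay", "false");

        RemoteCacheManager cacheManager = new RemoteCacheManager(props);
        RemoteCache<String, byte[]> cache = cacheManager.getCache();

        byte[] largeValue = new byte[30 * 1024 * 1024];   // ~30 MB payload
        cache.put("large-object", largeValue);

        cacheManager.stop();
    }
}
```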