is there any way to customize ignite's built-in TTL cleaning functions?

I see that Ignite currently supports a TTL feature to remove unused keys. Is there any way to customize this TTL feature?
In my case, I have BinaryObjects in an IgniteCache (key -> BinaryObject), and those BinaryObjects contain several values, one of which is a timestamp. Could I customize Ignite's built-in TTL cleaning functions somehow so that Ignite checks the timestamp value and decides whether to remove or keep a key?
Thank you

Yes and no. You can implement your own expiry policy if you like. You just need to create a class that implements ExpiryPolicy. And each row can have a different policy.
However, you'll note that the API does not give access to the record, so you can't have it automatically set the policy based on a column.
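For illustration, here is a minimal sketch of such a policy; the class name and the fixed one-hour duration are assumptions for the example, while ExpiryPolicy and Duration are the standard javax.cache.expiry types that Ignite uses:

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;

// Entries expire one hour after creation; returning null from the
// access/update methods leaves the current expiry unchanged.
public class OneHourExpiryPolicy implements ExpiryPolicy {
    @Override public Duration getExpiryForCreation() {
        return new Duration(TimeUnit.HOURS, 1);
    }
    @Override public Duration getExpiryForAccess() {
        return null;
    }
    @Override public Duration getExpiryForUpdate() {
        return null;
    }
}

Since the policy is chosen by the caller, e.g. cache.withExpiryPolicy(new OneHourExpiryPolicy()).put(key, value), one workaround is to read the timestamp out of the BinaryObject yourself before the put and pick a duration from it; what you can't do is have Ignite derive the policy from the stored record automatically.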

Implementing a RMW operation in Redis

I would like to maintain comma-separated lists of entries of the form <ip>:<app>, indexed by an account ID. There would be one such list per user, indexed by their account ID, with the number of users in the millions. This is mainly to track which server in a cluster a user of a certain application is connected to.
Since all servers are written in Java, with Redisson I'm currently doing:
RSet<String> set = client.getSet(accountKey);
and then I can modify the set using some typical Java container APIs supported by Redisson. I basically need three types of updates to these comma-separated lists:
Client connects to a new application = append
Client reconnects with existing application to new endpoint = modify
Client disconnects = remove
A new connection would require a change to a field like:
1.1.1.1:foo,2.2.2.2:bar -> 1.1.1.1:foo,2.2.2.2:bar,3.3.3.3:baz
A reconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 3.3.3.3:foo,2.2.2.2:bar
A disconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 2.2.2.2:bar
As mentioned the fields would be keyed by the account ID of the user.
My question is the following: without using Redisson, how can I implement this "directly" on top of Redis commands? The goal is to allow rewriting certain components in a language other than Java. The cluster handles close to a million requests per second.
I'm actually quite curious how Redisson implements an RSet under the hood, but I haven't had time to dig into it. I guess one option would be to use Lua, but I've never used it with Redis. Any ideas on how to efficiently implement these operations on top of Redis in a manner that is easily supported by multiple languages, i.e. not relying on a specific library?
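For reference, here is roughly how the three updates look with Redisson's RSet today; the entries are the illustrative ones from above, and note that "modify" has to be expressed as a remove plus an add, since set members can't be edited in place:

import org.redisson.api.RSet;
import org.redisson.api.RedissonClient;

public class RedissonTracker {
    private final RedissonClient client;

    public RedissonTracker(RedissonClient client) {
        this.client = client;
    }

    public void exampleUpdates(String accountKey) {
        RSet<String> set = client.getSet(accountKey);
        set.add("3.3.3.3:baz");    // connect: append a new entry
        set.remove("1.1.1.1:foo"); // reconnect: drop the old endpoint...
        set.add("3.3.3.3:foo");    // ...and add the new one
        set.remove("2.2.2.2:bar"); // disconnect: remove the entry
    }
}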
Having actually thought about the problem properly, it can be solved directly with a Redis hash (HSET), where <app> is the field name, the value is the IP, and the key is the user's account ID.
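A hedged sketch of that mapping follows; it happens to use Jedis, but the point is that any client in any language only needs HSET, HDEL and HGET, and the acct: key prefix is just an illustrative convention:

import redis.clients.jedis.Jedis;

public class ConnectionTracker {
    private final Jedis jedis = new Jedis("localhost", 6379);

    // Connect or reconnect: HSET acct:<id> <app> <ip>.
    // HSET overwrites an existing field, so append and modify
    // collapse into the same single command.
    public void connect(String accountId, String app, String ip) {
        jedis.hset("acct:" + accountId, app, ip);
    }

    // Disconnect: HDEL acct:<id> <app>.
    public void disconnect(String accountId, String app) {
        jedis.hdel("acct:" + accountId, app);
    }

    // Lookup: HGET acct:<id> <app> -> the endpoint for that app.
    public String endpoint(String accountId, String app) {
        return jedis.hget("acct:" + accountId, app);
    }
}

Each operation is a single O(1) hash command, which matters at close to a million requests per second, and none of it depends on a library-specific data structure.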

Ignite AffinityKeyMapped and AffinityKeyMapper

Using Ignite 2.6.0
What I would like to do: Use a class method to compute affinity key value for a cache. In other words for IgniteCache<Key, Value> I want to use Key::someMethod to compute the affinity key.
The default GridCacheDefaultAffinityKeyMapper class does not seem to support using class methods.
So I thought of using CacheConfiguration::setAffinityMapper(AffinityKeyMapper) with a custom class implementing AffinityKeyMapper. But AffinityKeyMapper is marked as deprecated.
If I am understanding things correctly, my two choices are
1. Compute the required affinity at object construction time and use AffinityKeyMapped
2. Ignore the deprecation warning and use CacheConfiguration::setAffinityMapper(AffinityKeyMapper)
Which of these is the right way, or is there a third way?
Ignite stores data in binary format and does not deserialize objects on the server side unless you explicitly ask for that in your code (for example, if you run a compute job and read something from a cache). As a matter of fact, in the general case there are no key/value classes at all on the server nodes, so there is no way to invoke a method on the key, and AffinityKeyMapper cannot work. That's why it's deprecated.
I would recommend predefining the affinity key value when you create the key object (i.e. going with option #1).
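A minimal sketch of option #1; the Key class and computeAffinity are illustrative stand-ins for your key type and for whatever Key::someMethod would have computed:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class Key {
    private final long id;

    // Computed once at construction time and stored as a plain field;
    // Ignite routes by this field instead of hashing the whole key.
    @AffinityKeyMapped
    private final long affinityId;

    public Key(long id) {
        this.id = id;
        this.affinityId = computeAffinity(id);
    }

    // Illustrative placeholder for what someMethod() would return.
    private static long computeAffinity(long id) {
        return id % 1024;
    }
}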

ActiveMQ MessageGroupHashBucket - what is cache property needed for?

I'm trying to find the best strategy to deal with ActiveMQ's Message Groups support.
ActiveMQ has several strategies (MessageGroupMap implementations).
The one that is confusing me a little is MessageGroupHashBucket.
Specifically, after looking at the sources, I don't understand why the cache property is needed there. When assigning a consumer ID to a message group, or retrieving the consumer ID for a message group, the array of buckets is used.
It would be great if someone can suggest why.
Thanks in advance,
MessageGroupHashBucket implements the MessageGroupMap interface method getGroups() by returning the cache property as a map of all group names and their associated consumer IDs.
Adding to piola's answer, it looks like the cache property is used to configure the number of group names kept inside a bucket. This is a very efficient way to handle a large number of groups: going by this logic, a configuration of 1024 buckets with a cache size of 64 can handle 65,536 groups.
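A simplified sketch of the bucketing idea itself, assuming nothing about ActiveMQ's internals beyond what is described above (the class and its methods are made up for illustration):

import java.util.HashMap;
import java.util.Map;

public class SimpleGroupHashBucket {
    // Fixed array of buckets; each bucket maps group name -> consumer ID.
    private final Map<String, String>[] buckets;

    @SuppressWarnings("unchecked")
    public SimpleGroupHashBucket(int bucketCount) {
        buckets = new Map[bucketCount];
        for (int i = 0; i < bucketCount; i++) {
            buckets[i] = new HashMap<>();
        }
    }

    // Hash the group name to a non-negative bucket index.
    private Map<String, String> bucketFor(String group) {
        return buckets[(group.hashCode() & 0x7fffffff) % buckets.length];
    }

    public void put(String group, String consumerId) {
        bucketFor(group).put(group, consumerId);
    }

    public String get(String group) {
        return bucketFor(group).get(group);
    }

    // An aggregate view over all buckets, which is the role the cache
    // property plays for getGroups() in the real implementation.
    public Map<String, String> getGroups() {
        Map<String, String> all = new HashMap<>();
        for (Map<String, String> bucket : buckets) {
            all.putAll(bucket);
        }
        return all;
    }
}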

Default spring.cloud.stream.rabbit.* properties to apply to multiple channels?

With Spring Cloud Stream, you can avoid redundant properties for each individual channel by specifying "default" properties.
For example, if I have 2 channels bound to the same destination/exchange, I can do:
spring.cloud.stream.default.destination=myExchange
spring.cloud.stream.bindings.myChannel1.group=queue1
spring.cloud.stream.bindings.myChannel2.group=queue2
And queue1 and queue2 will both be bound to myExchange.
That works as documented, and I do it for some properties.
But... I'd like to do the same for RabbitMQ binding properties.
For example, if I want DLQ for all of my consumers/queues, do something like:
spring.cloud.stream.rabbit.default.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.default.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.default.consumer.dlq-dead-letter-exchange=
Otherwise, I have to specify those same 3 lines for every channel.
Is there any way to do this? I've tried several different permutations to no avail.
BTW, I'm on version 1.2.1.RELEASE of spring-cloud-starter-stream-rabbit.
Thanks.
It is supported. Please see the binding-properties section of the user guide: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#binding-properties
To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of spring.cloud.stream.default.<property>=<value>
According to Spring Cloud Stream documentation, it is possible since version 2.1.0.RELEASE.
See 9.2 Binding Properties.
When it comes to avoiding repetition for extended binding properties, this format should be used:
spring.cloud.stream.<binder-type>.default.<producer|consumer>.<property>=<value>
Unfortunately, so far I couldn't make it work. Did anyone get it working?
It is not supported yet.
See 3.2. RabbitMQ Consumer Properties
The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer.
including ttl/dlq*
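In other words, on versions where the extended default isn't honored, each channel presumably needs its own copy of the properties from the question, along these lines:

spring.cloud.stream.rabbit.bindings.myChannel1.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.bindings.myChannel1.consumer.dlq-dead-letter-exchange=
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.bindings.myChannel2.consumer.dlq-dead-letter-exchange=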

ttl of a list disappears when the list becomes empty in redis

In Redis, I inserted an element into a list using LPUSH and set an expiry on the key. During program execution, more elements are pushed into and popped out of the list. But when the list becomes empty, the expiry that was set on the list is lost.
Is there any way to retain the old list even if it is empty?
As a hack, I put a dummy element in Redis to preserve its TTL, but that is a bad solution.
Please help.
No, empty lists are removed. See the docs, where it says: the result will be an empty list (which causes key to be removed)
As an alternative, you can use a separate simple key for keeping the expiration. You will have to check on every push and pop whether that key has expired, and to do this in an atomic way you can use a Lua script. I think this separation is better than a dummy object, which can be confused with a real value, and your whole logic would live in the Lua script rather than in your application.
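A minimal sketch of that idea, assuming a Jedis client and an illustrative ":ttl" suffix convention for the marker key (neither is prescribed above):

import redis.clients.jedis.Jedis;

public class ExpiringList {
    // Pop only while the marker key is still alive; if the marker has
    // expired, delete the list itself and return nil. The whole check
    // runs atomically because Redis executes the script as one unit.
    private static final String POP_IF_ALIVE =
        "if redis.call('EXISTS', KEYS[2]) == 0 then " +
        "  redis.call('DEL', KEYS[1]) " +
        "  return nil " +
        "end " +
        "return redis.call('LPOP', KEYS[1])";

    private final Jedis jedis = new Jedis("localhost", 6379);

    public void push(String list, String value, int ttlSeconds) {
        jedis.lpush(list, value);
        // The marker key carries the "real" expiration for the list.
        jedis.setex(list + ":ttl", ttlSeconds, "1");
    }

    public String pop(String list) {
        Object result = jedis.eval(POP_IF_ALIVE, 2, list, list + ":ttl");
        return result == null ? null : (String) result;
    }
}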