Ignite AffinityKeyMapped and AffinityKeyMapper

Using Ignite 2.6.0
What I would like to do: Use a class method to compute affinity key value for a cache. In other words for IgniteCache<Key, Value> I want to use Key::someMethod to compute the affinity key.
The default GridCacheDefaultAffinityKeyMapper class does not seem to support using class methods.
So I thought of using CacheConfiguration::setAffinityMapper(AffinityKeyMapper) with a custom class implementing AffinityKeyMapper. But AffinityKeyMapper is marked as deprecated.
If I am understanding things correctly, my two choices are
1. Compute the required affinity at object construction time and use AffinityKeyMapped
2. Ignore the deprecation warning and use CacheConfiguration::setAffinityMapper(AffinityKeyMapper)
Which of these is the right way, or is there a third way?

Ignite stores data in binary format and does not deserialize objects on the server side unless you explicitly ask for it in code (for example, if you run a compute job and read something from a cache). In fact, in the general case the key/value classes are not present on server nodes at all, so there is no way to invoke a method or use an AffinityKeyMapper. That's why it's deprecated.
I would recommend computing the affinity key value when you create the key object (i.e., go with option #1).
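For example, here is a minimal sketch of option #1, assuming a hypothetical OrderKey class whose affinity key is derived from the order ID in the constructor (the class, field names, and the derivation itself are placeholders):

    import org.apache.ignite.cache.affinity.AffinityKeyMapped;

    public class OrderKey {
        private final long orderId;

        // Computed once at construction time. Ignite reads this field from
        // the binary form of the key, so servers never need the class.
        @AffinityKeyMapped
        private final long customerId;

        public OrderKey(long orderId) {
            this.orderId = orderId;
            // Whatever Key::someMethod would have computed happens here instead.
            this.customerId = deriveCustomerId(orderId);
        }

        private static long deriveCustomerId(long orderId) {
            return orderId / 1000; // placeholder computation
        }
    }

Entries whose keys share the same customerId value are then colocated on the same node.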


Is there any way to customize Ignite's built-in TTL cleaning functions?

I see that Ignite currently supports a TTL feature to remove unused keys. Is there any way to customize this TTL feature?
In my case, I have BinaryObjects in an IgniteCache (key -> BinaryObject), and those BinaryObjects contain several values, one of which is a timestamp. Could I somehow customize Ignite's built-in TTL cleaning so that Ignite checks the timestamp value and decides whether to remove or keep a key?
Thank you
Yes and no. You can implement your own expiry policy if you like; you just need to create a class that implements ExpiryPolicy, and each entry can have a different policy.
However, you'll note that the API does not give access to the entry's value, so you can't have the policy automatically derived from a field such as your timestamp.
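A minimal sketch of such a policy (the ten-minute duration is an arbitrary placeholder):

    import java.util.concurrent.TimeUnit;

    import javax.cache.expiry.Duration;
    import javax.cache.expiry.ExpiryPolicy;

    // Entries live ten minutes from creation; each update resets the clock.
    public class TenMinutePolicy implements ExpiryPolicy {
        @Override
        public Duration getExpiryForCreation() {
            return new Duration(TimeUnit.MINUTES, 10);
        }

        @Override
        public Duration getExpiryForAccess() {
            return null; // leave the expiry unchanged on reads
        }

        @Override
        public Duration getExpiryForUpdate() {
            return new Duration(TimeUnit.MINUTES, 10);
        }
    }

You can then apply it per operation, e.g. cache.withExpiryPolicy(new TenMinutePolicy()).put(key, value). Since the policy never sees the value being stored, deriving the duration from the BinaryObject's timestamp field would have to happen on the caller's side before the put.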

Way to call the data source only once for multiple keys in Apache Geode

I have Apache Geode as an inline cache connected to Postgres as the data source. When getAll is invoked on a region, the CacheLoader is called sequentially for each key. Is there a way to get all the keys and call my data source once, e.g. with an IN query from the CacheLoader?
I don't think there's a way of accomplishing this out of the box using a CacheLoader since, as you already verified, the callback is invoked sequentially for every key not found within the Region. You might be able to pre-populate the Region with all those keys you know must be there, but keys not found while executing Region.getAll() will still be retrieved sequentially by invoking the configured CacheLoader.
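To illustrate why the round trips are per key: the CacheLoader callback only ever sees a single key, so there is nowhere to issue a multi-key IN query from. A minimal sketch (the table, column names, and JDBC URL are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    import org.apache.geode.cache.CacheLoader;
    import org.apache.geode.cache.CacheLoaderException;
    import org.apache.geode.cache.LoaderHelper;

    public class PostgresLoader implements CacheLoader<String, String> {
        @Override
        public String load(LoaderHelper<String, String> helper) throws CacheLoaderException {
            // Geode invokes load() once per missing key; helper.getKey()
            // exposes only that single key.
            String key = helper.getKey();
            try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb");
                 PreparedStatement ps = conn.prepareStatement("SELECT val FROM kv WHERE key = ?")) {
                ps.setString(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } catch (Exception e) {
                throw new CacheLoaderException(e);
            }
        }

        @Override
        public void close() {
            // nothing to release in this sketch
        }
    }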

Notifying an instance as down using a ServiceCache in Curator

The documentation for Curator (http://curator.apache.org/curator-x-discovery/index.html) says:
If a particular instance has an I/O error, etc. you should call ServiceProvider.noteError() passing in the instance.
I am using a ServiceCache to get my instances, rather than a ServiceProvider (see Using selection strategies with a cache in Curator).
Where can I find the noteError() method here? I can't find it on the cache object.
There is no noteError() on a ServiceCache. However, as @Randgalt notes (https://stackoverflow.com/a/57059811/2048051), the best approach is not to use a ServiceCache but rather a ServiceProvider, because that uses a cache in the background anyway, and it has the noteError() method available.
https://issues.apache.org/jira/browse/CURATOR-531 has been raised to make the documentation clearer.
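A minimal sketch of the ServiceProvider approach, assuming a started CuratorFramework client and a service registered under the placeholder name "my-service" at base path "/services":

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.x.discovery.ServiceDiscovery;
    import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
    import org.apache.curator.x.discovery.ServiceInstance;
    import org.apache.curator.x.discovery.ServiceProvider;

    public class ProviderExample {
        public static void callWithErrorReporting(CuratorFramework client) throws Exception {
            ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
                    .client(client)
                    .basePath("/services")
                    .build();
            discovery.start();

            // The provider keeps an internal cache of instances, so a
            // separate ServiceCache is unnecessary.
            ServiceProvider<Void> provider = discovery.serviceProviderBuilder()
                    .serviceName("my-service")
                    .build();
            provider.start();

            ServiceInstance<Void> instance = provider.getInstance();
            try {
                // ... perform the remote call against `instance` here ...
            } catch (Exception e) {
                provider.noteError(instance); // report the instance as down
                throw e;
            }
        }
    }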

How to process invokeAll EntryProcessor entries in a custom order?

For the function:
invokeAll()
It takes a Map/Set containing the entries to be processed. I want to process each entry in a custom order, i.e. in the same order as the keys.
The documentation says:
The order that the entries for the keys are processed is undefined. Implementations may choose to process the entries in any order, including concurrently. Furthermore there is no guarantee implementations will use the same EntryProcessor instance to process each entry, as the case may be in a non-local cache topology.
For this line:
Implementations may choose to process the entries in any order, including concurrently
I don't understand how this works. Is there an example?
If I use a TreeMap/TreeSet to keep the keys in order, will the entries be handled in the same order as the keys in the TreeMap/TreeSet?
By the way, since invoke takes an internal lock, will invokeAll also hold the lock for all the keys in the map/set until the EntryProcessor has finished?
The documentation you're referring to is, in fact, inherited from javax.cache.Cache::invokeAll. "Implementation" here means not an EntryProcessor but an implementation of JSR 107 (a.k.a. JCache, the javax.cache package); Ignite implements it in IgniteCache.
What this documentation means is that the specification of the javax.cache.Cache interface allows its implementations to invoke EntryProcessors in any order. Ignite chooses not to give any additional guarantees, and there is no way to influence the order here.
Also, remember that Ignite is distributed, so the processing of entries in invokeAll is inherently concurrent. If you need a strict order, it's probably better to iterate over the keys and call invoke on each one.
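A minimal sketch of that approach (the key/value types and the processor are illustrative placeholders): iterating a sorted key set and calling invoke per key gives a well-defined order, at the cost of one round trip per key.

    import java.util.Map;
    import java.util.TreeMap;
    import java.util.TreeSet;

    import javax.cache.processor.EntryProcessor;

    import org.apache.ignite.IgniteCache;

    public class OrderedInvoke {
        public static Map<Integer, Object> invokeInKeyOrder(
                IgniteCache<Integer, String> cache,
                TreeSet<Integer> keys,
                EntryProcessor<Integer, String, Object> processor) {
            Map<Integer, Object> results = new TreeMap<>();
            // The TreeSet iterates in ascending key order, and each invoke()
            // completes before the next begins, so processing order is defined.
            for (Integer key : keys) {
                results.put(key, cache.invoke(key, processor));
            }
            return results;
        }
    }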

Specifying a GeoTools read-only DataSource

Using a call such as:
DataStore dataStore = DataStoreFinder.getDataStore(map);
Is there an entry I can put in the map to make the DataStore read-only? The only thing I have seen is the URL that specifies the name of the data source.
I imagine that the reason a map is used to pass in arguments is that various data sources require different parameters. I am dealing with shapefiles right now and have not seen any way to specify it.
Thanks.
A DataStore doesn't have a notion of being read-only or read-write. On the other hand, the classes that access a feature type do: there is a difference between a FeatureSource and a FeatureStore. The former does not have any write/update methods. A high-level description is available in the GeoTools documentation.
By default, dataStore.getFeatureSource returns its result typed as a FeatureSource (read-only). If you want write access, you have to check whether the returned FeatureSource is actually a FeatureStore and cast it. Note that not all DataStore implementations provide write access.
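A minimal sketch of that check for a shapefile (the file path is a placeholder):

    import java.io.File;
    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    import org.geotools.data.DataStore;
    import org.geotools.data.DataStoreFinder;
    import org.geotools.data.simple.SimpleFeatureSource;
    import org.geotools.data.simple.SimpleFeatureStore;

    public class ReadOnlyCheck {
        public static void main(String[] args) throws Exception {
            Map<String, Serializable> params = new HashMap<>();
            params.put("url", new File("data/roads.shp").toURI().toURL());

            DataStore dataStore = DataStoreFinder.getDataStore(params);
            String typeName = dataStore.getTypeNames()[0];

            // Declared as a read-only FeatureSource...
            SimpleFeatureSource source = dataStore.getFeatureSource(typeName);

            // ...but the runtime type reveals whether writes are supported.
            if (source instanceof SimpleFeatureStore) {
                SimpleFeatureStore store = (SimpleFeatureStore) source;
                // write access is available: store.addFeatures(...), etc.
            } else {
                // this DataStore offers only read access for this type
            }
        }
    }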