Geode/GemFire Distributed Locking in Server-Client Configuration

I am following this doc:
http://gemfire.docs.pivotal.io/docs-gemfire/latest/developing/distributed_regions/locking_in_global_regions.html
to create a region with Global Scope to use distributed locking.
Cache.xml:
<client-cache>
  <pool>…definition…</pool>
  …
  <!-- region-attributes for lock region -->
  <region-attributes id="GZ_GLOBAL_REGION_LOCK_ATTRIBUTES" scope="global" pool-name="Zero"/>
  …
</client-cache>
Code run after the GemFireCache has been created from gemfire.properties and cache.xml:
private Region<String, Object> getOrCreateLockRegion(GemFireCache gemfireCache) {
    Region<String, Object> region = gemfireCache.getRegion(lockRegionName);
    if (region == null) {
        if (!isUsingClientCache) {
            region = createRegionFactory((Cache) gemfireCache, String.class, Object.class, lockRegionAttributesID)
                    .create(lockRegionName);
        } else {
            region = createClientRegionFactory((ClientCache) gemfireCache, String.class, Object.class, lockRegionAttributesID)
                    .create(lockRegionName);
        }
    }
    return region;
}

protected <K, V> RegionFactory<K, V> createRegionFactory(Cache gemfireCache, Class<K> keyClass, Class<V> valueClass, String regionAttributeRefID) {
    return gemfireCache
            .<K, V>createRegionFactory(regionAttributeRefID)
            .setKeyConstraint(keyClass)
            .setValueConstraint(valueClass);
}

protected <K, V> ClientRegionFactory<K, V> createClientRegionFactory(ClientCache gemfireCache, Class<K> keyClass, Class<V> valueClass, String regionAttributeRefID) {
    return gemfireCache
            .<K, V>createClientRegionFactory(regionAttributeRefID)
            .setKeyConstraint(keyClass)
            .setValueConstraint(valueClass);
}
I supposed this would give me a region with Scope.GLOBAL, so that I could call region.getDistributedLock("entrykey") and then use the lock to coordinate between instances.
However, when I called getDistributedLock, I got an IllegalStateException: only supported for GLOBAL scope, not LOCAL.
I then found out that the constructor of ClientRegionFactoryImpl forces the scope to LOCAL no matter what is configured in the region-attributes, and there is no API to override it.
This line: https://github.com/apache/incubator-geode/blob/develop/geode-core/src/main/java/org/apache/geode/cache/client/internal/ClientRegionFactoryImpl.java#L85
So the question is: am I supposed to be able to use a distributed lock from a client when using a client-server DS configuration? If not, what should I do to let clients lock against each other to synchronize when necessary?

The DistributedLock and RegionDistributedLock APIs on the Region class are only available within the Server. In order to use these locks from a Client, you would have to write Functions that you deploy to the Servers. The Client would then tell the Server to execute the Function, where it can manipulate the Region as well as the DistributedLock and RegionDistributedLock APIs. More info on the FunctionService can be found at:
http://geode.apache.org/docs/guide/developing/function_exec/chapter_overview.html.
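For illustration, here is a rough Kotlin sketch of that Function-based approach, assuming Geode's org.apache.geode API; the function id LockingFunction and the server-side region name "lockRegion" are hypothetical placeholders:

import org.apache.geode.cache.CacheFactory
import org.apache.geode.cache.client.ClientCache
import org.apache.geode.cache.execute.Function
import org.apache.geode.cache.execute.FunctionContext
import org.apache.geode.cache.execute.FunctionService

// Server side: a deployed Function does the locked work next to the data.
class LockingFunction : Function<Any> {
    override fun execute(context: FunctionContext<Any>) {
        val key = context.arguments as String
        // the region is defined on the servers with scope="global"
        val region = CacheFactory.getAnyInstance().getRegion<String, Any>("lockRegion")
        val lock = region.getDistributedLock(key)
        lock.lock()
        try {
            // critical section: work that must be serialized across clients
        } finally {
            lock.unlock()
        }
        context.resultSender.lastResult(true)
    }

    override fun getId() = "LockingFunction"
}

// Client side: instead of locking locally, ask a server to run the function.
fun runLocked(clientCache: ClientCache, key: String) {
    FunctionService.onServer(clientCache)
        .setArguments(key)
        .execute("LockingFunction")
        .result // blocks until the server has finished the locked work
}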

Related

Spring Mono<User> as constructor param - how to "cache" object

I'm drawing a blank on how to do this in Project Reactor with Spring Boot:
class BakerUserDetails(val bakerUser: Mono<BakerUser>) : UserDetails {
    override fun getPassword(): String {
        TODO("Not yet implemented")
        // return ???.password
    }

    override fun getUsername(): String {
        TODO("Not yet implemented")
        // return ???.username
    }
}
How do I make this work? Do I just put bakerUser.block().password and bakerUser.block().username and all, or is there a better way to implement these methods?
Currently, I'm doing something like this but it seems strange:
private var _user: BakerUser? = null
private var user: BakerUser? = null
    get() {
        if (_user == null) {
            _user = bakerUser.block()
        }
        return _user
    }

override fun getAuthorities(): MutableCollection<out GrantedAuthority> {
    return mutableSetOf(SimpleGrantedAuthority("USER"))
}

override fun getPassword(): String {
    return user!!.password!!
}
I'm not well versed in Kotlin, but I can tell you that you should not pass a Mono into the UserDetails object.
A Mono<T> is sort of like a future/promise, which means that there is nothing in it yet. So if you want something out of it, you either block, which means we wait until there is something in it, or we subscribe, which basically means we wait asynchronously until there is something in it. That can be bad. Think of it like starting a job on the side: what happens if you start a job and then quit the program? The job never gets executed.
Or you do something threaded and the program returns/exits: the main thread dies, all threads die, and nothing happened.
In the reactive world we usually talk about publishers and consumers. A Flux/Mono is a publisher, and you declare a pipeline for what should happen when something is resolved. To kick off the process, the consumer needs to subscribe to the publisher.
Usually in a server world this means that the webpage that makes the request is the consumer, and it subscribes to the server, which in this case is the publisher.
So what I'm getting at is that you should almost never subscribe in your application, unless your application is the one that starts the consumption, for instance a cron job in your server that consumes another server.
Let's look at your problem:
You have not posted all your code, so I'm going to do some guesswork here, but I'm guessing you are getting a user from a database.
public Mono<BakerUserDetails> loadUserByUsername(String username) {
    Mono<User> user = userRepository.findByUsername(username);
    // Here we declare our pipeline; flatMap will map one object to another asynchronously
    Mono<BakerUserDetails> bakerUser = user.flatMap(u -> Mono.just(new BakerUserDetails(u)));
    return bakerUser;
}
I wrote this without a compiler, from the top of my head.
So don't pass in the Mono<T>; do your transformations using different operators like map or flatMap, and don't subscribe in your application unless your server is the final consumer.
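In Kotlin that would look roughly like the sketch below, assuming BakerUser exposes non-null username and password properties (an assumption, since the original class isn't shown):

import org.springframework.security.core.GrantedAuthority
import org.springframework.security.core.authority.SimpleGrantedAuthority
import org.springframework.security.core.userdetails.UserDetails

// The already-resolved user is passed in, so no block() is needed anywhere.
class BakerUserDetails(private val bakerUser: BakerUser) : UserDetails {
    override fun getUsername(): String = bakerUser.username
    override fun getPassword(): String = bakerUser.password
    override fun getAuthorities(): MutableCollection<out GrantedAuthority> =
        mutableSetOf(SimpleGrantedAuthority("USER"))
    override fun isAccountNonExpired() = true
    override fun isAccountNonLocked() = true
    override fun isCredentialsNonExpired() = true
    override fun isEnabled() = true
}

The reactive lookup then becomes a plain mapping, e.g. userRepository.findByUsername(username).map(::BakerUserDetails).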

Axonframework, how to use MessageDispatchInterceptor with reactive repository

I have read the set-based consistency validation blog and I want to validate through a dispatch interceptor. I followed the example, but I use a reactive repository and it doesn't really work for me. I have tried both blocking and not blocking: with block() it throws an error, and without block() it doesn't execute anything. Here is my code.
class SubnetCommandInterceptor : MessageDispatchInterceptor<CommandMessage<*>> {

    @Autowired
    private lateinit var privateNetworkRepository: PrivateNetworkRepository

    override fun handle(messages: List<CommandMessage<*>?>): BiFunction<Int, CommandMessage<*>, CommandMessage<*>> {
        return BiFunction<Int, CommandMessage<*>, CommandMessage<*>> { index: Int?, command: CommandMessage<*> ->
            if (CreateSubnetCommand::class.simpleName == command.payloadType.simpleName) {
                val interceptCommand = command.payload as CreateSubnetCommand
                privateNetworkRepository
                    .findById(interceptCommand.privateNetworkId)
                    // ..some validation logic here, e.g.
                    // .filter { network -> network.isSubnetOverlap() }
                    .switchIfEmpty(Mono.error(IllegalArgumentException("Requested subnet overlaps with the previous subnet.")))
                    // .block() also doesn't work here; it throws:
                    // block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-
            }
            command
        }
    }
}
Subscribing to a reactive repository inside a message dispatch interceptor is not really recommended and might lead to weird behavior, as the underlying ThreadLocal (used by Axon) is not adapted for reactive programming.
Instead, check out Axon's Reactor Extension and its reactive interceptors section.
For example, what you might do:
reactiveCommandGateway.registerDispatchInterceptor(
        cmdMono -> cmdMono.flatMap(cmd -> privateNetworkRepository
                .findById(cmd.privateNetworkId)
                .switchIfEmpty(Mono.error(
                        new IllegalArgumentException("Requested subnet overlaps with the previous subnet.")))
                .thenReturn(cmd)));
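Note that a dispatch interceptor has to emit the command message itself downstream, hence the final thenReturn(cmd); the repository lookup contributes only its error signal for the validation.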

Kotlin Flow in repository pattern

I would like to use a Flow as the return type for all functions in my repository. For example:
suspend fun create(item:T): Flow<Result<T>>
This function should call two data sources: remote (to save data on the server) and local (to save the data returned from the server locally). The question is how I can implement this scenario:
1. try to save data with RemoteDataSource
2. if 1. fails, try it N times with M timeout
3. if the data has finally returned from the server, save it locally with LocalDataSource
4. return a flow with the locally saved data
RemoteDataSource and LocalDataSource both have fun create with the same signature:
suspend fun create(item:T): Flow<Result<T>>
So they both return a flow of data. If you have any ideas about how to implement this, I will be grateful.
------ Update #1 ------
A part of a possible solution:
suspend fun create(item: T): Flow<T> {
    // save item remotely
    return remoteDataSource.create(item)
        // todo: call retry if it fails
        // save to local and merge the two flows into one
        .flatMapConcat { remoteData ->
            localDataSource.create(remoteData)
        }
        .map {
            // other mapping
        }
}
Is it a working idea?
I think you have the right idea, but you are trying to do everything at once.
What I found works best (and easily) is to have:
- an exposed flow of data coming from your local data source (easy with Room)
- one or more exposed suspend functions like create or refresh that operate on the remote data source and save to the local one (if there is no error)
For example, I have a repository that fetches vehicles in my project (the isCurrent info is local-only, and isLeft/isRight appears because I use Either, but any error handling applies):
class VehicleRepositoryImpl(
    private val localDataSource: LocalVehiclesDataSource,
    private val remoteDataSource: RemoteVehiclesDataSource
) : VehicleRepository {

    override val vehiclesFlow = localDataSource.vehicleListFlow
    override val currentVehicleFlow = localDataSource.currentVehicleFLow

    override suspend fun refresh() {
        remoteDataSource.getVehicles()
            .fold(
                ifLeft = { /* handle errors, retry, ... */ },
                ifRight = { reset(it) }
            )
    }

    private suspend fun reset(vehicles: List<VehicleEntity>) {
        val current = currentVehicleFlow.first()
        localDataSource.reset(vehicles)
        if (current != null) localDataSource.setCurrentVehicle(current)
    }

    override suspend fun setCurrentVehicle(vehicle: VehicleEntity) =
        localDataSource.setCurrentVehicle(vehicle)

    override suspend fun clear() = localDataSource.clear()
}
Hope this helps and you can adapt it to your case :)
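For the retry part of the original question, one possible shape (an untested sketch) is kotlinx.coroutines' retryWhen operator placed before the local save; MAX_RETRIES, RETRY_DELAY_MS and the DataSource interface below are stand-ins for the question's N, M and data sources:

import java.io.IOException
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flatMapConcat
import kotlinx.coroutines.flow.retryWhen

// Simplified stand-in for the question's data sources.
interface DataSource<T> {
    fun create(item: T): Flow<T>
}

private const val MAX_RETRIES = 3L        // the question's "N times"
private const val RETRY_DELAY_MS = 1_000L // the question's "M timeout"

@OptIn(ExperimentalCoroutinesApi::class)
fun <T> create(item: T, remote: DataSource<T>, local: DataSource<T>): Flow<T> =
    remote.create(item)
        // retry the remote call up to N times, waiting M ms between attempts
        .retryWhen { cause, attempt ->
            if (attempt < MAX_RETRIES && cause is IOException) {
                delay(RETRY_DELAY_MS)
                true
            } else {
                false
            }
        }
        // only data that actually reached the server is saved locally
        .flatMapConcat { remoteData -> local.create(remoteData) }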

Reactive programming - running jobs in a cluster

I need to run some jobs in a cluster, only one at a time.
Because my team uses Hazelcast, I ended up with a solution based on the Hazelcast ILock implementation. For the purpose of the question, I am going to make a generalisation about it. Let's suppose we have the following interfaces (that could easily be implemented e.g. by Hazelcast or Redisson (Redis)):
public interface MyDistributedLock {
    boolean lock();
    void unlock();
    boolean isLockedByCurrentThread();
}

public interface MyLockDistributedFactory {
    MyDistributedLock getLock(String name);
}
And a lock method that waits if the lock cannot be acquired:
private Mono<Void> lock(String name, Publisher<?> publisher, MyLockDistributedFactory myLockFactory) {
    // important to release the lock on the same thread as
    // it was acquired
    Scheduler scheduler = Schedulers.newSingle(name.toLowerCase());
    return Mono.defer(() -> Mono.just(myLockFactory.getLock(name)))
            .publishOn(scheduler)
            .doOnNext(MyDistributedLock::lock)
            .doOnNext(lock -> LOGGER.info("Process acquired lock for resource {}", name))
            .flatMapMany(lock -> Flux.from(publisher))
            .publishOn(scheduler)
            .doFinally(signalType -> {
                MyDistributedLock lock = myLockFactory.getLock(name);
                if (signalType == SignalType.CANCEL) {
                    // cancel ignores publishOn
                    scheduler.schedule(() -> {
                        lock.unlock();
                        LOGGER.info("Process released lock for resource {} due to signal type {}", name, signalType);
                    });
                } else if (lock.isLockedByCurrentThread()) {
                    lock.unlock();
                    LOGGER.info("Process released lock for resource {} due to signal type {}", name, signalType);
                }
            })
            .then();
}
And an example of some job:
private Mono<Void> someJobRunEveryOneHourOnEveryNodeInCluster() {
    MyLockDistributedFactory hazelcast = ...;
    return lock("some-job", Flux.just(1, 2), hazelcast)
            .repeatWhen(afterOneHour());
}
I wonder whether this is a good approach to using Project Reactor (and a correct implementation), or whether it should be done in a different way. Please advise.
It is a correct approach when using Reactor, because you took care of offsetting the blocking portion onto a dedicated Scheduler/Thread.
But I'd say mutually exclusive code like this is not a very good fit for reactive programming in general: you lose one of the key benefits of doing more with fewer threads, you risk blocking other parts of the application should you forget to publishOn a dedicated thread, and so on.
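If acquisition and release are kept on a dedicated single-threaded scheduler as above, the acquire/release pairing can also be expressed with Mono.usingWhen, which runs the same cleanup on complete, error and cancel; a Kotlin sketch reusing the interfaces from the question:

import org.reactivestreams.Publisher
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers

fun lock(name: String, publisher: Publisher<*>, factory: MyLockDistributedFactory): Mono<Void> {
    // single thread, so unlock() runs on the same thread that called lock()
    val scheduler = Schedulers.newSingle(name.lowercase())
    return Mono.usingWhen(
        // acquire: the blocking lock() call is confined to the dedicated thread
        Mono.fromCallable { factory.getLock(name).also { it.lock() } }
            .subscribeOn(scheduler),
        // use: run the actual work while the lock is held
        { Flux.from(publisher).then() },
        // release: the same cleanup runs for complete, error and cancel
        { lock -> Mono.fromRunnable<Void> { lock.unlock() }.subscribeOn(scheduler) }
    )
}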

Glassfish - JEE6 - Use of Interceptor to measure performance

For measuring execution time of methods, I've seen suggestions to use
public class PerformanceInterceptor {

    @AroundInvoke
    Object measureTime(InvocationContext ctx) throws Exception {
        long beforeTime = System.currentTimeMillis();
        Object obj = null;
        try {
            obj = ctx.proceed();
            return obj;
        } finally {
            long time = System.currentTimeMillis() - beforeTime;
            // Log time
        }
    }
}
Then put
@Interceptors(PerformanceInterceptor.class)
before whatever method you want measured.
Anyway I tried this and it seems to work fine.
I also added a
public static long countCalls = 0;
to the PerformanceInterceptor class and a
countCalls++;
to measureTime(), which also seems to work OK.
With my newbie hat on, I will ask if my use of countCalls is OK, i.e. whether Glassfish/JEE6 is OK with me using static variables in a Java class that is used as an interceptor, in particular with regard to thread safety. I know that normally you are supposed to synchronize setting of class variables in Java, but I don't know what the case is with JEE6/Glassfish. Any thoughts?
There is no additional thread safety provided by the container in this case. Each bean instance has its own instance of the interceptor. As a consequence, multiple threads can access the static countCalls at the same time.
That's why you have to guard both reads and writes to it as usual. Another possibility is to use an AtomicLong:
private static final AtomicLong callCount = new AtomicLong();

private long getCallCount() {
    return callCount.get();
}

private void increaseCountCall() {
    callCount.getAndIncrement();
}
As expected, these solutions will only work as long as all of the instances are in the same JVM; for a cluster, shared storage is needed.