I have to upgrade an app that uses an old version of Hazelcast to one of the more recent versions. Some Hazelcast lock functionality has since been deprecated and removed from the API altogether. In particular, the old lock functionality worked like this:
Hazelcast.getLock(myString);
The getLock function was a static method on Hazelcast. Now it is to be replaced with something like:
hazelcastInstance.getLock(myString);
...where the lock comes from one of the instances in the cluster.
My question is, can I use any one of the instances in the hazelcast cluster to get the lock? And if so, will this lock all instances?
Q1: Yes, you can use any one of the instances in the Hazelcast cluster to get the lock (ILock).
You can think of ILock in the Hazelcast framework as the distributed implementation of java.util.concurrent.locks.Lock. For more details, please see
http://docs.hazelcast.org/docs/3.5/javadoc/com/hazelcast/core/ILock.html
Q2: If you guard the critical section with an ILock, that critical section is guaranteed to be executed by only one thread in the entire cluster at any given point in time. So once lock() is called by, say, Thread1 on one of the nodes, other threads (on other nodes as well) will wait until the lock is released.
Sample code:
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock testLock = hazelcastInstance.getLock("testLock");
testLock.lock();
try {
    // critical section code.
} finally {
    testLock.unlock();
}
Project Reactor is awesome; I can easily switch processing of some parts to another thread. But I've looked inside the Schedulers.fromExecutorService() method, and it allocates a new ExecutorService every time. So every time that method is called, schedulers are created and allocated again. I'm not sure, but I think this is a potential memory leak...
Mono<String> sometext() {
    return Mono
        .fromCallable(() -> "")
        .subscribeOn(Schedulers.newParallel("my-custom"));
}
I'm wondering about registering the Scheduler as a bean (it's a singleton, so it would be allocated only once rather than every time), or creating it in the constructor. Many blogs explain the threading model this way.
...
private final Scheduler scheduler = Schedulers.newParallel("my-custom");
..
Mono.fromCallable(() -> "" ).subscribeOn(scheduler)
Schedulers.newParallel() will indeed create a new scheduler with an associated backing thread pool every time you call it - so yes, you're correct: if you're using that method, you want to make sure you store a reference to it somewhere so you can reuse it. Simply providing the same name argument won't retrieve the existing scheduler; it'll just create a different one with the same name.
How you do this is up to you - it can be via a Spring bean (as long as it's a singleton and not a prototype bean!), a field, or whatever else fits best with your use case.
However, before all of this I'd first consider whether you definitely need to create a separate parallel scheduler at all. Schedulers.parallel() is a default parallel scheduler that can be used for parallel work out of the box (it doesn't create a new scheduler on each invocation), and unless you need separately configured parallel schedulers for separate services for some reason, best practice is just to use that.
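For illustration, here is a minimal sketch of the "create once, reuse everywhere" approach, assuming Spring is available; the bean name, pool name and service class are made up for the example:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

@Configuration
class SchedulerConfig {

    // Singleton by default, so the parallel scheduler (and its thread pool)
    // is created exactly once; dispose() releases the threads on shutdown.
    @Bean(destroyMethod = "dispose")
    Scheduler myCustomScheduler() {
        return Schedulers.newParallel("my-custom");
    }
}

class SomeService {

    private final Scheduler scheduler;

    SomeService(Scheduler myCustomScheduler) {
        this.scheduler = myCustomScheduler;
    }

    Mono<String> sometext() {
        return Mono
            .fromCallable(() -> "")
            .subscribeOn(scheduler); // reuses the single scheduler; or just Schedulers.parallel()
    }
}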
If I push a Runnable to a Redisson distributed executor service, what rules am I required to abide by?
Surely I can't have free rein; I don't see how that would be possible. Yet it isn't mentioned in the docs at all, nor are any rules apparently enforced by the API, like R extends Serializable or similar.
If I pass this runnable:
Runnable task = () -> {
    // What can I access here, and have it be recreated in whatever server instance picks it up later for execution?
    // newlyCreatedInstanceCreatedJustBeforeThisRunnableWasCreated.isAccessible(); // ?
    // newlyComplexInstanceSuchAsADatabaseDriverThatIsAccessedHere.isAccessible(); // ?
    // transactionalHibernateEntityContainingStaticReferencesToComplexObjects....
    // I think you get the point.
    // Does Redisson serialize everything within this scope?
    // When it is recreated later, surely I can't have access to those exact objects, unless they run on the same server, right?
    // If the server goes down and comes back up, or another server executes this runnable, then what happens?
    // What rules do we have to abide by here?
};
Also, what rules do we have to abide by when pushing something to an RQueue, an RBlockingDeque, or Redisson live objects?
It is not clear from the docs.
Also, it would be great if a link to a single-page documentation site could be provided. The one here requires a lot of clicking and navigation:
https://github.com/redisson/redisson/wiki/Table-of-Content
https://github.com/redisson/redisson/wiki/9.-distributed-services#933-distributed-executor-service-tasks
You have access to the RedissonClient and the taskId. The full state of the task object will be serialized.
A TaskRetry setting is applied to each task: if a task isn't executed within 5 minutes of being started, it will be requeued.
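To make that concrete, here is a rough sketch of a task written under those rules; the class name, map name and executor name are hypothetical, and @RInject is used to obtain the RedissonClient on the executing node:
import java.io.Serializable;
import java.util.concurrent.Callable;
import org.redisson.api.RedissonClient;
import org.redisson.api.annotation.RInject;

// The whole field state of the task travels with it, so keep only simple,
// serializable values as fields and re-acquire heavyweight resources
// (DB connections, entity managers, drivers) inside call().
public class MapSizeTask implements Callable<Long>, Serializable {

    private final String mapName; // plain serializable state, shipped with the task

    @RInject
    private transient RedissonClient redisson; // injected on whichever node runs the task

    public MapSizeTask(String mapName) {
        this.mapName = mapName;
    }

    @Override
    public Long call() {
        // Anything non-serializable must be created here, on the executing node,
        // not captured from the JVM that submitted the task.
        return (long) redisson.getMap(mapName).size();
    }
}
It would then be submitted with something like redisson.getExecutorService("myExecutor").submit(new MapSizeTask("words"));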
I agree that the documentation is lacking some "under the hood" explanations.
I was able to execute DB reads and inserts through the Callable/Runnable that was submitted to the remote ExecutorService.
I configured a single Redis instance on a remote VM, with the database and the app running locally on my laptop.
The tasks were executed without any errors.
I have used hibernate-search, where I have annotated the domain classes with @Indexed, @Field and many more.
My project is based on multiple microservices, such as:
Search Service - which, upon starting, reads the data from the DB and creates the indexes (code below):
@Transactional(readOnly = true)
public void initializeHibernateSearch() {
    try {
        FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(entityManager);
        fullTextEntityManager.createIndexer().startAndWait();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
Microservice 1 - which updates the index when any domain object is inserted or updated.
The issue I am facing is that when Search Service starts and creates the indexes, it takes the lock on the index files and never releases it.
When Microservice 1 then tries to update the index upon an insert or update, it throws the exception below:
org.apache.lucene.store.LockObtainFailedException: Lock held by another program: /root/data/index/default/Event/write.lock
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:118)
at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:776)
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126)
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92)
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117)
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46)
at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:166)
at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:152)
at java.lang.Thread.run(Thread.java:748)
Could you please let me know the right approach to using hibernate-search in a microservices architecture?
There are several options. My recommendation is the first one, as it best fits the architectural spirit of microservices.
Each micro service has an independent index: don't share the index directory.
Use the master/slave architecture described in the docs, so as to have a single service write to the index - the other services will have to delegate to the single writer. Indexes can be replicated over a network-based filesystem, or using Infinispan.
Disable exclusive_index_use, a configuration property (also described in the docs). I'm listing this for completeness, but it is generally a bad idea: it's going to be much slower, and it becomes your responsibility to ensure that one service busy writing doesn't time out another service needing to write. A minimal config sketch follows below.
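For that third option, a minimal sketch, assuming Hibernate Search 5.x property names and a hypothetical persistence unit called "myPU":
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

Map<String, Object> props = new HashMap<>();
// Release the Lucene IndexWriter lock after each work set instead of holding
// it for the lifetime of the service; slower, and concurrent writers can
// still time each other out under load.
props.put("hibernate.search.default.exclusive_index_use", "false");

EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU", props);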
Using Geode 1.2 and the 9.1 Pivotal native client, the following code:
IRegion<string, IPdxInstance> r = cache.GetRegion<string, IPdxInstance>("myRegion");
return r[key];
then triggers an AfterCreate event for myRegion. Why does that happen when no data is created, only read?
Same here, I've never used the native client. I agree with what @Urizen suspected: you are calling r[key] from a Geode instance that doesn't have the entry, so it pulls the data from another instance, which "creates" the entry locally.
You have a few options here:
Perform an interest registration for the instance initiating the call, using registerAllKeys() (see the docs). There is a catch (it might not be applicable to the native client): in the Java API you have the option to register interest with an InterestResultPolicy. If you use KEYS_VALUES, you will load all data from remote to local on startup WITHOUT triggering the afterCreate callback. If you choose KEYS only or NONE, you will likely have a similar problem.
You can check the boolean remoteOrigin flag on the EntryEvent. If it is false, the operation is purely local. In a non-WAN setup this should be enough to distinguish your local operation from a remotely initiated one (be it cache syncing or a genuine creation initiated by another cache); if I remember correctly, WAN works a bit differently here. A listener sketch is shown below.
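As a rough Java-side sketch (the .NET native client API may differ), the check could look like this, with a made-up listener class name:
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class OriginAwareListener extends CacheListenerAdapter<String, Object> {

    @Override
    public void afterCreate(EntryEvent<String, Object> event) {
        // isOriginRemote() is true when the create merely reflects a value
        // pulled in from another member/server; false for a purely local create.
        if (event.isOriginRemote()) {
            return; // ignore creates caused by fetching/syncing from elsewhere
        }
        // handle genuine locally initiated creates here
    }
}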
I've never used the native client but, at first glance, it should be expected for the afterCreate event to be invoked on the client side, as the entry is actually being created in the local cache. What I mean is that the entry might exist on the server but, internally, the client needs to retrieve it from the server and then create it locally (thus invoking afterCreate on the locally installed CacheListener). Makes sense?
1. Which category of the CAP theorem does Ignite fall under?
2. While doing a loadCache using a client against multiple servers, if the client goes down after loadCache has been called, will the operation complete on the servers? (I was unable to try it due to some permission restrictions.)
Ignite guarantees data consistency. If the cluster is segmented into two parts, they can't be merged back; one of the parts has to be considered invalid and restarted.
Most likely the data will not be fully loaded in this case; the loading process should be restarted.
For the first question: the CacheAtomicityMode enum has two values (see the configuration sketch below):
TRANSACTIONAL: if you configure this in your CacheConfiguration, then your application is CP.
ATOMIC: if you configure this in your CacheConfiguration, then your application is AP; ATOMIC is the default value.
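A minimal configuration sketch, assuming a hypothetical cache named "myCache":
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

// TRANSACTIONAL gives the consistency (CP-style) guarantees described above;
// ATOMIC, the default, trades them for throughput.
CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

Ignite ignite = Ignition.start();
ignite.getOrCreateCache(cacheCfg);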
For the second question: if your client is embedded in your application and loadCache fails for some reason, a CacheLoaderException will be thrown. If you want a custom CacheStore, you can extend CacheStoreAdapter and override its methods (a bare-bones sketch follows below).
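A bare-bones sketch of such a store, with made-up key/value types and the database calls left as comments:
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;

public class MyCacheStore extends CacheStoreAdapter<Long, String> {

    @Override
    public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
        // Bulk load invoked when IgniteCache.loadCache() is called: read rows
        // from the backing database and push them into the cache.
        // e.g. for each row: clo.apply(id, value);
    }

    @Override
    public String load(Long key) throws CacheLoaderException {
        // Read-through for a single key; wrap DB failures in CacheLoaderException.
        return null; // e.g. SELECT value FROM table WHERE id = key
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        // Write-through: persist entry.getKey() / entry.getValue().
    }

    @Override
    public void delete(Object key) {
        // Remove the row for the given key.
    }
}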