Geode DUnit Inter-VM communication - locking

I am implementing Geode DUnit-based tests. Each VM executes a Callable asynchronously. The logic consists of several stages, between which the VMs need to be synced up. It is not possible to separate the stages into several different Callables because some variables need to be persisted between stages.
Currently the VMs sleep after each stage, and this is how they are kept in sync. However, I am looking for another option that would allow execution without sleeping (semaphore based).
Is there a way to have a shared resource between the VMs that would allow them to sync up, or perhaps some Geode-based mechanism that would allow such orchestration of the VMs?
BR
Yulian Oifa

Geode's internal testing framework does this in several places, actually. I'd suggest having a look at the geode-dunit project for examples, especially at the Blackboard Java class.
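For example, here is a minimal sketch of gate-based syncing between stages, assuming the DUnitBlackboard API from the geode-dunit sources (class, package, and method names are from memory, so verify them against your Geode version):

```java
import java.util.concurrent.TimeUnit;

import org.apache.geode.test.dunit.DUnitBlackboard;

public class StagedDUnitTest {
  // The blackboard is backed by the dunit locator, so gates and mailboxes
  // signalled in one VM are visible to all the others.
  private static DUnitBlackboard blackboard;

  private static DUnitBlackboard getBlackboard() {
    if (blackboard == null) {
      blackboard = new DUnitBlackboard();
    }
    return blackboard;
  }

  // Called inside each VM's Callable between stage 1 and stage 2, so local
  // variables survive: we never leave the Callable.
  static void syncAfterStageOne(int myVmId, int vmCount) throws Exception {
    getBlackboard().signalGate("stage1-done-vm" + myVmId);
    for (int vm = 0; vm < vmCount; vm++) {
      if (vm != myVmId) {
        getBlackboard().waitForGate("stage1-done-vm" + vm, 60, TimeUnit.SECONDS);
      }
    }
    // ... all VMs have finished stage 1; proceed to stage 2 ...
  }
}
```

The blackboard also has mailboxes (setMailbox/getMailbox) if you need to pass values between VMs rather than just synchronize them.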
Cheers.

Related

Any downside of running multiple hosted services within a .Net Core Windows Service?

Currently, we have a .Net Framework 4.7-based Windows service that we install through an MSI built using WiX. During installation, we register multiple Windows services for the same exe, the difference being the arguments passed to each service; it looks like Myapp.exe -instance 1, Myapp.exe -instance 2, and so on. Each instance uses a different configuration based on the instance number and polls a different IBM MQ queue to process messages. We install around 14 such instances.
Now that we are looking to migrate to .Net Core, we are wondering whether it is worth changing this deployment model and instead moving to multiple instances of hosted services. With this, we would simply register the hosted service multiple times, but with different constructor parameters. So I am trying to understand the potential downsides of this approach. So far, I can think of a couple of them:
Since these run as independent processes, we currently have the ability to stop/start a specific instance of the Windows service. We would potentially lose that ability.
Since these run as independent processes, we can easily identify a memory spike in a specific instance of the Windows service, so for troubleshooting we can focus on just that instance. With a single executable, we lose this ability as well.
Apart from these, what other potential pitfalls might I come across with this approach?
Also, for the above two points, is there any workaround when using multiple hosted services?
I'm not sure specifically about Windows services, but I had the same question for microservices. I think in general there isn't much in it either way, but some things to consider:
All services go down if you need to deploy a new one (but if they are all the same, you are more likely to update all of them at the same time)
Coordinating between them (if necessary) might be easier (locks, transactions, etc.) if they are together, but that might likewise allow you to do things that break encapsulation, simply because you can.
They would all start and stop at the same time in a single service; if you want to control them separately, you will need either an external enable/disable mechanism or separate Windows services.
If you ever need to separate them, e.g. onto separate machines, you will have to do the risky work of separating them later.
It sounds like they are largely identical, just targeting different data, so I can't think of anything that would be a problem.

Is there a way to allocate different instances of the same process to different Camunda instances (workstations)?

Since there will be more requests for my Camunda process than my PC can handle, I'm looking for a way to allocate some of those requests to another Camunda instance once mine can't handle any more.
Camunda nodes do not keep state (apart from user session info). In a cluster, they synchronize via the database. To distribute the load, you can simply start additional environments that connect to the same database.
Please also see: https://docs.camunda.org/manual/latest/introduction/architecture/
You can configure a homogeneous cluster, where all nodes are the same, or a heterogeneous cluster, where the deployment differs between nodes. Nodes can be made deployment-aware (a flag set to true), which means they can be configured to handle only the workloads intended for them, i.e. those for which they have the necessary deployment (classes, libs, etc.) and system resources.
(Diagrams: homogeneous setup; heterogeneous setup.)
The job executor is key here. Please see: https://docs.camunda.org/manual/latest/user-guide/process-engine/the-job-executor/
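As a rough sketch (assuming Camunda 7's embedded-engine Java API; the JDBC settings below are placeholders), a node in a heterogeneous cluster would point at the shared database and enable deployment-aware job execution, so its job executor only acquires jobs for deployments it actually has:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration;

public class ClusterNodeBootstrap {
  public static ProcessEngine buildEngine() {
    StandaloneProcessEngineConfiguration config = new StandaloneProcessEngineConfiguration();

    // Every node in the cluster points at the same shared database.
    config.setJdbcUrl("jdbc:postgresql://db-host:5432/camunda"); // placeholder
    config.setJdbcUsername("camunda");                           // placeholder
    config.setJdbcPassword("secret");                            // placeholder

    // A deployment-aware job executor only acquires jobs that belong to
    // deployments registered with this engine, so a heterogeneous node only
    // handles the workloads it has the classes/resources for.
    config.setJobExecutorDeploymentAware(true);
    config.setJobExecutorActivate(true);

    return config.buildProcessEngine();
  }
}
```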

.Net Core Hosted Services in a Load Balanced Environment

We are developing a Web API using .Net Core, and to perform background tasks we have used hosted services.
The system is hosted in an AWS Elastic Beanstalk environment behind a load balancer, so based on the load, Beanstalk creates/removes instances of the system.
Our problem is this: since the background services also run inside the API, when the load balancer adds instances, the number of background services increases as well, and the same task may be executed multiple times. Ideally, there should be only one instance of the background services.
One way to tackle this is to stop executing the background services in the load-balanced environment and have a dedicated, non-load-balanced, single-instance environment that runs only the background services.
That is a bit of an ugly solution. So:
1) Is there a better solution for this?
2) Is there a way to identify the primary instance in a load-balanced environment? If so, I can conditionally register the hosted services.
Any help is really appreciated.
Thanks
I am facing the same scenario and am thinking of implementing a custom service architecture that runs normally on all of the instances but takes advantage of a pub/sub broker and a distributed memory service, so those small services can contact each other and coordinate what is to be done. It's complicated to develop, yes, but a very robust solution IMO.
You'll "have to" use a distributed "lock" system. For example, you could use a distributed memory cache that takes a lock when someone (a node of your cluster) is working on a background job. If another node tries to do the same job, it will be blocked by the first lock until the work is done.
What I mean is, if your nodes don't have some kind of "sync handler", you can't handle this kind of situation. It could be an SQL application lock, a distributed memory cache, or something else.
There is something called a Mutex, but even that won't control this in a multi-instance environment. However, there are ways to control it to some level (maybe even 100%). One way would be to keep a tracker in the database: e.g., if the job has to run daily, then before starting the job in the background service you could query the database for an entry for today; if there is none, insert one and start the job.
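The database-tracker idea is language-agnostic; here is a minimal sketch in Java/JDBC (the job_run_log table, its key, and the method names are assumptions for illustration) using an insert-first pattern so the claim is atomic even when several instances race:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.time.LocalDate;

public class DailyJobGuard {
  // Assumed schema:
  //   CREATE TABLE job_run_log (job_name VARCHAR(100) NOT NULL,
  //                             run_date DATE NOT NULL,
  //                             PRIMARY KEY (job_name, run_date));
  // Inserting first (rather than SELECT-then-INSERT) makes the claim atomic:
  // the primary key guarantees that exactly one instance wins a race.
  static boolean tryClaimTodaysRun(Connection conn, String jobName) {
    String sql = "INSERT INTO job_run_log (job_name, run_date) VALUES (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, jobName);
      ps.setObject(2, LocalDate.now());
      ps.executeUpdate();
      return true;  // row inserted: this instance runs today's job
    } catch (SQLException e) {
      // In production, inspect the vendor's duplicate-key error code instead
      // of treating every SQLException as "someone else already claimed it".
      return false;
    }
  }
}
```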

Can multiple independent applications using Redisson share the same clustered Redis?

I would like to ask whether there will be any contention issues due to shared access to the same Redis cluster by multiple separate applications that use the Redisson library (and each application in turn has several instances of itself).
Does the Redisson library support such a use case? Or do I need to configure Redisson in each application, for example by adding some kind of prefix or app name (as is possible with Quartz, where you can define prefixes for the tables used by separate applications that access the same database and use Quartz independently)?
Won't tasks submitted to an ExecutorService in one app be forwarded to a completely different application that also uses Redisson, rather than to another instance of the same application?
I would recommend using a prefix/suffix in Redisson's object names when you share the same clustered Redis setup across multiple independent applications.
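For example (a sketch; the prefix is purely a naming convention you enforce yourself, not a Redisson feature, and the names below are made up), giving every named object, including the distributed ExecutorService, an application-specific prefix keeps the applications from seeing each other's data or tasks:

```java
import org.redisson.Redisson;
import org.redisson.api.RExecutorService;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class AppA {
  private static final String PREFIX = "appA:"; // chosen per application

  public static void main(String[] args) {
    Config config = new Config();
    config.useClusterServers().addNodeAddress("redis://127.0.0.1:7000"); // placeholder
    RedissonClient redisson = Redisson.create(config);

    // Each application only ever touches keys under its own prefix.
    RMap<String, String> cache = redisson.getMap(PREFIX + "cache");
    cache.put("k", "v");

    // Tasks submitted here land on the "appA:workers" queue, so only workers
    // registered against that same name (instances of this app) execute them.
    RExecutorService executor = redisson.getExecutorService(PREFIX + "workers");
    // executor.submit(new MyTask()); // task classes must be Serializable

    redisson.shutdown();
  }
}
```

This directly answers the ExecutorService concern: tasks are queued under the executor's name, so as long as each application uses its own names, they won't pick up each other's work.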

WebLogic WorkManager clustering/remote jobs

Does the WebLogic WorkManager have the ability to execute jobs on other servers in the cluster, to effectively parallelize jobs?
There are two Work Managers: one on the server side that handles thread prioritization/queueing, and the CommonJ Work Manager, which can be used through the CommonJ API.
Within your application, you can define priorities within the container and also pursue parallel execution on the same server. However, if you are looking to process a workload in parallel across multiple servers, by having a single application server split its current workload and redistribute it across the cluster, the bulk of that logic will have to be written into your application.
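For the CommonJ route, here is a minimal sketch (the JNDI resource name wm/MyWorkManager is an assumption; the work manager itself would be declared in your deployment descriptors) that schedules Work items and waits for them, all on the same server:

```java
import java.util.ArrayList;
import java.util.List;

import javax.naming.InitialContext;

import commonj.work.Work;
import commonj.work.WorkItem;
import commonj.work.WorkManager;

public class ParallelOnOneServer {
  public void runChunksInParallel(List<Runnable> chunks) throws Exception {
    // Resource-ref name is a placeholder; the work manager is defined in
    // weblogic.xml / web.xml (or the EJB equivalents).
    InitialContext ctx = new InitialContext();
    WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/MyWorkManager");

    List<WorkItem> items = new ArrayList<>();
    for (Runnable chunk : chunks) {
      items.add(wm.schedule(new Work() {
        public void run() { chunk.run(); }
        public boolean isDaemon() { return false; }
        public void release() { /* cooperative-cancellation hook */ }
      }));
    }

    // Block until the container has run every scheduled unit of work.
    wm.waitForAll(items, WorkManager.INDEFINITE);
  }
}
```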
WebLogic does provide other mechanisms to make cross-server distribution easier (for example, you could have a primary node split the workload into units of work and put them on a durable distributed topic that the other servers read from), but it would be easier to use an existing product, such as Terracotta's EhCache or a compute cluster on Oracle's Coherence Grid.