Clients for multiple clusters - Ignite

What would be the best way for an application (Ignite client) to connect to
multiple clusters?
For now, I can think of creating multiple Ignite client instances with different configurations within a single Java application.

The approach you describe is sound. There is no alternative to having multiple clients, (at least) one per cluster.
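For illustration, a minimal sketch using Ignite's thin client (a thick client started with setClientMode(true) works the same way); the cluster addresses and cache names below are placeholders, not anything from the original question:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

// Sketch: one thin-client instance per cluster, each with its own
// configuration. Addresses and cache names are placeholders.
public class MultiClusterClients {
    public static void main(String[] args) {
        ClientConfiguration cfgA = new ClientConfiguration()
                .setAddresses("cluster-a-host:10800");
        ClientConfiguration cfgB = new ClientConfiguration()
                .setAddresses("cluster-b-host:10800");

        try (IgniteClient clusterA = Ignition.startClient(cfgA);
             IgniteClient clusterB = Ignition.startClient(cfgB)) {
            // Each client talks only to its own cluster.
            clusterA.getOrCreateCache("cacheOnA").put(1, "a");
            clusterB.getOrCreateCache("cacheOnB").put(1, "b");
        }
    }
}
```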

Related

About container scalability in a microservice architecture

A simple question about scalability. I have been studying scalability and I think I understand the basic concept behind it. You use an orchestrator like Kubernetes to manage the automatic scaling of a system, so that as a particular microservice sees increased call demand, the orchestrator creates new instances of it to handle the load. Now, in our case, we are building a microservice structure similar to the example in Microsoft's "eShopOnContainers":
Now, here each microservice has its own database to manage, just like in our application. My question is: when upscaling this system by creating new instances of a certain microservice, say the "Ordering" microservice in the example above, wouldn't that create a new set of databases? In the case of our application, we are using SQLite, so each microservice has its own copy of the database. I would assume that in order to upscale such a system, each microservice would need to connect to an external SQL Server. But if that were the case, wouldn't that be a bottleneck? I mean, having multiple instances of a microservice to handle more demand for a particular service, but with all those instances still accessing a single database server?
In the case of our application, we are using SQLite, so each microservice has its own copy of the database.
One of the most important aspects of services that scale out is that they are stateless; services on Kubernetes should be designed according to the 12-factor principles. This means that service instances cannot have their own copy of the database, unless it is a cache.
I would assume that in order to upscale such a system, each microservice would need to connect to an external SQL Server.
Yes, if you want to be able to scale out, you need a database that lives outside the instances and is shared between them.
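As a minimal sketch of what that looks like in code (the environment variable names and the JDBC URL are illustrative assumptions, not from the question):

```java
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: every instance of the ordering service connects to the same
// external SQL Server instead of opening a local SQLite file. URL and
// credentials are placeholders, typically injected via the environment.
public class OrderingDataSource {
    public static Connection open() throws Exception {
        String url = System.getenv("ORDERING_DB_URL");
        // e.g. "jdbc:sqlserver://orders-db.internal:1433;databaseName=ordering"
        return DriverManager.getConnection(url,
                System.getenv("DB_USER"), System.getenv("DB_PASSWORD"));
    }
}
```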
But if that were the case, wouldn't that be a bottleneck?
This depends very much on how you design your system. Comparing microservices to monoliths: a monolith typically uses one big database, whereas with microservices it is easier to use multiple different databases, so it should be much easier to scale out the database this way.
I mean, having multiple instances of a microservice to handle more demand for a particular service, but with all those instances still accessing a single database server?
There are many ways to scale a database system as well, e.g. caching read operations (but be careful). But this is a large topic in itself and depends very much on what you do and how you do it.
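As one hedged example of what "caching read operations (but be careful)" can mean, here is a cache-aside sketch; the "careful" part is that every write must invalidate or update the cached entry:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: serve repeated reads from memory, hit the database
// only on a miss. Writes must invalidate the entry, or readers see stale data.
public class ProductCache {
    private final Map<Long, String> cache = new ConcurrentHashMap<>();

    public String findName(long id) {
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }

    public void updateName(long id, String name) {
        writeToDatabase(id, name);
        cache.remove(id); // invalidate so the next read reloads fresh data
    }

    private String loadFromDatabase(long id) { /* SELECT ... */ return "name-" + id; }
    private void writeToDatabase(long id, String name) { /* UPDATE ... */ }
}
```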

Geode DUnit Inter-VM communication

I am implementing Geode DUnit-based tests. Each VM executes a Callable asynchronously. The logic has several steps, between which the VMs need to be synced up. It is not possible to separate them into several different Callables because some variables need to be persisted between stages.
Currently the VMs sleep after each stage, and this is how they are synced. However, I am looking for another option that would allow execution without sleep (semaphore-based).
Is there an option to have a shared resource between VMs that would allow the VMs to sync up, or maybe some Geode-based mechanism that would allow such orchestration of VMs?
Geode's internal testing framework does this in several places, actually. I'd suggest having a look at the geode-dunit project for examples, especially at the Blackboard Java class.
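As a rough sketch of the pattern (the gate names are invented here, and the exact DUnitBlackboard/VM signatures may differ between Geode versions, so treat the geode-dunit sources as authoritative):

```java
import java.util.concurrent.TimeUnit;
import org.apache.geode.test.dunit.AsyncInvocation;
import org.apache.geode.test.dunit.DUnitBlackboard;
import org.apache.geode.test.dunit.VM;

// Sketch: two DUnit VMs run multi-stage logic in a single async invocation
// (so local variables survive across stages) and rendezvous on named gates
// instead of sleeping.
public class StageSyncExampleTest {
  private static DUnitBlackboard blackboard;

  private static DUnitBlackboard getBlackboard() {
    // Each VM creates its own handle; the state behind it is shared.
    if (blackboard == null) {
      blackboard = new DUnitBlackboard();
    }
    return blackboard;
  }

  public void bothVMsRunStagesInLockstep() throws Exception {
    VM vm0 = VM.getVM(0);
    VM vm1 = VM.getVM(1);

    AsyncInvocation<?> async0 = vm0.invokeAsync(() -> {
      // ... stage 1 work; local state stays in scope for stage 2 ...
      getBlackboard().signalGate("vm0-stage1");
      getBlackboard().waitForGate("vm1-stage1", 60, TimeUnit.SECONDS);
      // ... stage 2 work ...
    });

    AsyncInvocation<?> async1 = vm1.invokeAsync(() -> {
      // ... stage 1 work ...
      getBlackboard().signalGate("vm1-stage1");
      getBlackboard().waitForGate("vm0-stage1", 60, TimeUnit.SECONDS);
      // ... stage 2 work ...
    });

    async0.await();
    async1.await();
  }
}
```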

Can multiple independent applications using Redisson share the same clustered Redis?

So I would like to ask whether there will be any contention issues due to shared access to the same Redis cluster by multiple separate applications that use the Redisson library (each application in turn has several instances of itself).
Does the Redisson library support such a use case? Or do I need to configure Redisson in each application, for example by adding some kind of prefix or app name (as is possible with Quartz, where you can define prefixes for the tables used by separate applications that access the same database and use Quartz independently)?
Won't the tasks submitted to an ExecutorService in one app be forwarded to a completely different application that also uses Redisson, rather than to another instance of the same application?
I would recommend using a prefix/suffix in Redisson's object names when you share the same Redis setup in cluster mode across multiple independent applications.
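A minimal sketch of that convention (the "appA:" prefix and the node address are made up for the example):

```java
import org.redisson.Redisson;
import org.redisson.api.RExecutorService;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

// Sketch: independent applications sharing one Redis cluster, separated
// purely by naming convention. Prefix and address are illustrative.
public class PrefixedNames {
  public static void main(String[] args) {
    Config config = new Config();
    config.useClusterServers().addNodeAddress("redis://127.0.0.1:7000");
    RedissonClient redisson = Redisson.create(config);

    // Application A keeps every object name under its own prefix...
    RMap<String, String> sessions = redisson.getMap("appA:sessions");
    RExecutorService executor = redisson.getExecutorService("appA:executor");

    // ...and only instances of application A register workers on
    // "appA:executor", so its tasks never run inside application B,
    // which would use "appB:..." names for its own objects and executors.

    redisson.shutdown();
  }
}
```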

Is there a way to fully separate two Redis databases for pub/sub usage?

Scenario: Two instances of an application share the same Redis instance, but use different databases. The application makes use of the Redis pub/sub functions to exchange data between services.
Problem: When application instance A publishes something (on Redis database 1), application instance B (running on Redis database 2) receives the message.
Expectation: As both instances of the application use a different database, I would expect not only the keys in Redis to be held separately, but the pub/sub subscriptions as well.
Question: Can I tell Redis to keep pub/sub separate for each database?
No - PubSub is shared across all clients connected to the server, regardless of their currently SELECTed database (shared database/numbered database/keyspace). While you can use different channels and such, real separation is possible only by using two Redis instances.
Note: using shared/numbered databases isn't recommended - always use dedicated Redis instances per app/service/use case.
As https://redis.io/docs/manual/pubsub/#database--scoping suggests: "If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production...)."
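A small sketch of that channel-prefix scoping using Jedis (the prefix and server address are placeholders; derive the prefix from your deployment configuration):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Sketch: both application instances connect to the same Redis server,
// but scope their channels with an environment prefix.
public class ScopedPubSub {
  public static void main(String[] args) throws Exception {
    String prefix = "staging:";

    // Subscriber thread: only listens on its own environment's channel.
    new Thread(() -> {
      try (Jedis subscriber = new Jedis("127.0.0.1", 6379)) {
        subscriber.subscribe(new JedisPubSub() {
          @Override
          public void onMessage(String channel, String message) {
            System.out.println(channel + " -> " + message);
          }
        }, prefix + "events"); // blocks for the life of the subscription
      }
    }).start();

    Thread.sleep(300); // crude pause so the subscription is in place (demo only)

    // A publisher on "production:events" would never reach the subscriber above.
    try (Jedis publisher = new Jedis("127.0.0.1", 6379)) {
      publisher.publish(prefix + "events", "hello");
    }
  }
}
```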

WebLogic WorkManager clustering/remote jobs

Does the WebLogic WorkManager have the ability to execute jobs on other servers in the cluster to effectively parallelize jobs?
There are two Work Managers: one on the server side that handles thread prioritization/queueing, and the CommonJ Work Manager that can be used through the CommonJ API.
Within your application, you can define priorities within the container and also pursue parallel execution on the same server. However, if you want to process workload in parallel across multiple servers by having a single application server split up its current workload and redistribute it across the cluster, the bulk of that logic will have to be written into your application.
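For the same-server case, a sketch of scheduling work through the CommonJ API (the JNDI name assumes a work manager declared in your deployment descriptors; adjust it to your configuration):

```java
import java.util.Collections;
import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkItem;
import commonj.work.WorkManager;

// Sketch: fan out units of work onto container-managed threads on the
// same server via CommonJ. The JNDI name is an assumption.
public class CommonJFanOut {
  public void runInParallel() throws Exception {
    InitialContext ctx = new InitialContext();
    WorkManager wm = (WorkManager) ctx.lookup("java:comp/env/wm/MyWorkManager");

    WorkItem item = wm.schedule(new Work() {
      public void run() { /* one unit of work, on a container thread */ }
      public boolean isDaemon() { return false; }
      public void release() { /* called if the container cancels this work */ }
    });

    // Block until the scheduled unit completes (or pass a timeout instead).
    wm.waitForAll(Collections.singletonList(item), WorkManager.INDEFINITE);
  }
}
```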
WebLogic does provide other mechanisms to make this easier (for example, you could have a primary node split the workload into units of work and put them on a durable distributed topic that the other servers read from), but it would be easier to use an existing product, such as Terracotta's Ehcache or a compute grid on Oracle's Coherence.