How many Ignite server nodes do I need to start if I have two managed servers in a cluster and 50+ WARs deployed on them? - weblogic

| ignite   | 2.7.5      |
| weblogic | 14.1.1.0.0 |
Currently I am using Ignite only for caching a small amount of data that is frequently used and shared between all the deployments.

Two may be enough (one Ignite server node per WebLogic managed server).
However, Apache Ignite does not have a fine-grained access control feature. If you need to separate data access between these WARs, you may need to run a pair of server nodes per WAR, or ensure that the WARs only access Ignite through a common facade that restricts access.
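For illustration, here is a minimal sketch of such a facade using Ignite's thin client. The host names ignite-node1/ignite-node2 and the prefix-based cache naming are assumptions made up for the example; the naming is only a convention shared by the WARs, not an Ignite security feature.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

/**
 * Hypothetical facade that each WAR uses instead of talking to Ignite directly.
 * The cache-name prefix keeps one deployment from touching another's entries
 * by convention only.
 */
public class SharedCacheFacade implements AutoCloseable {

    private final IgniteClient client;
    private final ClientCache<String, String> cache;

    public SharedCacheFacade(String appPrefix) {
        // Addresses of the two Ignite server nodes (assumed host names, default thin-client port).
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("ignite-node1:10800", "ignite-node2:10800");
        this.client = Ignition.startClient(cfg);
        // One logical cache per WAR, named by convention.
        this.cache = client.getOrCreateCache("shared-" + appPrefix);
    }

    public void put(String key, String value) {
        cache.put(key, value);
    }

    public String get(String key) {
        return cache.get(key);
    }

    @Override
    public void close() {
        client.close();
    }
}
```

Routing every WAR through a class like this keeps the access rules in one place even though Ignite itself does not enforce them.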

Related

Connection to different nodes in an Infinispan cluster

We have Infinispan with 7 nodes in the cluster.
One set of clients connects to three of the nodes, as configured in hotrodclient.properties, and another set of clients connects to the remaining nodes in the cluster.
Our objective is to distribute the load on the cluster. Is it OK to do it like this?
The Hot Rod client already performs load balancing by default. Depending on the configured client intelligence, it either round-robins requests across the servers or contacts the server that owns the requested data directly.
There is more information on the documentation page (sections 2.3 and 2.3.1).
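As a rough sketch of what that looks like programmatically (the cache name and host names are placeholders), the intelligence level is set on the Hot Rod client configuration:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodBalancingExample {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // List any subset of the cluster; the client discovers the full
        // topology from these initial contact points.
        builder.addServers("node1:11222;node2:11222;node3:11222")
               // HASH_DISTRIBUTION_AWARE sends each request straight to the
               // node that owns the key; BASIC falls back to round-robin.
               .clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE);

        try (RemoteCacheManager manager = new RemoteCacheManager(builder.build())) {
            RemoteCache<String, String> cache = manager.getCache("myCache");
            cache.put("k", "v");
        }
    }
}
```

Because the client learns the full topology anyway, splitting clients across fixed subsets of nodes does not usually buy you anything beyond what the built-in balancing already does.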

Does Apache Ignite support WAN replication?

I've been doing some experiments with Apache Ignite and I've started to look into WAN replication. By this I mean there would be 2 (or more) data centres each running an Ignite cluster. There would be some caches that I would like kept in sync between the two data centres.
Does Apache Ignite support this? If so, how is it configured? I can't find any mention of it in the documentation.
At the moment Ignite does not support caches that span multiple clusters, nor cache mirroring. If you mean a single Ignite cluster whose nodes are located in different data centres (i.e. across a WAN), that is possible, but it would most likely be inefficient, since you would have to use replicated mode.
GridGain provides asynchronous data-center replication on top of Ignite as part of their paid offering: https://www.gridgain.com/docs/latest/administrators-guide/data-center-replication/configuring-replication
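To illustrate what replicated mode means in code, here is a minimal sketch; the cache name crossDcCache is made up for the example:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReplicatedCacheExample {
    public static void main(String[] args) {
        // Start a node; in the scenario above the nodes would sit in the
        // two data centres but still form one Ignite cluster.
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, String> cfg =
                    new CacheConfiguration<>("crossDcCache");
            // REPLICATED keeps a full copy of the data on every server node,
            // so every write has to cross the WAN link -- hence the inefficiency.
            cfg.setCacheMode(CacheMode.REPLICATED);

            IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);
            cache.put("key", "value");
        }
    }
}
```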

WSO2 API manager clustering active-active nodes

I wanted to check whether we can achieve active-active clustering for WSO2 APIM deployed on 2 nodes (all profiles on both nodes).
You can.
You have to share the databases and mount the registry between the 2 servers.
You also need to enable clustering between them.
To share the synapse configuration files (of the APIs), you need to enable deployment synchronization between the 2 servers as well. When you configure the publishers, both should publish to a single gateway (i.e. one specific node), and deployment synchronization (or something like rsync) should sync the synapse files between the 2 servers.
Yes, you can. You will need to front the two nodes with a load balancer, enable registry mounting, share the databases, etc. You can refer to the document below for more details on how to cluster the APIM nodes.
https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0

Do I need to run the WebLogic node manager on a single machine that has multiple WebLogic instances?

Foreword: I'm using Java 6u45, WebLogic 10.3.6, and Ubuntu Desktop 14.04 64-bit.
I just started as a student assistant at one of my state's IT offices. On my first day I was tasked with testing WebLogic on Ubuntu (Windows isn't case sensitive, which caused issues later because WebLogic is...). I started messing around with clustering, and now my setup is as follows:
1 Ubuntu machine
1 domain
6 servers: Admin server, wls1-4, and wlsmaster (wlsmaster was supposed to be what wls1 and wls2 reported to within the cluster because I set the cluster to be unicast, but that's a secondary question for now).
2 clusters: cluster1 and cluster2. wls1, wls2, and wlsmaster are on cluster1. wls3 and 4 are on cluster2.
Given my setup, do I even need to use the node manager, since I'm only using one physical machine? Secondary question: if I want to use unicast, how do I set the master? $state uses unicast for the few WebLogic servers we have, so I was told to check that out.
A few things:
No, you don't necessarily have to use a node manager, but it will make your life easier. When you log into the WebLogic admin console and attempt to start one of your servers (e.g. wls1-4), the Admin server talks to the node manager to start them. Without the node manager you have to start each server individually using the startManagedWebLogic.sh script, and if you need to bring servers up and down often, that gets very annoying.
With regards to unicast, it is pretty easy to set up (we just leave all the default values alone). Here is the pertinent info from the Oracle docs:
"Each of the Managed Servers in a WebLogic Server cluster has a name. For unicast clusters, WebLogic Server reads these Managed Server names and then sorts them into an ordered list by alphanumeric name. The first 10 Managed Servers in the list (up to 10 Managed Servers) become the first unicast clustering group. The second set of 10 Managed Servers (if applicable) becomes the second group, and so on until all Managed Servers in the cluster are organized into groups of 10 Managed Servers or less. The first Managed Server for each group becomes the group leader for the other (up to) nine Managed Servers in the group."
So you will want to name your master servers so that they sort first alphanumerically in the cluster. That said, for your use case I doubt you need those master servers at all. Just have two clusters, one with wls1-2 and one with wls3-4.
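This is not WebLogic's actual code, but a small sketch of the naming rule quoted above, applied to the server names from the question, shows why wlsmaster would not end up as the group leader:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UnicastGroupIllustration {
    public static void main(String[] args) {
        // Managed server names from the setup in the question.
        List<String> servers = new ArrayList<>(
                Arrays.asList("wls1", "wls2", "wlsmaster"));

        // WebLogic sorts managed server names alphanumerically...
        Collections.sort(servers);

        // ...splits them into groups of up to 10, and the first name in
        // each group acts as the group leader.
        int groupSize = 10;
        for (int i = 0; i < servers.size(); i += groupSize) {
            List<String> group =
                    servers.subList(i, Math.min(i + groupSize, servers.size()));
            System.out.println("Group leader: " + group.get(0)
                    + ", members: " + group);
        }
        // Prints "Group leader: wls1, ..." -- "wlsmaster" only becomes the
        // leader if it sorts first (e.g. if it were renamed "a-wlsmaster").
    }
}
```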

Web App: High Availability / How to prevent a single point of failure?

Can someone explain to me how high availability ("HA") works for a web application? I assume HA means there is no single point of failure.
However, even if a load balancer is used, isn't the load balancer itself a single point of failure?
I have found this article on the subject:
http://www.tenereillo.com/GSLBPageOfShame.htm
Basically, if you do not require long-lasting sticky sessions, you can configure your DNS servers to return multiple A records (IP addresses) for your website.
Web browsers are smart enough to try all the addresses until they find one that works.
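As a quick way to see what a client has to work with, the standard Java API returns every address published for a name (example.com is just a placeholder):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class MultipleARecords {
    public static void main(String[] args) throws UnknownHostException {
        // Resolve every address record published for the host name.
        InetAddress[] addresses = InetAddress.getAllByName("example.com");

        // A client can walk this list and try each address in turn until
        // one answers, which is essentially what browsers do.
        for (InetAddress address : addresses) {
            System.out.println(address.getHostAddress());
        }
    }
}
```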
In simple words, high availability can be defined as running a system 24x7 without downtime even when there are hardware or software failures; in other words, a fault-tolerant application. This helps ensure uninterrupted use of the application for its intended users.
Read more on High Availability Deployment Architecture
It works like this: you set up two HAProxy servers with heartbeat, so when one fails (stops responding to queries), it is removed from the cluster.
Requests can be forwarded from HAProxy to the web servers in round-robin fashion, and if a web server fails, the HAProxy servers stop contacting it until it is alive again.
The web servers store all dynamic state in a database, which is replicated across two MySQL instances.
As you can see, HAProxy and clustered MySQL (or simply MySQL replication), as well as IP clustering, are the key here.
Sure it is, when operated alone. A usual highly available setup includes 2 or more load balancers running as a cluster in either active/active or active/passive configuration. To further increase availability you can have 2 different Internet service providers (or geo-distributed data centers), each running a pair of clustered load balancers. Then you configure the DNS A record to resolve to 2 distinct public IP addresses, which gives you round-robin processing and splits DNS requests evenly (CloudFlare is very fast and reliable at this). It is also possible to return the IP address of the data center closest to the originating geo location by using something like PowerDNS dnsdist.
This is what big players do to make their services highly available.
Please read https://docs.oracle.com/cd/E23824_01/html/821-1453/gkkky.html for more clarity. In that setup, both load balancers use the same VIP (virtual IP address: https://techterms.com/definition/vip).
HA architecture is an entire field, and multiple books have been written on it, so it is hard to answer in a short paragraph.
To sum up the ideal situation: you would use multiple servers, fronted by a layer of multiple load balancers. The nodes and load balancers would be located in a few different data centers and connected to different network backbones. Ideally the data centers would be located all over the world.
In short, every component has redundancy, including the load balancers.
For a starting point, see Wikipedia for High Availability Cluster