Configuring infinispan

My application runs on 10 servers, and I use Infinispan to manage the cache on those 10 servers. Currently Infinispan is configured on all 10 servers. I wish to restrict the Infinispan instances to just 4 servers instead of the current 10. The total number of servers is not changing and remains fixed at 10.
I also wish to use JGroups, which is part of the Infinispan package, to replicate the cache data across the 4 Infinispan instances.
Can someone help me understand how this can be done?

You have to set up the multicast address in your JGroups XML configuration file (mcast_addr and mcast_port). Make sure your 4 servers use the same multicast address, and give a different address to the other 6.
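As a rough sketch (the multicast address and port below are placeholders; pick values that the other 6 servers do not use), the relevant fragment of the JGroups UDP configuration looks like this, with the rest of the protocol stack from the jgroups-udp.xml bundled with Infinispan left unchanged:

    <!-- JGroups UDP transport on the 4 servers that should share the cache -->
    <UDP mcast_addr="228.6.7.8"
         mcast_port="46655"/>
    <!-- keep the remaining protocols (PING, NAKACK2, GMS, ...) from the stock configuration -->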

Related

connection to different nodes in infinispan cluster

We have infinispan with 7 nodes in the cluster.
We have a set of clients connecting to three nodes, as configured in hotrodclient.properties, and another set of clients connecting to the remaining nodes in the cluster.
Our objective is to distribute the load on the cluster. Is it OK to do it like this?
The Hot Rod client already performs load balancing by default. Depending on the configured client intelligence, it either round-robins requests across the servers or contacts the server that owns the requested data directly.
There is more information on the documentation page (Sections 2.3 and 2.3.1).
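As a rough sketch (host names are placeholders, and the exact property keys depend on your Infinispan version, so verify them against the documentation), the client configuration can simply list all the nodes and let the client balance the load:

    # hotrod-client.properties (hypothetical host names)
    infinispan.client.hotrod.server_list=node1:11222;node2:11222;node3:11222;node4:11222;node5:11222;node6:11222;node7:11222
    # HASH_DISTRIBUTION_AWARE sends each request to the node that owns the key; BASIC falls back to round-robin
    infinispan.client.hotrod.client_intelligence=HASH_DISTRIBUTION_AWARE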

Apache ignite connecting to different servers

I am using Apache Ignite with the default configuration. I have two development servers, A and B, where each server has the same code. I have 3 Ignite nodes started on each server: 3 on A and 3 on B.
I have created an Ignite cache "ignite-bridg". I expected that the 3 nodes on each server would create the cache and partition the data among themselves, and, since the two servers are isolated, nothing would be shared between them.
However, I see that both servers form a single cluster and all 6 nodes get connected. This is highly problematic for me. I think this is happening because both servers are accidentally in the same multicast group.
How can I resolve this problem? I need to rectify it quickly.
By default Ignite uses the multicast IP finder (TcpDiscoveryMulticastIpFinder) for node discovery; in your case you should use the static IP finder (TcpDiscoveryVmIpFinder) instead. With it you can specify a different list of IP addresses for each server and form two clusters instead of one.
Here is more information regarding Static IP Finder configuration:
https://www.gridgain.com/docs/latest/developers-guide/clustering/tcp-ip-discovery#static-ip-finder
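A minimal sketch of that configuration (Spring XML; the address and port range below are placeholders covering the 3 local nodes on server A, and server B would list its own addresses):

    <!-- ignite-config.xml on server A: only local discovery addresses, so the nodes on B never join -->
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:47500..47502</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>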

How can I configure Apache Zookeeper with redundancy on only two physical frames?

I would like to have a high-availability/redundant installation of Zookeeper running in my production environment. The problem is that I only have 2 physical frames available, so that rules out configuring a Zookeeper cluster/ensemble since I'd only have redundancy if the frame with the minority of servers goes down. What is the best practice in this situation? Is it possible to have a separate standalone install running on each frame connected to the same set of SOLR nodes or to use one server as primary and one as backup?
ZooKeeper requires 3 nodes for a redundant ensemble. In your scenario, if you cannot get another machine, you can set up multiple ZooKeeper nodes on the same machine in different directories, using different ports.
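A rough sketch of such a setup (host names, ports, and paths are placeholders): two nodes live on frame1 with different dataDir, clientPort, and peer ports, and the third node lives on frame2.

    # zoo1.cfg for node 1 on frame1; zoo2.cfg differs only in dataDir, clientPort, and its server.2 ports
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper/node1
    clientPort=2181
    # server.N=host:peerPort:leaderElectionPort
    server.1=frame1:2888:3888
    server.2=frame1:2889:3889
    server.3=frame2:2888:3888
    # each node also needs a myid file in its dataDir containing its id (1, 2, or 3)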

Do I need to run the WebLogic node manager on a single machine that has multiple WebLogic instances?

Forward: I'm using Java 6u45, WebLogic 10.3.6, and Ubuntu Desktop 14.04 64-bit.
I just started as a student assistant at one of my state's IT offices. On my first day I was tasked with testing WebLogic on Ubuntu (Windows isn't case sensitive, which caused issues later because WebLogic is...). I started messing around with clustering, and now my setup is as follows:
1 Ubuntu machine
1 domain
6 servers: Admin server, wls1-4, and wlsmaster (wlsmaster was supposed to be what wls1 and wls2 reported to within the cluster because I set the cluster to be unicast, but that's a secondary question for now).
2 clusters: cluster1 and cluster2. wls1, wls2, and wlsmaster are on cluster1. wls3 and 4 are on cluster2.
Given my setup, do I even need to use the node manager, since I'm only using one physical machine? Secondary question: if I want to use unicast, how do I set the master? $state uses unicast for what few WebLogic servers we have, so I was told to check that out.
A few things:
No, you don't necessarily have to use a node manager, but it will make your life easier. When you log into the WebLogic admin console and attempt to start one of your servers, e.g. wls1-4, the admin server will try to talk to the node manager to start it. Without the node manager you will have to start each server individually using the startManagedWebLogic.sh script, and if you need to bring servers up and down often, that gets very annoying.
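For reference, starting a managed server by hand looks roughly like this (the domain path and admin URL are placeholders for your environment):

    # run from the domain's bin directory; arguments are the managed server name and the admin server URL
    cd /path/to/domains/mydomain/bin
    ./startManagedWebLogic.sh wls1 http://localhost:7001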
With regards to Unicast it is pretty easy to set up (we just leave all the default values alone). Here is the pertinent info from the Oracle Docs:
"Each of the Managed Servers in a WebLogic Server cluster has a name. For unicast clusters, WebLogic Server reads these Managed Server names and then sorts them into an ordered list by alphanumeric name. The first 10 Managed Servers in the list (up to 10 Managed Servers) become the first unicast clustering group. The second set of 10 Managed Servers (if applicable) becomes the second group, and so on until all Managed Servers in the cluster are organized into groups of 10 Managed Servers or less. The first Managed Server for each group becomes the group leader for the other (up to) nine Managed Servers in the group."
So you will want to name your master servers in such a way that they are the first alphanumerically in the cluster. That said, for your use case I doubt you need those master servers at all. Just have 2 clusters, one with wls1-2 and one with wls3-4.

how to use master/slave configuration in activemq using apache zookeeper?

I'm trying to configure a master/slave setup using Apache ZooKeeper. I have only 2 application servers, on which I'm running ActiveMQ. As per the tutorial at http://activemq.apache.org/replicated-leveldb-store.html we should have at least 3 ZooKeeper servers running. Since I have only 2 machines, can I run 2 ZooKeeper servers on one machine and the remaining one on the other? Also, can I run just 2 ZooKeeper servers and 2 ActiveMQ servers respectively on my 2 machines?
I will answer the ZooKeeper parts of the question.
You can run two ZooKeeper nodes on a single server by specifying different port numbers. You can find more details at http://zookeeper.apache.org/doc/r3.2.2/zookeeperStarted.html under the "Running Replicated ZooKeeper" header.
Remember to use this for testing purposes only, as running two zookeeper nodes on the same server does not help in failure scenarios.
You can have just 2 zookeeper nodes in an ensemble. This is not recommended as it is less fault tolerant. In this case, failure of one zookeeper node makes the zookeeper cluster unavailable since more than half of the nodes in the ensemble should be alive to service requests.
If you just want a proof of concept of ActiveMQ, one ZooKeeper server is enough:
zkAddress="192.168.1.xxx:2181"
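On the broker side this goes into each broker's activemq.xml; a rough sketch (the ZooKeeper address, hostname, and zkPath are placeholders, and replicas="3" assumes the recommended three brokers):

    <!-- activemq.xml: replicated LevelDB store pointing at the ZooKeeper ensemble -->
    <persistenceAdapter>
        <replicatedLevelDB
            directory="activemq-data"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="192.168.1.xxx:2181"
            zkPath="/activemq/leveldb-stores"
            hostname="broker1"/>
    </persistenceAdapter>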
You need at least 3 AMQ servers to validate your HA configuration. Yes, you can create 2 AMQ instances on the same node: http://activemq.apache.org/unix-shell-script.html
bin/activemq create /path/to/brokers/mybroker
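For example (instance names and paths are illustrative), you could create and start two separate broker instances on the same machine:

    # create two broker instances, each with its own configuration directory
    bin/activemq create /path/to/brokers/broker1
    bin/activemq create /path/to/brokers/broker2
    # each instance gets its own control script
    /path/to/brokers/broker1/bin/broker1 start
    /path/to/brokers/broker2/bin/broker2 start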
Note: don't forget to change the port numbers in the activemq.xml and jetty.xml files of each instance.
Note: when stopping one broker, I noticed that all of them stop.