Apache Ignite connecting to different servers - ignite

I am using Apache Ignite with the default configuration. I have two development servers, A and B, where each server runs the same code. I have 3 Ignite nodes started on each server: 3 nodes on A and 3 on B.
I have created an Ignite cache named "ignite-bridg". I expected the 3 nodes on each server to cluster among themselves and partition the cache data, with the two servers remaining isolated from each other.
However, I see that both servers form a single cluster and all 6 nodes get connected. This is highly problematic for me. I think this is happening because both servers are accidentally in the same multicast group.
How do I resolve this problem? I need to rectify it quickly.

By default Ignite uses the multicast IP finder (TcpDiscoveryMulticastIpFinder) for node discovery; in your case you should use the static IP finder (TcpDiscoveryVmIpFinder) instead. With it you can specify a different list of IP addresses for each server and form two clusters instead of one.
Here is more information regarding Static IP Finder configuration:
https://www.gridgain.com/docs/latest/developers-guide/clustering/tcp-ip-discovery#static-ip-finder
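
For example, here is a minimal programmatic sketch of the static IP finder. The class name and port range are illustrative; on server B you would list B's own addresses instead, so nodes on the two servers can never discover each other.

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ServerANode {
    public static void main(String[] args) {
        // Static IP finder: discovery probes only the addresses listed here,
        // so nodes outside this list can never join the cluster.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509")); // server A's nodes only

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoverySpi);

        Ignite ignite = Ignition.start(cfg);
    }
}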

Related

Redis Cluster with Lettuce does not update IP list after nodes reboot

I have a Redis Cluster (3 leaders and 3 followers); when I restart all cluster nodes I would like the application to automatically recognize that the nodes' IPs have changed.
In the application I'm using Spring with the following settings:
spring.redis.cluster.nodes: redis:6379
spring.lettuce.cluster.refresh.adaptive: true
It's as if the application were caching the old IP addresses. I need to somehow get this list of nodes refreshed; I'm connecting via a DNS name.
The "refresh adaptive" setting in my case was misspelled; it was missing the "redis" segment.
The correct setting is: spring.redis.lettuce.cluster.refresh.adaptive
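
For reference, the same behavior can also be configured programmatically. This is a sketch assuming Spring Data Redis with Lettuce; the bean and refresh period are placeholders, and the node address "redis:6379" is taken from the settings above.

import java.time.Duration;
import java.util.List;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisConfig {
    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Refresh the topology on MOVED/ASK redirects and connection failures
        // (adaptive triggers), and also on a fixed schedule as a safety net.
        ClusterTopologyRefreshOptions refreshOptions = ClusterTopologyRefreshOptions.builder()
                .enableAllAdaptiveRefreshTriggers()
                .enablePeriodicRefresh(Duration.ofSeconds(30))
                .build();

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .clientOptions(ClusterClientOptions.builder()
                        .topologyRefreshOptions(refreshOptions)
                        .build())
                .build();

        RedisClusterConfiguration clusterConfig =
                new RedisClusterConfiguration(List.of("redis:6379"));

        return new LettuceConnectionFactory(clusterConfig, clientConfig);
    }
}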

Not able to create a cluster in Apache Geode

I tried to create a cluster in Apache Geode by providing the hostname and IP address of the remote system in the gemfire.properties file. Somehow, I am not able to create a cluster.
Can anybody please help with the steps to create a cluster (including multi-site)?
Thank you
It's not clear from the description whether you just want to create a simple GemFire cluster or multiple clusters connected through the Geode WAN replication mechanism...
That said, to start a local Geode cluster you can go through Apache Geode in 15 Minutes or Less, a quick introduction that shows you how to use gfsh to start a locator and some servers, create a region, monitor the system using PULSE, etc.
To set up WAN replication, on the other hand, you can go through Configuring a Multi-site (WAN) System. The most important thing to note about this configuration is that your locators need to know about the locators on the remote system, so make sure the remote-locators property is correctly configured. Once the locators can talk to each other over the WAN, they will share the connection information with the local servers, and these, in turn, will be able to communicate with the servers on the remote clusters.
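As an illustration, the locators on the two sites would each carry something like the following in gemfire.properties (the host names, ports, and distributed system IDs below are placeholders):

# gemfire.properties on site A's locator
distributed-system-id=1
remote-locators=site-b-host[10334]

# gemfire.properties on site B's locator
distributed-system-id=2
remote-locators=site-a-host[10334]

Each site needs a unique distributed-system-id, and remote-locators points at the other site's locators in host[port] form.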
Hope this helps.
Cheers.

Redis Sentinel - Local IPs / Virtual IPs conflicts

I'm having an issue trying to implement Redis Sentinel...
I set up two servers B and P, on two different geo sites, acting as Master and Replica, respectively.
For geo sites to reach each other, vIPs are used, whereas nodes of a same site use local IPs.
When I add a Sentinel instance on top of Server B, master_link goes down a few minutes later.
Browsing the logs, I discovered that Sentinel is actually rewriting the replica settings with the master's local IP, which cannot be reached from Server P's site.
Is there a way for Sentinel to set local IPs when it is on the same site as the node it's configuring, and vIPs when it is on another site?
BTW, I know 1 Sentinel is not enough; I plan on adding more once this issue is resolved.
Thanks for your feedback.

Configuring Infinispan

My application runs on 10 servers and I use Infinispan to manage the cache across them. Currently Infinispan is configured on all 10 servers. I wish to restrict the Infinispan instances to just 4 servers instead of the current 10. The number of servers is not changing and remains fixed at 10.
I also wish to use JGroups, which is part of the Infinispan package, to replicate the cache data across the 4 Infinispan instances.
Can someone help me understand how this can be done?
You have to set the multicast address in your JGroups XML configuration file (mcast_addr and mcast_port on the UDP transport). Make sure your 4 servers share the same multicast address, and give the other 6 a different one.
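
For illustration, the JGroups UDP stack on the 4 cache servers could look along these lines (the multicast address and port are placeholders, and the protocol list mirrors the stock udp.xml shipped with JGroups):

<config xmlns="urn:org:jgroups">
    <!-- Same mcast_addr/mcast_port on the 4 cache servers; use a different
         address (e.g. 228.6.7.9) in the file deployed to the other 6. -->
    <UDP mcast_addr="228.6.7.8" mcast_port="46655"/>
    <PING/>
    <MERGE3/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <UFC/>
    <MFC/>
    <FRAG2/>
</config>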

Do I need to run the WebLogic node manager on a single machine that has multiple WebLogic instances?

Foreword: I'm using Java 6u45, WebLogic 10.3.6, and Ubuntu Desktop 14.04 64-bit.
I just started as a student assistant at one of my state's IT offices. On my first day I was tasked with testing WebLogic on Ubuntu (Windows isn't case sensitive, which caused issues later because WebLogic is...). I started messing around with clustering, and now my setup is as follows:
1 Ubuntu machine
1 domain
6 servers: Admin server, wls1-4, and wlsmaster (wlsmaster was supposed to be what wls1 and wls2 reported to within the cluster because I set the cluster to be unicast, but that's a secondary question for now).
2 clusters: cluster1 and cluster2. wls1, wls2, and wlsmaster are on cluster1. wls3 and 4 are on cluster2.
Given my setup, do I even need to use the node manager, since I'm only using one physical machine? Secondary question: if I want to use unicast, how do I set the master? $state uses unicast for what few WebLogic servers we have, so I was told to check that out.
A few things:
No, you don't necessarily have to use a node manager, but it will make your life easier. When you log into the WebLogic admin console and attempt to start one of your servers, e.g. wls1-4, the admin server will attempt to talk to the node manager to start them. Without the node manager you will have to start each server individually using the startManagedWebLogic.sh script, and if you need to bring servers up and down often that will be very annoying.
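For instance, without a node manager each managed server is started by hand, roughly like this (the domain path and admin URL are placeholders):

# usage: startManagedWebLogic.sh SERVER_NAME ADMIN_URL
$DOMAIN_HOME/bin/startManagedWebLogic.sh wls1 http://localhost:7001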
With regard to unicast, it is pretty easy to set up (we just leave all the default values alone). Here is the pertinent info from the Oracle docs:
"Each of the Managed Servers in a WebLogic Server cluster has a name. For unicast clusters, WebLogic Server reads these Managed Server names and then sorts them into an ordered list by alphanumeric name. The first 10 Managed Servers in the list (up to 10 Managed Servers) become the first unicast clustering group. The second set of 10 Managed Servers (if applicable) becomes the second group, and so on until all Managed Servers in the cluster are organized into groups of 10 Managed Servers or less. The first Managed Server for each group becomes the group leader for the other (up to) nine Managed Servers in the group."
So you will want to name your master servers in such a way that they sort first alphanumerically in the cluster (for example, a server named "a-master" sorts before "wls1" and would become the group leader). That said, for your use case I doubt you need those master servers at all. Just have 2 clusters, one with wls1-2 and one with wls3-4.