JGroups GOOGLE_PING members not removed - Infinispan

I am using the GOOGLE_PING discovery protocol in Infinispan, but it is not removing non-existent members from the cluster file (say, abc.list) in storage. This is the version information:
Infinispan 8.x
JGroups 3.7.x
Can you please help me with this issue?
Thanks,
Sanjiv
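A commonly suggested mitigation for stale entries with FILE_PING-derived discovery protocols (GOOGLE_PING among them) is to have the coordinator clear and rewrite the discovery file whenever the cluster view changes. A hedged sketch of the protocol entry follows; note that the remove_all_data_on_view_change attribute only exists in newer JGroups releases, so verify it against your exact 3.7.x version, and the bucket name and credentials below are placeholders:

```xml
<!-- Discovery section of the JGroups stack (sketch, not a full stack).
     GOOGLE_PING stores one entry per member in the configured bucket;
     remove_all_data_on_view_change (where available) makes the coordinator
     wipe and rewrite the stored data on each view change, dropping entries
     for members that have left. -->
<GOOGLE_PING location="my-discovery-bucket"
             access_key="PLACEHOLDER_KEY"
             secret_access_key="PLACEHOLDER_SECRET"
             remove_all_data_on_view_change="true"/>
```

If the attribute is not recognized by your release, upgrading JGroups is the usual recommendation, since the stale-entry cleanup logic in FILE_PING-based protocols was improved in later releases.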

Related

Is there any way to configure Hazelcast in a master-slave architecture, like Redis, with Spring Boot?

Currently Hazelcast uses cloud discovery for communication.
So if there are 4 Kubernetes pods, each with in-memory Hazelcast, whenever the Hazelcast cache is updated in one pod it gets updated in the other pods. But if two of these pods get downscaled and terminated, the data that lives only in those 2 pods is lost. Can we have something like Redis, where we provide the server and port of the Hazelcast cluster and it is independent of the Kubernetes pods?
Please check the following blog post ("Scale without Data Loss!" section) to read how to scale a Hazelcast cluster on Kubernetes and avoid data loss.
Also, you can check the official README of the hazelcast/hazelcast-kubernetes plugin; there is a section dedicated to scaling.
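The pod-independent setup the question describes corresponds to Hazelcast's client-server topology: run the Hazelcast members outside the application pods (for example as a dedicated StatefulSet or on separate servers) and connect to them with a Hazelcast client, much as a Redis client connects to a Redis server. A minimal sketch of a hazelcast-client.xml for the Hazelcast 3.x line (the member addresses are placeholders):

```xml
<!-- hazelcast-client.xml (sketch): point the client at external
     cluster members instead of embedding Hazelcast in each pod. -->
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <network>
        <cluster-members>
            <address>10.0.0.1:5701</address>
            <address>10.0.0.2:5701</address>
        </cluster-members>
    </network>
</hazelcast-client>
```

With this topology, application pods can scale down to zero without losing cached data, because the data lives in the external members rather than in the application processes.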

Cluster Hazelcast WSO2 APIM 2.0 - load balancer sticky sessions

First, I would like to say thanks a lot for this forum.
I have a doubt about cluster configuration with Hazelcast and a load balancer.
In the documentation https://docs.wso2.com/display/CLUSTER420/Clustering+the+Gateway
the load balancer configuration section shows:
upstream wso2.am.com {
    sticky cookie JSESSIONID;
    server xxx.xxx.xxx.xx4:9763;
    server xxx.xxx.xxx.xx5:9763;
}
Why use sticky sessions if the cluster already handles session control?
Is my understanding wrong?
Thanks a lot.
In APIM, Hazelcast clustering is used for cache invalidation among nodes, inter-node communication for deployment synchronization (dep-sync), etc., but not for session replication. Therefore you need sticky sessions.

WebLogic cache replication with clusters

Is it possible to implement a cache in WebLogic (10.3.5.0) which is accessible from every instance of a cluster?
Does WebLogic offer some API with RMI that provides this possibility?
Is there a framework like Ehcache that provides this?
Oracle Coherence can handle this situation, and it comes bundled with WebLogic 10.3.5.
http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13952/taskhelp/coherence/CreateCoherenceServers.html
"Coherence servers (also known as Coherence data nodes) are stand-alone cache servers, dedicated JVM instances responsible for maintaining and managing cached data."
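As an illustration of how that works, a Coherence cache is declared in a cache configuration file and can then be looked up by name from any cluster member (typically via CacheFactory.getCache(...)). A minimal configuration sketch, where the cache name "web-cache" and scheme name are placeholders, not values taken from the question:

```xml
<!-- coherence-cache-config.xml (sketch): maps the cache name "web-cache"
     to a distributed (partitioned) scheme visible from every node. -->
<cache-config>
    <caching-scheme-mapping>
        <cache-mapping>
            <cache-name>web-cache</cache-name>
            <scheme-name>distributed</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
        <distributed-scheme>
            <scheme-name>distributed</scheme-name>
            <backing-map-scheme>
                <local-scheme/>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>
    </caching-schemes>
</cache-config>
```

A distributed scheme partitions the cached data across the cluster, so every WebLogic instance that joins the Coherence cluster sees the same logical cache.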

Why do we need 2 nodes for OpsCenter

Is there any merit in setting up 2 nodes for OpsCenter, each with its own storage, in a virtual environment?
I am thinking 1 node is enough, but does having a second node help for H/A?
Thanks,
Vik
OpsCenter supports automatic failover in an active/passive configuration, but does not currently support an active/active configuration. You can read more about this functionality and how best to share OpsCenter settings storage between the two instances here: http://docs.datastax.com/en/opscenter/5.2/opsc/configure/configFailover.html

IBM Worklight 6.2. Analytics JNDI properties in WAS ND

About Worklight 6.2 Analytics.
https://www-01.ibm.com/support/knowledgecenter/api/content/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/t_setting_up_production_cluster.html
There are several JNDI properties to configure, but it is not explained how to configure them in WAS ND, nor in which scope they must be configured (if that makes sense).
For example, the worklight.properties are configured as application properties during application installation.
How are the analytics JNDI properties configured on WAS?
Also, in which scope should they be configured? This is also puzzling me. For example, the documentation says that properties like "analytics/shards" or "analytics/replicas_per_shard" must be configured on the first node, but in my view these should be configured at cluster level, not at node level.
Also, a WAS ND topology is completely dynamic and flexible; what happens if I remove that "first" node?
OK, now I understand that when the Worklight Analytics documentation talks about a cluster, it is not talking about a WAS cluster but about an Elasticsearch cluster.
Taking this into account, configuring a cluster for Analytics does not mean installing analytics.war in a WAS cluster; it means installing the analytics.war file on a number of WAS servers (not WAS clusters, not WAS nodes), and with the Elasticsearch properties you configure the Elasticsearch cluster.
Is this correct?
The specific answer to my question is that the values of the properties are set during the detailed installation of the analytics.war file, as is done with the Application Project WAR file, worklightadmin.war, or worklightconsole.war.
Those properties only need to be set if you are configuring Analytics on more than one server.