Infinispan console shows only one node for clustered servers in the cache nodes view - infinispan

We are working with Infinispan version 9.4.8 in domain mode, with a cluster of two host servers running two nodes.
In the cluster view statistics we can see that both nodes get hits, but when we look at the cache nodes view for a distributed cache, only one node is shown.
In the Infinispan 8 console we used to see both nodes in the cache nodes view, but after upgrading to version 9 this is no longer the case.
Could you please advise whether this is a bug in the 9.4.8 console or whether something is missing in the configuration?

This is a bug which has just been fixed and will be included in the upcoming 9.4.18.Final release. The issue is tracked by ISPN-11265.
In the future please utilise the Infinispan JIRA directly if you suspect a bug.

Related

Apache Ignite (2.9.1-1) cluster on Ubuntu 18.04

For testing, I built an Apache Ignite (2.9.1-1) cluster. When starting the first node everything is OK, but when starting the second node I get an error (Failed to add node to topology because it has the same hash code for partitioned affinity as one of existing nodes). Since I am not an expert in Apache Ignite, I wanted to clarify how I can fix this error.
You need to specify a different consistentId for every node in the cluster.
In this case, it is possible that you are starting both nodes with myIgniteNode01.
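As an illustration only, assuming you start the nodes programmatically (with the packaged Spring XML configuration the equivalent is the consistentId property on IgniteConfiguration), each node could set its own id like this:
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Every node must use its own value here, e.g. "myIgniteNode01" on the
        // first host and "myIgniteNode02" on the second (names are placeholders).
        cfg.setConsistentId("myIgniteNode02");
        Ignite ignite = Ignition.start(cfg);
    }
}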

AWS EKS node group migration stopped sending logs to Kibana

I encountered a problem while using EKS with Fluent Bit and would be grateful for the community's help; first I'll describe the cluster.
We are running an EKS cluster in a VPC that had an unmanaged node group.
The EKS cluster's network configuration is marked as "public and private", and
using Fluent Bit with the Elasticsearch service we show logs in Kibana.
We decided that we wanted to move to a managed node group in that cluster and therefore migrated from the unmanaged node group to a managed node group successfully.
Since the migration we cannot see any logs in Kibana; when getting the logs manually from the Fluent Bit pods there are no errors.
I enabled debug-level logging for Fluent Bit to get a better look at it.
I can see that Fluent Bit gathers all the log files, and then we get messages like:
[debug] [out_es] HTTP Status=403 URI=/_bulk
[debug] [retry] re-using retry for task_id=63 attemps=3
[debug] [sched] retry=0x7ff56260a8e8 63 in 321 seconds
Furthermore, we have managed node groups in other EKS clusters, but we did not migrate to those; they were created with managed node groups from the start.
The new managed node group was created from the same template as the working managed node groups; the only difference is the compute power.
The template has nothing special in it except auto scaling.
I compared the node group IAM role of a working node group with that of my non-working node group, and the roles seem to be the same.
As for my Fluent Bit configuration, I have the same configuration in a few EKS clusters and it works, so I don't think that is the root cause, but I can add it if requested.
Has anyone had this kind of problem? Why could a node group migration cause such an issue?
Thanks in advance!
Lesson learned: always look at the access policy of the resource you are having issues with; it may not match your node group role.
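For illustration, an Elasticsearch domain access policy of roughly this shape would allow the new node group's IAM role to call /_bulk; the account ID, role name, region and domain name below are placeholders, not values from the original setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/my-managed-nodegroup-role"
      },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:eu-west-1:111122223333:domain/my-logs-domain/*"
    }
  ]
}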

Ignite thin client unstable behavior

I am a newbie to Ignite and am trying to play around with the example https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/client/ClientPutGetExample.java
I first tried the example with one server node and executed the client; everything worked fine.
Then I started a second node and used the following client config
IgniteClient igniteClient = Ignition.startClient(new ClientConfiguration().setAddresses("127.0.0.1:10800", "127.0.0.1:10801"));
with CacheMode.REPLICATED.
I re-ran the code and it worked fine. Then I kept the same config and shut down
one of the nodes.
When I re-ran the code the result was unstable: sometimes it gives me "Ignite cluster is unavailable", sometimes it gives me an empty cache:
Thin client put-get example started.
Created cache [put-get-example].
Loaded [null] from the cache.
1. As per the documentation, the Ignite thin client is supposed to fail over to one of the
running nodes.
2. Why is the cache not replicated?
Is there something that I am missing here?
Thank you for your help.
This looks like IGNITE-11599 - Thin Client will not failover properly if some of addresses were not up when it started.
It was fixed recently but has not made it into any released version. I'm afraid you will have to work around it by doing manual failover.
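A minimal sketch of such a manual failover, reusing the cache name and addresses from the question (everything else is an assumed structure, not the official workaround):
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientConnectionException;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ManualFailoverSketch {
    // Same addresses as in the question; adjust to your cluster.
    private static final String[] ADDRESSES = {"127.0.0.1:10800", "127.0.0.1:10801"};

    public static void main(String[] args) {
        // Try each server address in turn until one connection succeeds.
        for (String addr : ADDRESSES) {
            try (IgniteClient client = Ignition.startClient(new ClientConfiguration().setAddresses(addr))) {
                ClientCache<Integer, String> cache = client.getOrCreateCache("put-get-example");
                cache.put(1, "value");
                System.out.println("Loaded [" + cache.get(1) + "] from the cache via " + addr);
                return; // success, no need to try the remaining addresses
            } catch (ClientConnectionException e) {
                System.err.println("Node " + addr + " is unavailable, trying the next one: " + e.getMessage());
            }
        }
        System.err.println("No server node reachable.");
    }
}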

YARN local-dirs - per-node setup

I've had a series of devops issues from time to time on our production cluster. Every now and then, the / partition gets overwhelmed on a couple of nodes. Long story short, it turns out that these nodes had 1 instead of 2 data drives. This would not be an issue if we didn't have the following setup on our cluster:
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/data1/hadoop/yarn/local,/data2/hadoop/yarn/local</value>
</property>
Some devops person or other, noticing there are no /data2 partitions on the smaller nodes, came up with the idea of simply going with the / partition. Since / is 16GB, some of the more data-demanding jobs quickly fill it up.
Now, my question: does yarn support per-node setup of yarn.nodemanager.local-dirs?
I resolved the problem by removing /data2/hadoop/yarn/local from the story, but it doesn't feel perfect.
We're using HDP 2.6.4.
Thx!
YARN allows this, since each NodeManager reads its own local yarn-site.xml. However, I don't know how you would do this in Ambari.
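For illustration, the yarn-site.xml on the single-drive nodes would list only the directory that actually exists there, while the dual-drive nodes keep both entries:
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/data1/hadoop/yarn/local</value>
</property>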

Solr issue: ClusterState says we are the leader, but locally we don't think so

So today we ran into a disturbing Solr issue.
After a restart of the whole cluster, one of the shards stopped being able to index/store documents.
We had no hint about the issue until we started indexing (querying the server looked fine).
The error is:
2014-05-19 18:36:20,707 ERROR o.a.s.u.p.DistributedUpdateProcessor [qtp406017988-19] ClusterState says we are the leader, but locally we don't think so
2014-05-19 18:36:20,709 ERROR o.a.s.c.SolrException [qtp406017988-19] org.apache.solr.common.SolrException: ClusterState says we are the leader (http://x.x.x.x:7070/solr/shard3_replica1), but locally we don't think so. Request came from null
at org.apache.solr.update.processor.DistributedUpdateProcessor.doDefensiveChecks(DistributedUpdateProcessor.java:503)
at org.apache.solr.update.processor.DistributedUpdateProcessor.setupRequest(DistributedUpdateProcessor.java:267)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:550)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:126)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:101)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:65)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1916)
We run Solr 4.7 in cluster mode (5 shards) on Jetty.
Each shard runs on a different host with one ZooKeeper server.
I checked the ZooKeeper log and I cannot see anything there.
The only difference is that in the /overseer_elect/election folder I see this specific server repeated 3 times, while the other servers are only mentioned twice.
45654861x41276x432-x.x.x.x:7070_solr-n_00000003xx
74030267x31685x368-x.x.x.x:7070_solr-n_00000003xx
74030267x31685x369-x.x.x.x:7070_solr-n_00000003xx
Not even sure if this is relevant. (Could it be?)
Any clue what other checks we can do?
We've experienced this error under 2 conditions.
Condition 1
On a single ZooKeeper host there was an orphaned ZooKeeper ephemeral node in
/overseer_elect/election. The session this ephemeral node was associated with no longer existed.
The orphaned ephemeral node could not be deleted.
Caused by: https://issues.apache.org/jira/browse/ZOOKEEPER-2355
This condition will also be accompanied by an /overseer/queue directory that is clogged up with queue items that are forever waiting to be processed.
To resolve the issue you must restart the ZooKeeper node that holds the orphaned ephemeral node.
If after the restart you see "Still seeing conflicting information about the leader of shard shard1 for collection <name> after 30 seconds",
you will need to restart the Solr hosts as well to resolve the problem.
Condition 2
Cause: a misconfigured systemd service unit.
Make sure you have Type=forking and PIDFile configured correctly if you are using systemd (see the sketch below).
systemd was not tracking the PID correctly: it thought the service was dead when it wasn't, and at some point two services were started. Because the second service cannot fully start (both can't listen on the same port), it seems to just sit there hanging in a failed state, or it fails to start the process but messes up the other Solr process somehow, possibly by overwriting temporary cluster state files locally.
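For reference, a service-unit fragment of this shape is what is meant; the paths, user and pid-file name below are assumptions and need to match your own Solr installation:
[Service]
Type=forking
PIDFile=/var/solr/solr-8983.pid
ExecStart=/opt/solr/bin/solr start
ExecStop=/opt/solr/bin/solr stop
User=solr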
Solr logs reported the same error the OP posted.
Interestingly enough, another symptom was that ZooKeeper listed no leader for our collection in /collections/<name>/leaders/shard1/leader; normally this zk node contains contents such as:
{"core":"collection-name_shard1_replica1",
"core_node_name":"core_node7",
"base_url":"http://10.10.10.21:8983/solr",
"node_name":"10.10.10.21:8983_solr"}
But the node was completely missing on the cluster where duplicate Solr instances were attempting to start.
This error also appeared in the Solr Logs:
HttpSolrCall null:org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /roles.json
To correct the issue, kill all instances of Solr (or java, if you know it's safe) and restart the Solr service.
We figured it out!
The issue was that Jetty didn't really stop, so we had two running processes; for whatever reason this was fine for reading but not for writing.
Killing the older Java process solved the issue.