I need some clarity on the forceServerMode flag of TcpDiscoverySpi. According to the documentation below, the discovery SPI will behave the same way for a client node as it would for a server:
If node is configured as client node (see IgniteConfiguration.clientMode) TcpDiscoverySpi starts in client mode as well. In this case node does not take its place in the ring, but it connects to random node in the ring (IP taken from IP finder configured) and use it as a router for discovery traffic. Therefore slow client node or its shutdown will not affect whole cluster. If TcpDiscoverySpi needs to be started in server mode regardless of IgniteConfiguration.clientMode, forceSrvMode should be set to true.
Does that mean:
Would the client node become part of the discovery ring? If yes, would it impact overall grid performance?
Does it impact the client node's near caches, if it has any?
What are the benefits of setting this flag to true?
Documentation link for reference:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html
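For context, this is roughly how the flag is set in code. A minimal sketch, assuming a static IP finder; the addresses and cache names are placeholders:

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ForceServerModeExample {
    public static void main(String[] args) {
        // IP finder pointing at the server nodes (address is a placeholder).
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);
        // The (deprecated) flag from the quoted javadoc: the client joins the
        // discovery ring as if it were a server node.
        discoSpi.setForceServerMode(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // still a client node functionally
        cfg.setDiscoverySpi(discoSpi);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Started node: " + ignite.name());
    }
}
```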
This option has already been deprecated. But let me answer all the questions.
Yes, with this flag the client node takes its place in the discovery ring, which could potentially affect cluster stability and performance.
As for near caches, I believe it shouldn't impact anything in terms of functionality.
I don't see any major benefit to the property. Most likely you can start a standalone client without any running servers, but honestly I haven't checked it.
In general I wouldn't recommend using it.
P.S. A similar question was answered a while ago.
I see two ways to set up a client in Ignite:
Ignition.start with IgniteConfiguration.clientMode = true
Ignition.startClient
but I can't find any details about the first mode in the docs.
What's the difference between the two?
The first one will start a thick client node that joins the cluster via an internal protocol, receives all cluster-wide updates such as topology changes, and supports all of the Ignite APIs.
The second one will start a Java thin client, a lightweight client that connects to an existing cluster node and performs all operations through that node. The thin client does not become a part of the cluster topology.
You can find more details regarding the difference between the client types here.
The former starts a thick client, the latter a thin client.
A thick client is (mostly) a server node that’s configured not to store data or run compute tasks. So you get the full API, but it’s relatively heavy-weight.
A thin client does not become a cluster node, so it’s smaller and simpler. There are a few APIs that are not available.
If you can use a thin client, that’s likely the correct solution.
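To make the difference concrete, here is a minimal sketch of both start-up paths. It assumes a server node is already running locally with the default client connector port; the cache name and addresses are placeholders:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientStartExamples {
    public static void main(String[] args) {
        // 1. Thick client: a full cluster node that joins the topology
        //    but does not store data.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
        try (Ignite thick = Ignition.start(cfg)) {
            thick.getOrCreateCache("myCache").put(1, "value");
        }

        // 2. Thin client: a lightweight connection to an existing node's
        //    client connector (port 10800 by default); it never becomes
        //    part of the cluster topology.
        ClientConfiguration clientCfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800");
        try (IgniteClient thin = Ignition.startClient(clientCfg)) {
            thin.getOrCreateCache("myCache").put(2, "value");
        }
    }
}
```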
What is Ignite maintenance mode, and how do I switch a node into it? I was stuck joining a node to the cluster: it complains about cleaning up the persistent data, but the data can be cleaned (using control.sh) only in maintenance mode.
This is a special mode, similar to running Windows in safe mode after a crash or data corruption: most of the cluster functionality is disabled, and the user is asked to perform some maintenance task to resolve the issue. The most straightforward example I can think of is cleaning (removing) corrupted files on disk, just as in your question. You can refer to the IEP-53: Maintenance Mode proposal for details.
I don't think there is a way to enter this mode manually, unless you trigger certain preconfigured conditions, such as stopping a node in the middle of a checkpoint with the WAL disabled. Once the state is fixed, maintenance mode should resolve automatically, allowing the node to join the cluster.
Also, from my understanding, this mode applies to a particular node rather than to the whole cluster. I.e. you can have a 4-node cluster with only one node in maintenance mode; in that case, you have to run the control.sh commands locally on the failed node, not from another healthy node. If that's not the case, please provide more details or file a JIRA ticket, because the reported behavior looks quite broken to me.
We are running one of our services in a newly created Kubernetes cluster. Because of that, we have switched it from the previous in-memory cache to a Redis cache.
Preliminary tests on our application, which exposes an API, show that we experience timeouts from our application to the Redis cache. I have no idea why, and the issue pops up very irregularly.
So I'm thinking the reason for these timeouts may actually be network related. Is it a good idea to use affinity so we always run the Redis cache on the same nodes as the application, to prevent network issues?
The issues have not arisen during very-high-load situations, so it's concerning me a bit.
This is an opinion question so I'll answer in an opinionated way:
As you mentioned, I would try to put the Redis and application pods on the same node; that would rule out wire networking issues. You can accomplish that with Kubernetes pod affinity. You can also try a nodeSelector, so that you always pin your Redis and application pods to a specific node.
Another way to do this is to taint the nodes where you want to run your workloads and then add a toleration to the Redis and application pods.
Hope it helps!
We are trying to prevent our application startup from just spinning if we cannot reach the remote cluster. From what I've read, force server mode states:
In this case, discovery will happen as if all the nodes in topology were server nodes.
What I want to know is:
Does this client then permanently act as a server, which would run compute tasks and store cache data?
If the connection to the cluster does not happen at first, would a later connection to an established cluster cause issues with consistency? What would be the expected behavior with a topology version mismatch? Is there potential for a split-brain scenario?
No, it's still a client node, but it behaves as a server at the discovery protocol level. For example, it can start without any server nodes running.
A client node can never cause data inconsistency, as it never stores data. This does not depend on the forceServerMode flag.
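As a side note on the original goal of not spinning forever at startup, one knob that may help (not mentioned in the answer above, so treat it as an assumption about your setup) is TcpDiscoverySpi.setJoinTimeout: the node fails startup with an exception if it cannot join within the timeout. A minimal sketch; the address and timeout are placeholders:

```java
import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class FailFastClientStart {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("remote-host:47500")); // placeholder

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);
        // Give up after 30 seconds instead of retrying to join indefinitely
        // (0, the default, means wait forever).
        discoSpi.setJoinTimeout(30_000);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)
            .setDiscoverySpi(discoSpi);

        // Throws an exception instead of spinning if the cluster is unreachable.
        Ignition.start(cfg);
    }
}
```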
I have configured a 2-node cluster in JBoss AS 7.1.1-Final. I am planning to use sticky sessions. Meanwhile, I am also recording the number of active online users in an Infinispan cache, along with the IP of the node where each user session was created, for reporting purposes.
I have taken care of the login/logout scenarios, where I clear the cache entries. The problem is that if one of the server nodes goes down, I need a cleanup routine to clear that node's records from the cache as well.
One option is to write a client that checks at a specific interval whether the server is alive and otherwise triggers a cleanup routine. This approach would work, but I am looking for a cleaner approach: if server node failures were detected and the other live nodes were notified, then I could trigger the cleanup.
From the console I know it shows when a server goes down or comes up. But what would be the listener to listen to such events? Any thoughts?
If you just need to know when a node leaves from within some server module (inside the JBoss server), you can use the ViewChanged listener (see the sketch below).
You cannot get this information on clients connected via the REST or memcached protocols. With the HotRod protocol it is doable but pretty hackish: you'd have to override TransportFactory.updateServers (probably just extend TcpTransportFactory; see the configuration property infinispan.client.hotrod.transport_factory).
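A minimal sketch of the ViewChanged listener approach mentioned above. The class name and cleanup logic are illustrative; the listener is registered on the EmbeddedCacheManager, not on an individual cache:

```java
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;
import org.infinispan.remoting.transport.Address;

@Listener
public class ClusterViewListener {

    @ViewChanged
    public void onViewChange(ViewChangedEvent event) {
        // Members present in the old view but missing from the new one have
        // left (or crashed); run your cache cleanup routine for them here.
        for (Address member : event.getOldMembers()) {
            if (!event.getNewMembers().contains(member)) {
                System.out.println("Node left, cleaning up entries for: " + member);
            }
        }
    }

    // Register the listener on the cache manager so it receives view changes.
    public static void register(EmbeddedCacheManager cacheManager) {
        cacheManager.addListener(new ClusterViewListener());
    }
}
```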