Apache Ignite Cache

I have a client connected to two Ignite servers in a cluster. The client acquires a lock on object1 for a specified time (I acquired the lock for 15 minutes to test different scenarios). When we hit the server with the same request object while the lock is held, the request should fail because the lock already exists on that object.
I have defined the cache on both servers. I am testing the servers in the following order to check whether the cache is shared between them when one of them is down:
Server1 up & Server2 up: lock acquired on the transaction object
Server1 down & Server2 up: failed (lock already present on the object)
Server1 up & Server2 up: failed (lock already present on the object)
Server1 up & Server2 down: success (lock acquired on the object)
As per my understanding, the last case should have failed, since the lock already exists and the transaction is still in progress, yet it succeeded.
Could you help me understand why the last case succeeded? I would also like to know how long it takes for the cache to be shared by both servers.
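A likely explanation, assuming the cache uses Ignite's defaults (PARTITIONED mode with backups = 0), is that the partition owning object1 was primary on Server2, so when Server2 was stopped the entry and its lock state were lost rather than taken over by Server1. Below is a minimal sketch (class, cache, and key names are illustrative) showing the lock call and the configuration settings that control whether a copy survives on the other server; note that explicit locks also require a TRANSACTIONAL cache:

```java
// Minimal sketch (names are illustrative, not from the original post) of acquiring the lock
// and of the cache configuration that decides whether the lock survives a node failure.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ObjectLockExample {
    public static void main(String[] args) throws InterruptedException {
        Ignite ignite = Ignition.start();   // starts a node with the default configuration

        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("txLocks");
        // Explicit locks require a TRANSACTIONAL cache.
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        // Keep a copy of every entry on both servers; alternatively use
        // CacheMode.PARTITIONED together with cfg.setBackups(1).
        cfg.setCacheMode(CacheMode.REPLICATED);

        IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);

        Lock lock = cache.lock("object1");
        if (lock.tryLock(0, TimeUnit.SECONDS)) {   // fail fast if the lock is already held
            try {
                // ... process the request, holding the lock for up to 15 minutes ...
            } finally {
                lock.unlock();
            }
        } else {
            // Lock already present on the object: reject the duplicate request.
        }
    }
}
```

As far as I know, with REPLICATED mode or backups >= 1 the copies are maintained as part of each update rather than on a periodic schedule, so there is no separate "sharing" delay to wait for.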

Related

How does a Redis Cluster connection using redis-plus-plus ensure connection consistency when the corresponding node shuts down

The documentation at https://github.com/sewenew/redis-plus-plus#redis-cluster states: "You only need to set one master node's host & port in ConnectionOptions, and RedisCluster will get other nodes' info automatically (with the CLUSTER SLOTS command)".
If the node whose IP and port were used to create the connection goes down after the connection has been established, and the corresponding slave is automatically promoted to master by Redis, does redis-plus-plus take care of this, or should we expect an error?
Yes, redis-plus-plus will take care of this.
Once RedisCluster is successfully constructed, it gets all nodes' IPs and ports with the CLUSTER SLOTS command. If the cluster topology changes, e.g. a master goes down and a new master is elected, redis-plus-plus automatically updates this info.
However, there's a time window between the old master going down and the new master being elected. If your command is sent to the broken master during this window, you'll get an exception. Everything returns to normal once the new master is elected.
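On the application side, a common way to ride out that window is simply to retry the failed command with a short backoff until the new master has been elected. A minimal sketch in Java (the helper below is generic and hypothetical; it is not part of redis-plus-plus, which is a C++ library):

```java
// Hypothetical retry helper: re-run a command a few times with linear backoff so that a
// request hitting the broken master during the election window eventually succeeds.
import java.util.concurrent.Callable;

public final class RetryDuringFailover {
    public static <T> T withRetry(Callable<T> command, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return command.call();
            } catch (Exception e) {               // e.g. a connection error from the dead master
                last = e;
                Thread.sleep(backoffMillis * attempt);
            }
        }
        throw last;                               // still failing after the election window
    }
}
```

Here `command` stands for whatever client call you are wrapping, e.g. a GET against the cluster.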

Failover in SQL Server 2017 with Availability Group

I'm configuring failover in the above-mentioned setup with two nodes and synchronous commit. It's a test setup: the primary is server1 and the secondary is server2. I configured the availability group and added the database.
When I manually execute a failover it seems to work and server2 becomes the primary, but when I connect to the cluster resource IP, the database remains in read-only mode. It works only when I shut down server1. So far I haven't been able to find a solution.

ActiveMQ takes a long time to failover

I have 3 ActiveMQ brokers in a networked shared file system (GlusterFS) master/slave configuration, all in VMs.
If the master fails the client should failover to the new master.
The issue I have is that the connection to the new master takes about 50 seconds.
Is that reasonable?
How to improve it?
My client connection URI looks like this:
failover:(tcp://a1:61616?connectionTimeout=1000,tcp://a2:61616?connectionTimeout=1000,tcp://a3:61616?connectionTimeout=1000)?randomize=false&maxReconnectDelay=10000&backup=true
Also, when I disconnect the master by unplugging the network cable, it stops, throws an exception regarding KahaDB (which is on GlusterFS), and needs to be restarted.
Is there a workaround for this behavior so that the master broker auto-restarts or reconnects automatically once the network comes back?
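For reference, here is a minimal sketch of the client side, assuming the standard ActiveMQ 5.x JMS client: it passes the failover URI above to the connection factory and registers a TransportListener so you can log exactly when the transport is interrupted and when it resumes, i.e. measure the ~50-second gap:

```java
// Client-side sketch: failover transport plus a listener that logs interruption/resume times.
import java.io.IOException;
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverTimingClient {
    public static void main(String[] args) throws Exception {
        String uri = "failover:(tcp://a1:61616?connectionTimeout=1000,"
                   + "tcp://a2:61616?connectionTimeout=1000,"
                   + "tcp://a3:61616?connectionTimeout=1000)"
                   + "?randomize=false&maxReconnectDelay=10000&backup=true";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(uri);
        Connection connection = factory.createConnection();

        ((ActiveMQConnection) connection).addTransportListener(new TransportListener() {
            @Override
            public void onCommand(Object command) { }

            @Override
            public void onException(IOException error) {
                System.out.println("Transport exception: " + error);
            }

            @Override
            public void transportInterupted() {   // (sic) method name in the ActiveMQ API
                System.out.println("Transport interrupted at " + System.currentTimeMillis());
            }

            @Override
            public void transportResumed() {
                System.out.println("Transport resumed at " + System.currentTimeMillis());
            }
        });

        connection.start();
        // ... create sessions/producers/consumers as usual ...
    }
}
```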
The failover depends on the time the underlying file system takes to release the file lock.
In your case, the NFS cluster waits 50 s to detect that the first node is lost and only then releases the lock on the KahaDB file, which can then be taken by the second node.
You can customize this delay with the NFSD_V4_GRACE and NFSD_V4_LEASE parameters in the NFS server configuration file (/etc/sysconfig/nfs on Red Hat/CentOS systems).
You can also customize the KahaDB lockKeepAlivePeriod; see http://activemq.apache.org/pluggable-storage-lockers.html
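These broker-side knobs normally live in activemq.xml; the sketch below shows an equivalent programmatic configuration for an embedded broker (assuming ActiveMQ 5.x and its pluggable storage locker classes; the path and values are illustrative):

```java
// Embedded-broker sketch: tune the KahaDB locker keep-alive and acquire intervals.
import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.SharedFileLocker;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class BrokerWithTunedLocker {
    public static void main(String[] args) throws Exception {
        KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
        kahaDB.setDirectory(new File("/mnt/glusterfs/kahadb"));   // shared file system path (illustrative)
        kahaDB.setLockKeepAlivePeriod(5000);                      // ms between keep-alive checks on the lock

        SharedFileLocker locker = new SharedFileLocker();
        locker.setLockAcquireSleepInterval(10000);                // ms between lock acquisition attempts
        kahaDB.setLocker(locker);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(kahaDB);
        broker.start();
        broker.waitUntilStopped();
    }
}
```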

How to load balance a URL request to a dedicated WebLogic node?

Because of a performance issue, I need to process one kind of request on a dedicated node. For example, I need to process all requests like http://hostname/report* on node1, so I added a rule in the load balancer to redirect http://hostname/report* to http://node1name/report*. But node1 asks me to log in again, even though I was already logged in at http://hostname/. How can I access it directly without logging in again?
As @JoseK mentioned, it looks like you don't have session replication and failover configured between the servers. You will need all of your application servers to be in the same WebLogic cluster, and you will also have to pick their secondary session-replication node to be the destination for in-memory replication. You can control this by assigning the dedicated node to a specific machine, which is then selected as the secondary replication target for all cluster members.
Also, for session replication to work, all objects within your session have to implement Serializable.
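To illustrate the last point, anything placed in the HTTP session must implement Serializable, otherwise WebLogic cannot copy it to the secondary server. A small sketch (the class and attribute names are made up):

```java
import java.io.Serializable;

// Example session attribute: implements Serializable so in-memory replication can copy it.
public class UserContext implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String userName;

    public UserContext(String userName) {
        this.userName = userName;
    }

    public String getUserName() {
        return userName;
    }
}
```

In a servlet you would then store it with request.getSession().setAttribute("userContext", new UserContext("jdoe")); and it would be replicated along with the rest of the session.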

How to troubleshoot issues caused by clustering or load balancing?

Hi, I have an application that is deployed on two WebLogic app servers.
Recently we have an issue where, in certain cases, the user session returned is null. Developer feedback is that it could be caused by the session not replicating to the other server.
How do we prove if this is really the case?
Are you using a single session store that both application servers can access via some communication protocol? If not, then that is definitely the case. Think about it: if your WebLogic servers are storing the session in memory and having users pass their session id via cookies, then one server has no way of accessing the memory on the other machine, unless you are using sticky load balancing. Are you?
There are two concepts to consider here: session stickiness and session replication.
Session stickiness is a mechanism by which WebLogic Server ensures that if a request from a user with session A goes to server 1, then the next request from that user with session A will go to server 1 only.
This is achieved by configuring a hardware load balancer (like F5) that is capable of providing session stickiness, or by configuring the WebLogic proxy plug-in installed on Apache/IIS/WebLogic.
The first time a request reaches a WLS managed server, it responds with a session id and appends to it the JVM id of the server (this is the primary id). If the managed server is part of a cluster, it also attaches a secondary server JVM id (the secondary server is the server where the session is being replicated).
The proxy maintains a table of all JVM ids and the corresponding IPs of the managed servers; it also periodically checks whether the servers are up and running.
The next time a request passes through the proxy with an existing session id and a primary JVM id, the proxy parses this and tries to send the request to that server; if it cannot within some time, it tries to send the request to the secondary server.
Session replication - this is enabled by default when you configure a WLS cluster with 2 or more managed servers. Each time any data in a session is updated, the session data is replicated to the secondary server too.
So in your case, if your application users are losing their session or getting redirected to the login page during normal usage, first check that the session did not get invalidated because of a timeout. If you have defined a cluster and are using the WLS proxy, check the proxy debug output to make sure the primary and secondary server ids are being appended to the session id.
Finally, there's a simple example among the WLS sample application deployments that you can use to test session replication and failover functionality.
So, to prove why the session is getting lost:
1) check the server log to see if the session got invalidated because of a timeout,
2) if you are using the WLS proxy, enable debug, and the next time the issue happens check in the proxy log whether the request was sent to a different server and whether that server was not the secondary server.
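To help with step 2, here is a small debug aid, a hypothetical servlet filter (assuming the javax.servlet API), that logs the session cookie so you can check whether the primary and secondary JVM ids described above are being appended; with in-memory replication the value looks roughly like sessionId!primaryJvmId!secondaryJvmId:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

// Logs the JSESSIONID cookie on every request so the primary/secondary JVM ids can be inspected.
public class SessionDebugFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        Cookie[] cookies = ((HttpServletRequest) req).getCookies();
        if (cookies != null) {
            for (Cookie cookie : cookies) {
                if ("JSESSIONID".equalsIgnoreCase(cookie.getName())) {
                    // Expect something like sessionId!primaryJvmId!secondaryJvmId.
                    System.out.println("Session cookie: " + cookie.getValue());
                }
            }
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```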