How to make an equal number of connections from a Geode native client to each server? - gemfire

I have a requirement where I have configured min-connections = (number of client threads) * (number of Geode cache servers),
so that under load no thread ever has to wait for a connection, and no new connection needs to be created even if all threads end up accessing a single server.
The problem is that it is the locator that decides how many connections the client creates to each server JVM, based on its load probe.
I want to bypass all of that and simply keep an equal number of connections to every cache server JVM.
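For reference, a minimal sketch of that pool setup expressed with the Java client API; the question is about the native (C++/.NET) client, where the equivalent min-connections pool attribute is set in the client cache XML instead. The locator address, thread count, and server count below are placeholders:

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class EqualConnectionsClient {
    public static void main(String[] args) {
        int clientThreads = 16; // hypothetical thread count in this client
        int cacheServers = 4;   // hypothetical number of cache server JVMs

        // Pre-create enough connections that no thread has to wait for one,
        // even if every thread hits the same server at once.
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("locator-host", 10334)               // placeholder locator
            .setPoolMinConnections(clientThreads * cacheServers)
            .setPoolIdleTimeout(-1)                              // keep pre-created connections alive
            .create();

        // ... use the cache ...
        cache.close();
    }
}

Note that even with min-connections pinned this way, it is still the locator's load probe that decides which server each pooled connection lands on, so an exactly equal spread across server JVMs is not guaranteed by this setting alone.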

Related

How to stop client from reconnecting to server when the server is down?

How can we stop a client from reconnecting to the server after some number of retries?
In our case (an in-memory DB for fast retrieval), we use Ignite and Oracle in parallel, so that if the Ignite server is down I can still get my data from Oracle.
But when I start my application while the Ignite server node is down for some reason, the application just waits until it can connect to the server.
Console message:
Failed to connect to any address from IP finder (will retry to join topology every 2000 ms; change 'reconnectDelay' to configure the frequency of retries):
There is a TcpDiscoverySpi.joinTimeout property, which does exactly what you want: https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html#setJoinTimeout-long-
By default it is not set, so a node will keep trying to reconnect endlessly.
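A minimal sketch of setting that property in Java; the 10-second value is just an example:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class JoinTimeoutExample {
    public static void main(String[] args) {
        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setJoinTimeout(10_000); // give up after 10 s instead of retrying forever

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setDiscoverySpi(discovery);

        try (Ignite ignite = Ignition.start(cfg)) {
            // ... read-through from Ignite ...
        } catch (Exception e) {
            // The join timed out: fall back to Oracle here.
            System.err.println("Ignite unavailable, using Oracle fallback: " + e.getMessage());
        }
    }
}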

constant connection to redis server

I've noticed that when I run a PHP script on my Redis server (a simple set/get), it runs in under 1 ms. If I have two servers, a web server and a Redis server, it takes a good 15 ms for the web server to connect, set, and get. Is there a way to keep a persistent connection between the two servers so I don't need to reconnect every single time the script is called?
It depends on the client library you use to communicate with Redis, and whether it supports or creates a persistent connection or a pool of pre-created connections, which saves the initial handshake on each request.
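In PHP, for example, phpredis offers pconnect for a persistent connection. As an illustration of the pooled approach, here is a sketch in Java with the Jedis client (host and port are placeholders):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisPoolExample {
    // Connections in the pool stay open across requests, so each request
    // skips the TCP connect and handshake the per-script connect pays for.
    private static final JedisPool POOL =
        new JedisPool(new JedisPoolConfig(), "redis-host", 6379); // placeholder host/port

    public static void main(String[] args) {
        try (Jedis jedis = POOL.getResource()) { // borrow a pre-created connection
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        } // close() returns the connection to the pool instead of closing it
    }
}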

Tomcat - Configuring maxThreads and acceptCount in Http connector

I currently have an application deployed using Tomcat that interacts with a Postgres database via JDBC. The queries are very expensive, so what I'm seeing is a timeout caused by Tomcat or Apache (Apache sits in front of Tomcat in my configuration). I'm trying to limit the connections to the database to 20-30 simultaneous connections, so that the database is not overwhelmed. I've done this through the JDBC connection pool configuration, setting maxActive to 30 and maxIdle to 20. I also bumped up maxWait.
In this scenario I'm limiting the USE of the database, but I want the connections/requests to be POOLED within Tomcat. Apache can accept 250 simultaneous requests. So I need to ensure Tomcat can also accept this many, but handle them appropriately.
Tomcat has two settings in the HTTP Connector config file:
maxThreads - "Max number of request processing threads to be created by the Http Connector, which therefore determines the max number of simultaneous requests that can be handled."
acceptCount - "The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused."
So I'm guessing that if I set maxThreads to the max number of JDBC connections (30), then I can set acceptCount to 250 - 30 = 220.
I don't quite understand the difference between a thread that is WAITING on a JDBC connection to open up from the pool, versus a request that is queued... My thought is that a queued request consumes fewer cycles, whereas a running thread waiting on the JDBC pool spends cycles checking the pool for a free connection...?
Note that the HTTP connector is for incoming HTTP requests and is unrelated to JDBC. You probably want to configure the JDBC connection pool separately, via the attributes documented for the Tomcat JDBC connection pool:
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
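For illustration, the limits from the question (maxActive=30, maxIdle=20, a raised maxWait) expressed against the Tomcat JDBC pool's Java API; the URL and the exact numbers are placeholders:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class DbPoolConfig {
    public static DataSource createPool() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:postgresql://db-host:5432/app"); // placeholder URL
        p.setDriverClassName("org.postgresql.Driver");
        p.setMaxActive(30);   // at most 30 simultaneous database connections
        p.setMaxIdle(20);     // keep up to 20 idle connections around
        p.setMaxWait(30000);  // a request thread waits up to 30 s for a free connection

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}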
Unless your application connects directly to the database on a per-HTTP-connection basis, you should configure your JDBC connection pool based on what your database software is set to / can handle, and maxThreads based on what your application and hardware can handle.
Keeping the maxActive value (of the DB connection pool) lower than maxThreads (i.e. the number of concurrent request threads) makes sense in most cases. You can set acceptCount to a higher value depending on the traffic you expect on your website and how fast one request can be processed.
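To illustrate the shape of that advice, a sketch using embedded Tomcat so it stays plain Java; the numbers are illustrative, not recommendations, and in a standalone install the same maxThreads and acceptCount attributes go on the Connector element in server.xml:

import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class ConnectorConfig {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);

        Connector connector = tomcat.getConnector();
        // More request threads than DB connections: threads beyond the pool's
        // 30 block briefly on the JDBC pool instead of being refused outright.
        connector.setProperty("maxThreads", "100");
        // Backlog of connections queued once all threads are busy; beyond
        // this, new connection requests are refused.
        connector.setProperty("acceptCount", "150");

        tomcat.start();
        tomcat.getServer().await();
    }
}

On the question about cycles: a thread blocked in the pool's maxWait is parked on the pool's internal queue rather than busy-polling, so keeping maxThreads above maxActive mainly costs memory for the extra threads, not CPU.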

weblogic jsessionid

I run WebLogic 10.3 locally and have a question about the session ID it generates. When I print session.getId() I see something that resembles this:
BBp9TAACMTglQ2TDFAKR4tpyXg73LZDQJ2PtT9x8htG1tWY122aa!869187422!1308677666322
What are these exclamation points and what follows them, specifically the second one: !1308677666322? It looks like the server sometimes appends it and sometimes doesn't. I believe WebLogic appends it when I use the same browser to log in to my app a second time. Is this cookie related somehow?
Looking at some randomly generated Weblogic JSessionIDs from my own application
BrYx4hyPZ4VSP9Wo4eU0OrqmhXMLFONbRHnpLFwRKZ9MSaf6wvYj!-314662473
and
BrYiFED29itaC4EBpWYM8RKVQQauHkvnTsA2OAKUPZXVc9oUD5fB!-784323496.
Notice the part of the session ID after the ! in each, i.e. -314662473 and -784323496.
This number is the unique identifier that WebLogic assigns to the running JVM, i.e. the running WebLogic server.
If there is more than one server in your application, WebLogic knows how to route your session back to the correct server by using this JVM number, which is part of the session ID.
Each time you restart a WebLogic server it generates a new JVM ID and uses it for as long as that server is running, so any hits to that server will carry the same ID at the end of the session ID.
The format of the session ID is:
JSESSIONID=SESSION_ID!PRIMARY_JVMID_HASH!SECONDARY_JVM_HASH!CREATION_TIME
So if the primary is not available, it will try to fail over to the secondary, and if you have enabled session replication, the session data can be recovered.
If you are running only a single server on local, then the format is simply
JSESSIONID=SESSION_ID!PRIMARY_JVMID_HASH!CREATION_TIME
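To make the format concrete, a small sketch that splits the sample ID from the question on its ! separators:

public class SessionIdParts {
    public static void main(String[] args) {
        String id = "BBp9TAACMTglQ2TDFAKR4tpyXg73LZDQJ2PtT9x8htG1tWY122aa"
                + "!869187422!1308677666322";

        String[] parts = id.split("!");
        System.out.println("session id:     " + parts[0]);
        System.out.println("primary JVM id: " + parts[1]);
        if (parts.length == 3) {
            // Single-server format: the last field is the creation time
            // in epoch milliseconds (1308677666322 is June 2011).
            System.out.println("creation time:  " + parts[2]);
        } else if (parts.length == 4) {
            // Cluster format: a secondary JVM id precedes the timestamp.
            System.out.println("secondary JVM:  " + parts[2]);
            System.out.println("creation time:  " + parts[3]);
        }
    }
}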
Regarding why it sometimes does not appear: in my experience it is usually browser-dependent whether the session ID shows up in the address bar or not.
WebLogic Server uses those IDs to maintain HTTP session affinity in the WebLogic Cluster in-memory replication model.
For web applications with HTTP session replication enabled (in the weblogic.xml deployment descriptor; it is disabled by default), WebLogic keeps a primary and a backup copy of your HTTP session within the cluster.
To avoid cluster overhead, the WebLogic proxy plug-in (deployed in your web tier layer) parses the session cookie and redirects every request to the WLS instance hosting your primary copy. In case of failure or overload of the managed server hosting the primary session, the proxy plug-in redirects the request to the instance where the secondary copy of your HTTP session resides.
The proxy plug-in tracks a dynamic list of all the WebLogic cluster members as pairs (JVM ID / IP:port) so it can redirect every request appropriately.
If your app doesn't enable the in-memory replication feature, your cookie will only include the JVM ID where your HTTP session lives (the primary and only copy).

How to troubleshoot issues caused by clustering or load balancing?

Hi, I have an application that is deployed on two WebLogic app servers.
Recently we have had an issue where, in certain cases, the user session returned is null. The developers' feedback is that it could be caused by the session not replicating to the other server.
How do we prove whether this is really the case?
Are you using a single session store that both application servers can access via some communication protocol? If not, then it is definitely the case. Think about it: if your WebLogic servers are storing the session in memory and having users pass their session ID via cookies, then one server has no way of accessing the memory on the other machine. Unless you are using sticky load balancing. Are you?
There are two concepts to consider here: session stickiness and session replication.
Session stickiness is a mechanism whereby WebLogic ensures that if a request from a user with session A goes to server 1, then the next request from that user goes to server 1 as well.
This is achieved by configuring a hardware load balancer (like an F5) capable of providing session stickiness, or by configuring the WebLogic proxy plug-in on Apache/IIS/WebLogic.
The first time a request reaches a WLS managed server, it responds with a session ID and appends to it the JVM ID of that server (this is the primary ID); if the managed server is part of a cluster, it also attaches a secondary server JVM ID (the secondary server is the one the session is replicated to).
The proxy maintains a table of all JVM IDs and the corresponding managed-server IPs; it also periodically checks whether the servers are up and running.
The next time a request passes through the proxy with an existing session ID and a primary JVM ID, the proxy parses these and tries to send the request to that server; if it cannot within some time, it tries to send it to the secondary server.
Session replication - this is enabled by default when you configure a WLS cluster with two or more managed servers. Each time any data in a session is updated, that data is replicated to the secondary server too.
So in your case, if your application's users are losing their session or getting redirected to the login page during normal usage, first check that the session did not get invalidated because of a timeout; if you have defined a cluster and are using the WLS proxy, check the proxy debug output to make sure the primary and secondary server IDs are being appended to the session ID.
Finally, there is a simple example among the WLS sample application deployments that you can use to test session replication and failover functionality.
So, to prove why the session is getting lost:
1) check the server log to see if the session got invalidated because of a timeout;
2) if using the WLS proxy, enable debug, and the next time the issue happens check the proxy log to see whether the request was sent to a different server, and whether that server was not the secondary; the logging sketch below can help gather this evidence.
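As a concrete way to collect that evidence, a sketch of a servlet filter that logs which server handled each request alongside the session ID. The weblogic.Name system property normally holds the managed server's name; treat that and the filter itself as an illustrative sketch, not a WebLogic API:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionTraceFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession(false);
        String server = System.getProperty("weblogic.Name", "unknown");
        // If the same user's requests land on the other server with no session,
        // replication (or stickiness) is the problem rather than a timeout.
        System.out.printf("server=%s session=%s%n",
                server, session == null ? "none" : session.getId());
        chain.doFilter(req, res);
    }

    public void destroy() {}
}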