I've noticed that when I run a PHP script on the same box as my Redis server (a simple SET/GET), it completes in under 1 ms. With two machines, a web server and a Redis server, it takes a good 15 ms for the web server to connect, set, and get. Is there a way to keep a persistent connection between the two servers so I don't need to reconnect every single time the script is called?
Whether this is possible depends on the client library you are using to communicate with Redis: it needs to support persistent connections, or a pool of pre-created connections, in order to save the initial handshake on each request.
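For instance, the phpredis extension requests a persistent connection via pconnect instead of connect. Below is a minimal sketch of the pooling variant using the Jedis Java client (the host name is a placeholder, not from the question):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisPoolExample {
    // The pool creates connections once and reuses them across requests,
    // so no request pays the TCP handshake cost.
    private static final JedisPool POOL =
            new JedisPool(new JedisPoolConfig(), "redis-host", 6379); // placeholder host

    public static void main(String[] args) {
        // try-with-resources returns the connection to the pool on close()
        try (Jedis jedis = POOL.getResource()) {
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        }
    }
}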
Related
My application (Node.js) uses moleculer for microservices and Redis as the transporter. However, I find that the application logs "Redis-pub client is disconnected" every 10 minutes, then reconnects with the log "Redis-pub client is connected" after a few seconds. This is a problem because if a client sends a moleculer action during this window, it will fail.
Any idea what is causing this? Let me know if more information is needed.
Azure Cache for Redis currently has a 10-minute idle timeout for connections, so the idle timeout setting in your client application should be less than 10 minutes. Most common client libraries have a configuration setting that allows them to send Redis PING commands to the server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
More info: https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-connection#idle-timeout
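As an illustration, here is a hedged sketch of such an application-level keep-alive using the Jedis Java client (the host name is a placeholder, and the 4-minute interval is an arbitrary choice safely under the 10-minute limit):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class RedisKeepAlive {
    public static void main(String[] args) {
        // Placeholder Azure cache host; port 6380 with SSL enabled.
        Jedis jedis = new Jedis("mycache.redis.cache.windows.net", 6380, true);
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Send a PING every 4 minutes so the connection never sits idle
        // long enough to hit the 10-minute server-side timeout. In a real
        // app, use a dedicated connection; Jedis instances are not thread-safe.
        scheduler.scheduleAtFixedRate(() -> jedis.ping(), 4, 4, TimeUnit.MINUTES);
    }
}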
I have a requirement where I have configured 'min-connections = number of client threads * number of Geode cache servers',
so that under load no thread has to wait for a connection to be created, and no new connections are needed even if all threads end up hitting a single server.
The problem is that it is the locator that decides how many connections the client can create to a given server JVM, based on its load probe.
I want to bypass that and simply have an equal number of connections to every cache server JVM.
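For context, a hedged sketch of how that pool sizing is expressed with the Geode Java client API (the locator address and the thread/server counts are illustrative, not from the question):

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class GeodeClientPool {
    public static void main(String[] args) {
        int clientThreads = 8;   // example values only
        int cacheServers  = 4;
        // min-connections = client threads * number of cache servers,
        // so the pool is pre-sized even if every thread hits one server.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 10334) // placeholder locator
                .setPoolMinConnections(clientThreads * cacheServers)
                .create();
    }
}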
How can we stop a client from reconnecting to the server after a certain number of retries?
In our case (an in-memory DB for fast retrieval), we use Ignite and Oracle in parallel, so that if the Ignite server is down I can still get my data from Oracle.
But when I start my application while the Ignite server node is down for some reason, the application waits indefinitely until it can connect to the server.
Console message:
Failed to connect to any address from IP finder (will retry to join topology every 2000 ms; change 'reconnectDelay' to configure the frequency of retries):
There is a TcpDiscoverySpi.joinTimeout property, which does exactly what you want: https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/TcpDiscoverySpi.html#setJoinTimeout-long-
By default it is not set, so the node will try to reconnect endlessly.
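A minimal sketch of setting it programmatically, assuming the node is configured from Java (the 10-second value is an arbitrary example):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class IgniteJoinTimeout {
    public static void main(String[] args) {
        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        // Give up joining the topology after 10 seconds instead of
        // retrying forever (0, the default, means retry endlessly).
        spi.setJoinTimeout(10_000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);
        cfg.setClientMode(true);

        // Ignition.start now fails if no server is reachable within the
        // join timeout, letting the application fall back to Oracle.
        Ignite ignite = Ignition.start(cfg);
    }
}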
I have a Delphi XE2 DataSnap server (Windows service) connected to a backend MS SQL Server 2008 (same server box) serving REST client requests.
Everything had been working great for some time until recently, when for some reason the DataSnap service lost its connection to the SQL Server.
The service failed to re-establish a connection and I had to restart the DataSnap service to continue.
This got me thinking because currently the service only uses 1 SQL connection (TADOConnection) shared for all the client requests. I did this because I didn't want the overhead of instantiating a new SQL connection for every client request.
I'm considering whether it actually would be better to have a separate SQL connection for each request and if the overhead would be noticeable - can anybody comment/advise on this?
This is where having a well-constructed Data Access Layer, one that isolates your DB connection from the rest of your code and can be modified to try different approaches, is really useful.
The pooling approach (as suggested by mjn) is strongly recommended if you're using MIDAS (DataSnap) from your clients to your DataSnap server as I've found it has a large connection overhead.
I've built a few web services (fairly low traffic) that use a plain TADOConnection at run-time and have found the overhead of establishing a database connection to be negligible, certainly compared with the overall network latency from the device to the server and back.
If you found TADOConnection still gave too much overhead in a high-traffic environment you could easily add your own connection pooling as above to such a system.
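To make the pooling approach concrete, here is a language-neutral sketch (shown in Java rather than Delphi; the class and method names are illustrative, not an existing library):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal fixed-size connection pool: connections are created up front
// and handed out to request threads, so no request pays the cost of
// establishing a new database connection.
public class SimpleConnectionPool {
    private final BlockingQueue<Connection> pool;

    public SimpleConnectionPool(String url, String user, String pass, int size)
            throws SQLException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(DriverManager.getConnection(url, user, pass));
        }
    }

    // Blocks until a connection is free; callers must release it afterwards.
    public Connection acquire() throws InterruptedException {
        return pool.take();
    }

    public void release(Connection conn) {
        pool.offer(conn);
    }
}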
Hi, I have an application that is deployed on two WebLogic app servers.
Recently we have had an issue where, in certain cases, the user session returned is null. Developer feedback is that it could be caused by the session not replicating to the other server.
How do we prove if this is really the case?
Are you using a single session store that both application servers can access via some communication protocol? If not, then it is definitely the case. Think about it: if your WebLogic servers are storing sessions in memory and having users pass their session ID via cookies, then one server has no way of accessing the memory on the other machine. Unless you are using sticky load balancing. Are you?
There are two concepts to consider here: session stickiness and session replication.
Session stickiness is a mechanism whereby WebLogic Server ensures that if a request from a user with session A goes to server 1, then the next request from that user with session A will also go to server 1.
This is achieved by configuring a hardware load balancer (like an F5) that is capable of providing session stickiness, or by configuring the WebLogic proxy plug-in installed on Apache/IIS/WebLogic.
The first time a request reaches a WLS managed server, it responds with a session ID and appends to it the JVM ID of that server (this is the primary ID). If the managed server is part of a cluster, it also attaches a secondary server JVM ID (the secondary server is the one to which the session is replicated).
The proxy maintains a table of all JVM IDs and the corresponding managed server IPs; it also periodically checks whether the servers are up and running.
The next time a request passes through the proxy with an existing session ID and a primary JVM ID, the proxy parses these and tries to send the request to that server; if it cannot within some time, it tries to send it to the secondary server.
Session replication: this is enabled by default when you configure a WLS cluster with two or more managed servers. Each time any data in a session is updated, that data is replicated to the secondary server too.
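For reference, how a web application's sessions are replicated is governed by the session descriptor in its weblogic.xml; a minimal sketch enabling in-memory replication when deployed to a cluster:

<!-- weblogic.xml: session descriptor controlling in-memory replication -->
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <session-descriptor>
    <!-- replicate sessions to a secondary server when running in a cluster -->
    <persistent-store-type>replicated_if_clustered</persistent-store-type>
  </session-descriptor>
</weblogic-web-app>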
So in your case, if your application users are losing sessions or getting redirected to the login page during normal usage, check that the session did not get invalidated because of a timeout. If you have defined a cluster and are using the WLS proxy, check the proxy debug output to make sure the primary and secondary server IDs are being appended to the session ID.
Finally, there is a simple example in the WLS sample application deployments that you can use to test session replication and failover functionality.
So, to prove why the session is getting lost:
1) Check the server log to see if the session got invalidated because of a timeout.
2) If using wlproxy, enable debug logging, and the next time the issue happens check in the proxy log whether the request was sent to a different server, and whether that server was not the secondary server.