LDAP Connections - ldap

I have a very basic question about the LDAP Protocol:
Can a client stay connected for an indefinite period of time, or does each authentication require opening and closing a TCP connection?

Professional-quality LDAP servers can be configured to terminate client connections after a period of time, a maximum number of operations, or other conditions; or, alternatively, to leave the client connected indefinitely. Ask your LDAP server administrator whether client connections are terminated under any of the conditions listed, or perhaps others.

In addition to what Terry says, professional-quality LDAP client APIs use a connection pool to hide all these gory details from you, to keep connections open as long as possible, and to recover from situations where the server imposes a connection-termination rule.
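For example, the JDK's built-in LDAP provider can pool connections via a single environment property. A minimal sketch, assuming a hypothetical host; com.sun.jndi.ldap.connect.pool is the standard JNDI property name:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.ldap.InitialLdapContext;
    import javax.naming.ldap.LdapContext;

    public class PooledLdapSketch {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, Object> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // hypothetical host
            env.put("com.sun.jndi.ldap.connect.pool", "true"); // borrow from the shared pool
            LdapContext ctx = new InitialLdapContext(env, null);
            // ... perform operations; the provider reuses pooled connections ...
            ctx.close(); // returns the underlying connection to the pool, not the server
        }
    }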

LDAP servers may implement multiple limits on the server side, and LDAP client APIs also provide options to set limits on the client side. Some of the server-side limits (in the case of Oracle DSEE) are:
Size limit - the number of search result entries returned.
Time limit - the time taken to process the request.
Idle time limit - how long the connection may stay idle (keepalives at load balancers can keep a connection alive); the server access log marks connections closed because of idle time.
Lookthrough limit - the number of candidate entries to look through for a given LDAP search.
Client APIs may also set their own time and size limits, as in the sketch below.
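For instance, with JNDI the client-side limits go on the SearchControls object passed to search(); a minimal sketch with illustrative values:

    import javax.naming.directory.SearchControls;

    public class ClientLimitsSketch {
        static SearchControls clientLimits() {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setCountLimit(500);   // client-side size limit: return at most 500 entries
            controls.setTimeLimit(10000);  // client-side time limit: 10 seconds, in milliseconds
            return controls; // pass as the third argument to DirContext.search(name, filter, controls)
        }
    }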

Related

How to make an equal number of connections from a Geode native client to each server?

I have a requirement where I have configured 'min-connections = number of client threads * number of Geode cache servers',
so that under load threads never contend for a connection, and no new connection is needed even if all threads end up accessing a single server.
The problem is that it is the locator that decides how many connections the client may create to a given server JVM, based on its load probe.
I want to ignore all that and simply have an equal number of connections to every cache server JVM.
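The question concerns the native client, but as an illustration of the pool-sizing part of the idea in Geode's Java client API: pinning min-connections and max-connections to the same value fixes the total pool size. A hedged sketch with a hypothetical locator address and made-up thread counts; note that this does not override the locator's load probe, which still decides which server each connection lands on:

    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;

    public class FixedPoolClientSketch {
        public static void main(String[] args) {
            int clientThreads = 8;  // hypothetical
            int cacheServers  = 4;  // hypothetical
            int connections   = clientThreads * cacheServers;
            ClientCache cache = new ClientCacheFactory()
                    .addPoolLocator("locator.example.com", 10334) // hypothetical locator
                    .setPoolMinConnections(connections)
                    .setPoolMaxConnections(connections) // pin the pool to a fixed size
                    .create();
            // The locator's load probe still distributes these connections
            // across servers; only the total count is fixed here.
            cache.close();
        }
    }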

Active connections on web farm

I'm trying to build a simple chat with websockets. I'm also displaying the current active users in the chat, and here is where the problems start: we use a web farm.
A user connects through a load balancer to a server. When a new connection hits a server, it increments a counter in a SQL database and notifies the other servers in the farm through RabbitMQ.
All other servers fetch the new data and send that number back to their connected users.
If a user disconnects, the same happens: the server decrements the counter in the SQL database, and through RabbitMQ all other servers learn about it.
But what happens when a server dies? For example, if 10 users are connected to that server and it goes down, all of them are disconnected, but the database is never updated to reflect that.
What's the best solution to get the total amount of active users in a web farm? And notifying the users when this amount has changed?
Thanks in advance!
Oh btw, we're using SignalR.
I think the typical way to deal with nodes asynchronously disconnecting from a mesh is to implement a heartbeat/keep-alive mechanism. In this case the heartbeat message would be between servers and there must also be an accessible record of which users are connected to which server. When a server does not produce a heartbeat for a period of time, then all other servers can update their records and mark all the users associated with the server as disconnected.
Looks like you may have a few options for keeping track of users (a SQL database, or every server listening for RabbitMQ messages). As for the heartbeat, you can implement it yourself or see whether the load balancer's failure-detection mechanism can be utilized.
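A minimal in-memory sketch of the heartbeat idea in Java (the stack in question is SignalR/.NET; the server id, interval, and threshold here are made up). In production the last-seen map would live in shared storage such as the SQL database, and the "mark users disconnected" step would update the counter and fan out via RabbitMQ:

    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ServerHeartbeatSketch {
        static final long STALE_AFTER_MS = 15_000; // assumed threshold
        static final Map<String, Instant> lastSeen = new ConcurrentHashMap<>();

        public static void main(String[] args) {
            ScheduledExecutorService ses = Executors.newScheduledThreadPool(2);
            String self = "server-A"; // hypothetical server id
            // Publish our own heartbeat every 5 seconds.
            ses.scheduleAtFixedRate(() -> lastSeen.put(self, Instant.now()),
                    0, 5, TimeUnit.SECONDS);
            // Reap servers whose heartbeat went stale and flag their users.
            ses.scheduleAtFixedRate(() -> lastSeen.forEach((server, seen) -> {
                if (Instant.now().toEpochMilli() - seen.toEpochMilli() > STALE_AFTER_MS) {
                    lastSeen.remove(server, seen);
                    System.out.println(server + " is down; mark its users disconnected");
                }
            }), 5, 5, TimeUnit.SECONDS);
        }
    }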

LDAP connection reset timer

I am creating an LDAP connection using InitialLdapContext.
I see there are options like
com.sun.jndi.ldap.read.timeout - time to wait for a read operation
com.sun.jndi.ldap.connect.timeout - time to wait for a connect operation.
I have a requirement where the LDAP connection is still active, but I must terminate it based on a timer.
For example: there are three LDAP servers and a timer of 5 minutes. Once an LDAP connection is opened, it should stay active for only 5 minutes, then be terminated and reconnected.
It is something like a maximum session time for an LDAP session.
Is there any flag like com.sun.jndi.ldap.read.timeout or com.sun.jndi.ldap.connect.timeout for this purpose?
Thanks in advance.
I would suggest using a connection pool. The JNDI LDAP provider documentation shows that there is a system property called com.sun.jndi.ldap.connect.pool.timeout, which would handle the disconnect part for you, though not the automatic reconnect. However, establishing a new connection should not be very expensive (unless you really need to optimize for speed/scale).
The UnboundID LDAP SDK (disclaimer: I work for UnboundID) has more flexible options. See the LDAPConnectionPool class for more details.
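For what it's worth, a minimal UnboundID sketch of the "maximum session time" behavior, assuming a hypothetical host and credentials; setMaxConnectionAgeMillis tells the pool to retire and replace connections older than the given age:

    import com.unboundid.ldap.sdk.LDAPConnection;
    import com.unboundid.ldap.sdk.LDAPConnectionPool;
    import com.unboundid.ldap.sdk.LDAPException;

    public class MaxAgePoolSketch {
        public static void main(String[] args) throws LDAPException {
            // hypothetical host and bind credentials
            LDAPConnection conn = new LDAPConnection("ldap.example.com", 389,
                    "cn=admin,dc=example,dc=com", "password");
            LDAPConnectionPool pool = new LDAPConnectionPool(conn, 10);
            // Close and transparently re-establish any connection older than 5 minutes.
            pool.setMaxConnectionAgeMillis(5L * 60L * 1000L);
            // ... use pool.search(...) etc.; the pool handles the reconnects ...
            pool.close();
        }
    }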

constant connection to redis server

I've noticed that when I run a PHP script on my Redis server (simple set/get) it loads in under 1 ms. If I have two servers, a web server and a Redis server, it takes a good 15 ms for the web server to connect, set, and get. Is there a way to keep a constant connection between the two servers so I don't need to reconnect every single time the script is called?
Whether you get a persistent connection depends on the client library you use to communicate with Redis: some support or create a persistent connection, or a pool of pre-created connections, to save the initial handshake on each request.
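The question is about PHP (where, for example, phpredis's pconnect keeps a persistent connection), but as an illustration of the pooling idea in Java using the Jedis client, with a hypothetical host:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;

    public class RedisPoolSketch {
        public static void main(String[] args) {
            // The pool keeps TCP connections open between operations,
            // avoiding the per-request connect cost described above.
            try (JedisPool pool = new JedisPool("redis.example.com", 6379)) { // hypothetical host
                try (Jedis jedis = pool.getResource()) {
                    jedis.set("greeting", "hello");
                    System.out.println(jedis.get("greeting"));
                } // returns the connection to the pool rather than closing it
            }
        }
    }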

Tomcat - Configuring maxThreads and acceptCount in Http connector

I currently have an application deployed using Tomcat that interacts with a Postgres database via JDBC. The queries are very expensive, so what I'm seeing is a timeout caused by Tomcat or Apache (Apache sits in front of Tomcat in my configuration). I'm trying to limit the database to 20-30 simultaneous connections so that it is not overwhelmed. I've done this in the connection pool configuration, setting maxActive to 30 and maxIdle to 20. I also bumped up maxWait.
In this scenario I'm limiting the USE of the database, but I want the connections/requests to be POOLED within Tomcat. Apache can accept 250 simultaneous requests. So I need to ensure Tomcat can also accept this many, but handle them appropriately.
Tomcat has two settings in the HTTP Connector config file:
maxThreads - "Max number of request processing threads to be created by the Http Connector, which therefore determines the max number of simultaneous requests that can be handled."
acceptCount - "The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused."
So I'm guessing that if I set maxThreads to the max number of JDBC connections (30), then I can set acceptCount to 250-30 = 220.
I don't quite understand the difference between a thread that is WAITING on a JDBC connection to open up from the pool, versus a request that is queued by the connector... My thought is that a queued request consumes fewer cycles, whereas a running thread waiting on the JDBC pool spends cycles checking the pool for a free connection...?
Note that the HTTP connector is for incoming HTTP requests and is unrelated to JDBC. You probably want to configure the JDBC connection pool separately, for example via the connection properties of Tomcat's jdbc-pool:
http://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
Unless your application handles requests in a manner where it directly connects to the database on a per-HTTP-connection basis, you should configure your JDBC connection pool based on what your database software is set to / can handle, and your maxThreads based on what your application / hardware can handle.
Keeping the maxActive value (of the DB connection pool) lower than maxThreads (the number of concurrent request-processing threads) makes sense in most cases. You can set acceptCount to a higher value depending on the traffic you expect on your website and how fast a single request can be processed. A hedged configuration sketch follows.
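A minimal sketch of the pool side in code (the same settings are usually given as attributes on a Resource element in context.xml), using Tomcat's jdbc-pool API with the values discussed above and a hypothetical database URL:

    import org.apache.tomcat.jdbc.pool.DataSource;
    import org.apache.tomcat.jdbc.pool.PoolProperties;

    public class TomcatPoolSketch {
        static DataSource dataSource() {
            PoolProperties p = new PoolProperties();
            p.setUrl("jdbc:postgresql://db.example.com:5432/app"); // hypothetical URL
            p.setDriverClassName("org.postgresql.Driver");
            p.setMaxActive(30);   // hard cap on simultaneous DB connections
            p.setMaxIdle(20);     // idle connections kept around for reuse
            p.setMaxWait(10000);  // ms a request thread blocks waiting for a connection
            DataSource ds = new DataSource();
            ds.setPoolProperties(p);
            return ds;
        }
    }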