LoadTest on dummy server succeeds after setting Time to Live or TTL in HttpConnPool, but what does it do? - apache

What does the Time to Live (TTL) variable in HttpConnPool.class from the package org.apache.http.impl.conn do?
I was running some load tests against a dummy server. When I was sending close to 9 requests per second, I got random NoHttpResponseException errors: "target failed to respond" or "dummy server failed to respond".
Then I added a property called "TTL", or "TimeToLive", and gave it a value, and the NoHttpResponseException stopped occurring. I would like to know what this variable does that prevents the NoHttpResponseException from arising in the first place.

Actually I have figured out the answer myself.
In my load testing, initially we got "NoHttpResponseException, target server #Somelink:PortNumber failed to respond." during loadtest because httpClient maintains persistent connections meaning one and same connection to send multiple requests. It is more efficient this way. There is an evictor thread which we have set for certain milliseconds or seconds. The evictor thread will remove idle connection after certain milliseconds. During production there is a possibility of having a idle connection as we do not have traffic all the time. Now during Load test, the connection will not be idle as we keep sending requests all the time to the client server. Hence the connection will not be evicted and the TTL property was set to Default value of "-1" which means infinite (This is for my application, for every application it depends on the value set by the developer).
TTL is the property that defines how long a connection must be active regardless if its idle or not. If the property is set to "-1", then the connection will remain active forever or at least until the client server closes it. The client server usually closes the connection after certain time. No server maintains a connection forever. A new connection will always be established.
During this time when the client close our connection, our server will assume that the connection is established but the client did not send a response. Hence it returns NoHttpResponseException i.e., the target server failed to respond. Adding TTL property will ensure to remove any persistent connection regardless if it is idle or not. Hence we will always have a new connection preventing an NoHttpResponseException.
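For reference, a minimal sketch (assuming Apache HttpClient 4.4+ and purely illustrative values) of how the TTL and the idle-connection evictor described above can be configured on the client builder:

import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class PooledClientFactory {
    public static CloseableHttpClient build() {
        return HttpClients.custom()
                // TTL: retire every pooled connection after 30 seconds,
                // whether or not it has been idle (illustrative value).
                .setConnectionTimeToLive(30, TimeUnit.SECONDS)
                // Evictor thread: close connections that have been idle
                // for 5 seconds, and drop connections past their TTL.
                .evictIdleConnections(5, TimeUnit.SECONDS)
                .evictExpiredConnections()
                .build();
    }
}

With a finite TTL like this, even a connection that never sits idle is eventually closed and re-established, which matches the explanation above.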
I hope this helps.

Related

ADO.NET pooled connections are unable to be reused

I'm working on an ASP.NET MVC application which uses EF 6.x to work with my Azure SQL database. Recently, with increased load, the app started to get into a state where it is unable to communicate with the SQL server anymore. I can see that there are 100 active connections to my database using exec sp_who, and any new connection fails to be created with the following error:
System.Data.Entity.Core.EntityException: The underlying provider
failed on Open. ---> System.InvalidOperationException: Timeout
expired. The timeout period elapsed prior to obtaining a connection
from the pool. This may have occurred because all pooled connections
were in use and max pool size was reached.
Most of the time the app works with an average active connection count of 10 to 20, and load doesn't change this number... Even when load is high it stays at 10-20. But in certain situations it can jump to 100 in less than a minute, without any ramp-up time, and that puts the app in a state where all my requests are failing. All those 100 connections are in a sleeping state, awaiting command.
The good part is I found a workaround which helps me mitigate the issue: clearing the connection pool from the client side. I'm using SqlConnection.ClearAllPools(), and it instantly closes all the connections; sp_who shows my regular 10-20 connections after that.
The bad part, I still don't know the root cause.
Just to clarify: the app load is about 200-300 concurrent users, which generate 1000 requests per minute.
With the great suggestion from #DavidBrowne to track leaked connections with a simple pattern, I was able to find the leaked connections while configuring the Owin engine:
private void ConfigureOAuthTokenGeneration(IAppBuilder app)
{
    // here, in the Create method, I'm also creating a connection leak tracker
    app.CreatePerOwinContext(() => MyCoolDb.Create());
    ...
}
Basically, with every request Owin creates a connection and doesn't let it go, and when the Web API load increases I run into trouble.
Could this be the real cause, and is Owin smart enough to lazily create a connection only when needed (using the function provided) and release it once it has been used?
It's very unlikely that this is caused by anything other than your application code leaking connections.
Here's a helper library you can use to track when a connection is leaked, and report the call site that initially opened the connection.
http://ssbwcf.codeplex.com/SourceControl/latest#SsbTransportChannel/SqlConnectionLifetimeTracker.cs
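The pattern behind that helper (the linked file is C#) is simple enough to sketch. Here is a minimal, hypothetical Java version of the same idea, not the linked library itself: wrap each connection, remember the call site that opened it, and report any wrapper that gets garbage-collected without having been closed.

import java.sql.Connection;

public class TrackedConnection implements AutoCloseable {
    private final Connection delegate;
    // Captures the stack trace of the call site that opened the connection.
    private final Throwable openedAt = new Throwable("connection opened here");
    private volatile boolean closed;

    public TrackedConnection(Connection delegate) {
        this.delegate = delegate;
    }

    public Connection raw() {
        return delegate;
    }

    @Override
    public void close() throws Exception {
        closed = true;
        delegate.close();
    }

    // finalize() keeps the sketch short; java.lang.ref.Cleaner is the modern equivalent.
    @Override
    protected void finalize() {
        if (!closed) {
            System.err.println("Leaked connection, opened at:");
            openedAt.printStackTrace();
        }
    }
}

Hand out TrackedConnection wherever the application opens a connection; any stack trace printed at collection time points straight at the leaking call site.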

python twisted: enforcing a single connection per id

I have a twisted server using SSL sockets and certificates to identify the different clients that connect to the server. I'd like to enforce a state where there is only one connection per possible id. The two ways I can think of are to keep track of connected ids and then not allow a second connection by the same id, or to allow the second connection and immediately terminate the first. I'm trying to do the latter but am having some issues (I'll explain my choice at the end).
I'm storing a list of connections in the factory class, and after the SSL handshake I compare the client's id with that list. If it's already in the list I call .transport.abortConnection() on the existing connection. I then want to do the normal things I do to record the new connection in my database. However, the call to abortConnection() doesn't seem to call connectionLost() directly, which is where I do my cleanup and make the database calls recording that a connection was lost. So my code records that the id connected, but connectionLost() is called later, with the result that the database shows that id as disconnected.
Is there some sort of way to block the incoming second connection from further processing until the first connection has finished processing the disconnection?
Choice explanation: The whole reason I'm doing this is that I have clients behind NATs that appear to change their IP address on a fairly regular basis (once every 1-3 days). The connecting devices just have their connections uncleanly severed and then try to reconnect with the new IP. However, my server isn't notified about the disconnect and usually has to time out the connection. Before the server times out the connection, though, the client sometimes manages to reconnect, and the server then ends up with two apparent connections from the same client. So, typically the first connection is the one I really want to terminate.
Once you have determined the ID of the connection, you can call self.transport.pauseProducing() on the "new" connection's transport, which will prevent any notifications until you call self.transport.resumeProducing(). You can then call newConnection.transport.resumeProducing() from oldConnection.connectionLost(), if a new connection exists.

Difference between TTL and Keep alive

Can anyone tell me the difference between TTL and keep-alive in sockets (C# networking), and also linger? Thanks in advance.
TTL tells a packet how many routers it can pass through before being discarded, while keep-alive determines how long a connection is kept open when there is no activity.
From what I read about linger, I don't see the difference from keep-alive; I may be missing something here.
EDIT: The linger option lets you close the socket while giving it some time to transmit any data still in the outgoing buffer; from this page, we read that
There may still be data available in the outgoing network buffer after
you close the Socket. If you want to specify the amount of time that
the Socket will attempt to transmit unsent data after closing, create
a LingerOption with the enabled parameter set to true, and the seconds
parameter set to the desired amount of time. The seconds parameter is
used to indicate how long you would like the Socket to remain
connected before timing out. If you do not want the Socket to stay
connected for any length of time after closing, create a LingerOption
with the enabled parameter set to false. In this case, the Socket will
close immediately and any unsent data will be lost. Once created, pass
the LingerOption to the Socket.SetSocketOption method. If you are
sending and receiving data with a TcpClient, then pass the
LingerOption to the TcpClient.LingerState method.
Time to live is the number of layer-3 hops (routers) a network packet may cross. Keep-alive time is the time the socket stays open when no data is being sent or received.
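To make the distinction concrete, here is a minimal sketch of setting keep-alive and linger on a TCP socket (in Java rather than C#, purely as an illustration; the endpoint and the 5-second linger value are made up). TTL, by contrast, is a hop count in the IP header of each packet and is normally left at the operating-system default for TCP sockets.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SocketOptionsDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            // Keep-alive: periodically probe an idle connection so a dead peer
            // is eventually detected instead of the socket staying open forever.
            socket.setKeepAlive(true);

            // Linger: on close(), wait up to 5 seconds for unsent data to be
            // transmitted before the socket is torn down.
            socket.setSoLinger(true, 5);

            // TTL is not set here: it is a per-packet IP-header field handled
            // by the operating system for ordinary TCP sockets.

            socket.connect(new InetSocketAddress("example.com", 80)); // hypothetical endpoint
        }
    }
}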

Is it possible to have asynchronous processing

I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions for the implementation at the server end. Basically what I need is this:
1. A client connects to the server. I maintain the socket and metadata about the socket; the metadata describes which updates need to be sent to this client.
2. The server process then waits for new client connections.
3. One other process has the list of all the open sockets, goes through each of them, and sends the updates where required.
Can we do something like this in an Apache module:
1. An Apache process gets the new connection and maintains the state for the connection. It keeps the state in some global memory and returns to the root process to signal that it is done, so the root process can accept the next connection.
2. Although the Apache process has returned its status to the root process, it keeps executing in parallel, going through its global store and sending updates to the clients, if any.
So, can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections while at the same time processing the previous connections?
This is a complicated and inefficient model of updating. Your server will try to update clients that have closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, for continuous updates, Ajax is used in a polling model. The client has a JavaScript timer that, when it fires, hits a service that provides updated data. The client keeps getting updates at regular intervals without you having to write an Apache module.
Would this model work for your scenario?
More reasons to opt for polling instead of push:
Periodic_Refresh
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do the improved polling:
JavaScript connects to Apache and asks for an update;
if there are no updates, then instead of answering immediately the module uses SUSPENDED;
some time later, after an update or a timeout happens, a callback fires somewhere;
the callback gives an update (or a "no updates" message) to the client and resumes the connection;
the client goes back to step 1, repeating the poll, which with Keep-Alive will reuse the same connection.
That way the number of round trips between the client and the server is decreased, and the client receives the update immediately. (This is known as Comet or Reverse Ajax, AFAIK.)
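The same suspend-until-update idea can be sketched outside an Apache module. Below is a minimal, hypothetical long-polling servlet using the Servlet 3.0 async API (the /updates path, the 30-second timeout, and the publish() hook are illustrative, not part of the setup described above): each request is parked until an update arrives, then completed so the client can poll again.

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/updates", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // Requests that are currently suspended, waiting for the next update.
    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // suspend: do not answer yet
        ctx.setTimeout(30_000);                // give up after 30 seconds
        waiting.add(ctx);
    }

    // Called by whatever produces updates; resumes every parked request.
    public void publish(String update) throws IOException {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            ctx.getResponse().getWriter().write(update);
            ctx.complete();                    // resume; the client polls again
        }
    }
}

A timeout listener could be added to return an explicit "no updates" response instead of letting the container time the request out.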

WCF: What happens if a channel is established but no method is called?

In my specific case: A WCF connection is established, but the only method with "IsInitiating=true" (the login method) is never called. What happens?
In case the connection is closed due to inactivity after some time: Which setting configures this timeout? Is there still a way for a client to keep the connection alive?
Reason for this question: I'm considering the above case as a possible security hole. Imagine many clients connecting to a server without logging in, thus preventing other clients from connecting due to bandwidth problems, port shortage, lack of processing power, or ...
Am I dreaming, or is this an actual issue?
The WCF client side proxy will close the connection (if open) when it goes out of scope, e.g. when the method it is being used in terminates.
If you're using sessions (but that only kicks in if you have indeed established a session - after a method has been called), there's an inactivityTimeout setting on the sessions, both on the client and the server side - the smaller value "wins", so to speak.
If your "concurrentSessions" setting is quite low on your server, this might be an issue - but again, this only kicks in when there is an actual session in place, i.e. at least one method has been called - and in that case, the inactivity timeout on the session will clear out those unused sessions as needed.