I am testing losing connection in a Voximplant call. The Disconnected event fires approximately one minute after the connection is lost. Can I adjust this value?
The server waits for the client to reconnect if the connection is lost. If you don't need this feature, you can send pings from the client via sendMessage and check for them, for example, every 2-3 seconds. If no pings arrive, call hangup (see the sketch below).
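A minimal sketch of that watchdog, with the call abstracted behind a small interface: sendMessage and hangup exist on Voximplant call objects, but onMessage and the timing constants below are placeholders for the SDK's message event and your own tuning, not the exact API.

```typescript
// Sketch of the ping/watchdog idea; `CallLike` stands in for the SDK call object.
interface CallLike {
  sendMessage(text: string): void;
  hangup(): void;
  onMessage(handler: (text: string) => void): void; // placeholder for the SDK's message event
}

const PING_INTERVAL_MS = 2_000; // send a ping every 2 seconds
const PING_TIMEOUT_MS = 6_000;  // hang up after roughly 3 missed pings

// One side keeps sending pings over the in-call messaging channel...
function startPinging(call: CallLike): void {
  setInterval(() => call.sendMessage("ping"), PING_INTERVAL_MS);
}

// ...and the other side hangs up as soon as pings stop arriving,
// instead of waiting ~1 minute for the Disconnected event.
function watchPings(call: CallLike): void {
  let lastPing = Date.now();
  call.onMessage((text) => {
    if (text === "ping") lastPing = Date.now();
  });
  setInterval(() => {
    if (Date.now() - lastPing > PING_TIMEOUT_MS) {
      call.hangup();
    }
  }, PING_INTERVAL_MS);
}
```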
What does the Time to Live (TTL) variable in HttpConnPool.class from the package org.apache.http.impl.conn do?
I was running some load tests against a dummy server. When I was sending close to 9 requests per second, I started getting random NoHttpResponseException: target failed to respond (or "dummy server failed to respond") errors.
Then I added a property called "TTL" or "TimeToLive" and gave it a value, and the NoHttpResponseException stopped occurring. I would like to know what this variable does that prevents the NoHttpResponseException from arising in the first place.
Actually I have figured out the answer myself.
During my load test we initially got "NoHttpResponseException, target server #Somelink:PortNumber failed to respond" because HttpClient maintains persistent connections, meaning one and the same connection is used to send multiple requests; it is more efficient this way. There is an evictor thread, configured with a timeout of some milliseconds or seconds, which removes a connection once it has been idle for that long. In production there may well be idle connections, since we do not have traffic all the time. During a load test, however, the connections are never idle because we keep sending requests to the target server, so they are never evicted, and the TTL property was left at its default value of "-1", which means infinite (this is for my application; for every application it depends on the value set by the developer).
TTL is the property that defines how long a connection may stay alive, regardless of whether it is idle or not. If the property is set to "-1", the connection remains alive forever, or at least until the target server closes it. The target server usually closes the connection after a certain time; no server maintains a connection forever, so a new connection will eventually have to be established.
When the target server closes the connection on its side, our client still assumes the connection is established, sends a request over it, and never gets a response; hence the NoHttpResponseException, i.e. "the target server failed to respond". Setting the TTL property ensures that every persistent connection is discarded after that lifetime, regardless of whether it is idle or not, so we always end up on a reasonably fresh connection, which prevents the NoHttpResponseException.
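This is not Apache HttpClient's actual code, just a tiny sketch (with made-up names) of the two eviction rules described above, to show why only the TTL rule helps under constant load:

```typescript
// Illustration only: a made-up pool entry with the two timestamps that matter.
interface PooledConnection {
  createdAt: number;   // when the connection was opened
  lastUsedAt: number;  // when it last carried a request
  close(): void;
}

const MAX_IDLE_MS = 30_000; // evictor rule: drop connections idle longer than this
const TTL_MS = 60_000;      // TTL rule: drop connections older than this, idle or not
                            // (a TTL of -1 would mean "never", the trap during a load test)

function evict(pool: PooledConnection[], now = Date.now()): PooledConnection[] {
  return pool.filter((conn) => {
    const idleTooLong = now - conn.lastUsedAt > MAX_IDLE_MS;     // never true under constant load
    const tooOld = TTL_MS >= 0 && now - conn.createdAt > TTL_MS; // true even under constant load
    if (idleTooLong || tooOld) {
      conn.close();
      return false; // drop it from the pool; the next request opens a fresh connection
    }
    return true;
  });
}
```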
I hope this helps.
When subscribing to real-time notifications, I go through the normal handshake, subscribe, connect flow.
Once the connection returns with events, I reconnect and wait for the next response to return. My question is:
If events are generated between the first response and the next reconnect, could they be lost?
As an example: a synchronous application which processes the returned response data after it returns, and only reconnects once the data processing has finished, could cause a significant delay between the response and the next reconnect. Are the Cumulocity events generated during that delay buffered in the real-time queue for that particular client ID, or are they just lost?
Another possible example is when the client ID is no longer valid (this seems to happen every day at midnight): I have to resubscribe, causing a period of time during which no one is subscribed.
The client ID that you receive when handshaking is connected to a queue on the server side. That queue keeps all notifications that you are not able to receive until the next connect. It delivers them when you reconnect. (Try it out with Postman: After a connect returns, send a couple of events, then connect again. You will notice that you will get all events at once.)
However, as you noticed, the queue is not kept forever. If you are not able to reconnect within two hours (I believe), the queue is thrown away so as not to block server resources, which is what you are seeing. In that case, you need to query the database to determine any missed events (e.g., poll any operations in pending state from devices).
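For instance, a Bayeux-style connect loop looks roughly like this (a sketch only: the /realtime URL, the channel name, and the payload shapes are illustrative, not the exact Cumulocity API):

```typescript
// Each /meta/connect returns everything queued for this clientId since the
// previous connect; reconnecting immediately keeps the gap as small as possible.
async function connectLoop(baseUrl: string, clientId: string): Promise<void> {
  for (;;) {
    const res = await fetch(`${baseUrl}/realtime`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify([
        { channel: "/meta/connect", clientId, connectionType: "long-polling" },
      ]),
    });
    const messages: Array<{ channel: string; data?: unknown }> = await res.json();

    // Process whatever accumulated in the server-side queue while we were away...
    for (const msg of messages) {
      if (msg.data !== undefined) handleNotification(msg.data);
    }
    // ...then loop and reconnect; events generated in between stay queued under
    // this clientId until the queue itself expires.
  }
}

function handleNotification(data: unknown): void {
  console.log("notification:", data);
}
```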
Can anyone tell me the difference between TTL and keep-alive in sockets (C# networking), and also linger? Thanks in advance.
TTL tells the packet how many routers it can go through before being dropped, while keep-alive tells the connection how long it must be kept open when there is no activity.
From what I read about linger, I don't see the difference from keep-alive; I may be missing something here.
EDIT: The linger option lets you close the socket while telling it to wait some time for any data still in the outgoing buffer to be sent; from this page, we read that
There may still be data available in the outgoing network buffer after you close the Socket. If you want to specify the amount of time that the Socket will attempt to transmit unsent data after closing, create a LingerOption with the enabled parameter set to true, and the seconds parameter set to the desired amount of time. The seconds parameter is used to indicate how long you would like the Socket to remain connected before timing out. If you do not want the Socket to stay connected for any length of time after closing, create a LingerOption with the enabled parameter set to false. In this case, the Socket will close immediately and any unsent data will be lost. Once created, pass the LingerOption to the Socket.SetSocketOption method. If you are sending and receiving data with a TcpClient, then set the LingerOption on the TcpClient.LingerState property.
Time to live is the number of hops (routers) a network packet may cross before it is discarded. Keep-alive time is the time the socket stays open when no data is being sent or received.
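The question is about C#, but for illustration here is how two of these three knobs look on Node sockets (a sketch; the host, port, and timeout values are arbitrary):

```typescript
import * as net from "node:net";
import * as dgram from "node:dgram";

// Keep-alive: probe an idle TCP connection so a dead peer is eventually detected.
const tcp = net.createConnection({ host: "example.com", port: 80 }, () => {
  tcp.setKeepAlive(true, 60_000); // first keep-alive probe after 60 s of inactivity
});

// TTL: an IP-header hop limit, decremented by each router; at 0 the packet is discarded.
const udp = dgram.createSocket("udp4");
udp.bind(() => {
  udp.setTTL(16); // packets from this socket may cross at most 16 routers
});

// Linger (SO_LINGER) is not directly exposed by Node's net module; in .NET it is
// the LingerOption / TcpClient.LingerState described in the quoted documentation.
```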
I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to it.
I am looking for suggestions on implementing this at the server end. Basically what I need is this:
1. A client connects to the server. I maintain the socket and metadata about the socket; the metadata describes which updates need to be sent to this client.
2. The server process then waits for new client connections.
3. Another process has the list of all open sockets, goes through each of them, and sends updates if required.
Can we do something like this in an Apache module?
1. An Apache process gets the new connection and maintains state for it. It keeps the state in some global memory and returns to the root process to signal that it is done, so that new connections can be accepted.
2. Although the Apache process has returned its status to the root process, it keeps executing in parallel, going through its global store and sending updates to the clients, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections while at the same time processing the previous ones?
This is a complicated and inefficient model of updating. Your server will try to update clients that have closed down, and it has to maintain all that client data and metadata (last update time, etc.).
Usually, for continuous updates, Ajax is used in a polling model. The client has a JavaScript timer that, when it fires, hits a service that provides updated data, so the client keeps getting updates at regular intervals without you having to write an Apache module (see the sketch below).
Would this model work for your scenario?
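A minimal polling sketch on the browser side, assuming a hypothetical /updates endpoint that returns JSON:

```typescript
// Poll the (hypothetical) /updates endpoint once per second and render the result.
async function pollOnce(): Promise<void> {
  const res = await fetch("/updates");
  if (!res.ok) return; // skip this tick on server errors
  const data = await res.json();
  document.getElementById("ticker")!.textContent = JSON.stringify(data);
}

setInterval(() => {
  pollOnce().catch(console.error); // never let one failed poll kill the timer
}, 1000);
```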
More reasons to opt for polling instead of push: Periodic_Refresh
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do the improved polling:
JavaScript connects to Apache and asks for an update;
if there are no updates, then instead of answering immediately the module uses SUSPENDED;
some time later, when an update or a timeout happens, a callback fires;
the callback gives the update (or a "no updates" message) to the client and resumes the connection;
the client goes back to step 1, repeating the poll, which with Keep-Alive reuses the same connection.
That way the number of round trips between the client and the server is reduced and the client receives updates immediately. (This is known as Comet / Reverse Ajax, AFAIK.)
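On the client side, the loop described above looks roughly like this (a sketch; the /poll URL and its response shape are assumptions):

```typescript
// Long-polling loop: each request is held open by the suspended connection until
// an update or a timeout, then the answer is processed and the client polls again.
async function longPoll(): Promise<void> {
  for (;;) {
    try {
      const res = await fetch("/poll");  // held open until an update or timeout
      const body = await res.json();     // assumed shape: { updates?: unknown[] }
      if (body.updates && body.updates.length > 0) {
        render(body.updates);
      }
    } catch {
      // network hiccup: back off briefly before reconnecting
      await new Promise((resolve) => setTimeout(resolve, 2000));
    }
  }
}

function render(updates: unknown[]): void {
  console.log("updates:", updates); // placeholder for real rendering
}

longPoll();
```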
I am developing a client/server application with netTcpBinding, and I need to be notified if my connection to the server goes down.
On the server side, if a client disconnects I can detect it instantly with the CommunicationObject.Faulted event (with reliable sessions off). However, on the client side it seems I have no way to know if the server goes down; the same event doesn't fire. By the way, I am setting receiveTimeout to infinite. Some people suggested a heartbeat or ping operation to check whether the server is alive, but I think such approaches have a big impact at the WCF level: after all, it's not a simple packet you send, it's a whole WCF request. What should I do?
There seems to be a common misconception that, in order to find out on the client side whether a WCF session is still alive, one has to implement some kind of custom ping or heartbeat operation on the service. However, the WCF framework, when configured correctly, already does this for you in the background.
The trick is to set the ReliableSession.InactivityTimeout to a period that is short enough. For instance, if you set it to 30 seconds, the ICommunicationObject.Faulted event will be raised on the client proxy between 30 seconds (minimum) and approximately 45 seconds (maximum) after a service breakdown. The exact delay depends on the rhythm of the WCF-internal session keep-alive control timer and the specific time of the breakdown.
Of course, this can only work for reliable-session-capable bindings, combined with the right session properties (ServiceContractAttribute.SessionMode, ServiceBehaviorAttribute.InstanceContextMode, OperationContractAttribute.IsInitiating, and OperationContractAttribute.IsTerminating).