We are implementing a pattern where our client checks to see if a document exists in Redis, and if it does not, we then fetch the data from the database.
We are trying to handle a case where the Redis server is down or unreachable so we can then immediately fetch from the database.
However, when we test our code by intentionally taking down the Redis server, the call to Redis via the ServiceStack client does not time out for approximately 20 seconds.
We tried setting the RedisClient.SendTimeout property to various values (1000, 100, 1), but the timeout always happens after roughly 20 seconds. We also tried using the .Ping() method, but we have the same problem.
Question: how can we handle the scenario where the Redis server is down and we want to switch to a DB fetch more quickly?
I had a similar problem sending e-mail: sometimes there's no answer and the built-in timeout (of SmtpClient) does nothing. Eventually I'd get a timeout which I believe comes from the underlying TCP/IP layer. I'd set the timeout in the client a little shorter than the "brutal timeout" on Task.Wait.
My solution was to wrap the call in a Task, and use a timeout on that:
// this special construct is to set a timeout (the SmtpClient timeout does not seem to work)
var task = Task.Factory.StartNew(() => SendEmail(request));
if (!task.Wait(6000))
Log.Error("Could not send mail to {0}. Timeout (probably on TCP layer).".Fmt(request.To));
Maybe something similar would work for you, just replace the SendEmail with a method that does the Redis thing.
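For example, something along these lines (a rough sketch only; GetFromRedis, GetFromDatabase, Document and documentId are placeholder names standing in for your own code):

var redisTask = Task.Factory.StartNew(() => GetFromRedis(documentId));
Document doc = null;
try
{
    // wait at most 500 ms for Redis; Wait() also rethrows any exception the Redis call produced
    if (redisTask.Wait(500))
        doc = redisTask.Result;
}
catch (AggregateException ex)
{
    Log.Error("Redis lookup failed: {0}".Fmt(ex.InnerException.Message));
}
if (doc == null)
    doc = GetFromDatabase(documentId);   // timed out, failed, or cache miss: fall back to the DB

One thing to keep in mind: the abandoned task keeps running until the underlying ~20-second timeout finally fires, so it's safest not to reuse that particular RedisClient instance in the meantime.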
You should not rely on the Redis server to tell you how long the request should wait before flipping to plan B. Put this logic in the code actioning the request, so that it is independent of how the Redis server is set up.
Related
We use the C++ SQLDriverConnect function to connect to our backend MS SQL Server. Via the connection attributes it is possible to set a login timeout (SQL_ATTR_LOGIN_TIMEOUT), which defines how long SQLDriverConnect (which essentially performs the login) may take before failing.
Now, if I stop the SQL service on the server, this timeout is respected, and exactly X seconds after calling the function I get my connection failure, which is of course correct.
However, if I do not stop the service but instead disable the network adapter on the server, or pull out the server's network cable, SQLDriverConnect does not respect this timeout and only returns much later.
For example, when setting the timeout to 5 seconds, SQLDriverConnect only returns after 53 seconds if the SQL Server is unreachable.
I know this can be solved by writing my own asynchronous connect, which I can always make return after X seconds, but if possible I would prefer not to do this and just use the provided functions and options to control the connection.
I assume the delay is caused by the network stack trying to find the host, but my idea is that we use a higher-level API with a timeout precisely so we don't have to worry about those kinds of things...
Any ideas on how this can be "fixed"?
thx in advance
Wim
Have you tried using SQL_ATTR_CONNECT_TIMEOUT?
I have a quite large "old" WCF Service with many different methods.
Most of these methods are "normal", so they should answer in less than 10 seconds, but there are several methods (8 or 9) that are long-running processes, so they can take a long time to return a response.
The receiveTimeout and sendTimeout were set to 00:40:00 to ensure these processes had enough time to complete.
The problem is that sometimes we have connection issues and the "normal" methods take a really long time to fail...
They are all in the same service because they use a really big model and they wanted to reuse the model from the service in every call (not having a PersonsService.User and a RobotsService.User... because they are the same class in different services).
The first solution I can imagine is to move those long processes into a separate service and set a short timeout on the normal service... but I would have to make a lot of changes because of the shared model...
Is there any way to set a different timeout in each call? Or by service method? Should I chunk the Service anyway?
Thanks in advance!!
First of all, the timeout to configure in your case is OperationTimeout, which specifies how long the client waits for the service to reply before timing out. You can modify the operation timeout before making a call from the client side.
To set the OperationTimeout on the channel, cast your proxy/channel instance to IContextChannel (or IClientChannel, which derives from it, as in the example below) and set OperationTimeout.
For example:
IClientChannel contextChannel = channel as IClientChannel;
contextChannel.OperationTimeout = TimeSpan.FromMinutes(10);
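To address the per-call part of your question: you can change the value between calls on the same channel. A rough sketch (the factory, proxy and method names here are made up for illustration):

// assumes a channel created from a ChannelFactory<IMyService>; method names are placeholders
IMyService proxy = factory.CreateChannel();
IContextChannel contextChannel = (IContextChannel)proxy;

contextChannel.OperationTimeout = TimeSpan.FromSeconds(10);   // tight limit for a "normal" call
proxy.NormalMethod();

contextChannel.OperationTimeout = TimeSpan.FromMinutes(40);   // generous limit for a long-running process
proxy.LongRunningProcess();

(With a generated ClientBase<T> proxy you would set it on proxy.InnerChannel instead.)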
HTH,
Amit
Is it possible to determine the client timeout values on the server? I am in the unfortunate position that I have a long running WCF service (about 90 seconds) and I would like to know beforehand if the client is going to time out.
Any ideas?
Unless you force the client to tell you what his timeout is, you have no way of knowing that.
You could kindly ask for the information by adding a method parameter or a message header (there is a sketch of the header approach below).
You could also try to break your long running call into smaller parts, forcing the client to make subsequent calls if your business allows.
You could use asynchronous calls with a callback, one-way methods, or duplex channels.
There are other possibilities, but we need to know more about your environment.
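If you go the header route, a minimal sketch might look like this (the header name, namespace and value are made up for illustration):

// client side: attach the client's configured timeout to the outgoing call
using (new OperationContextScope((IContextChannel)proxy))
{
    var header = MessageHeader.CreateHeader("ClientTimeoutSeconds", "urn:my-app", 60);
    OperationContext.Current.OutgoingMessageHeaders.Add(header);
    proxy.LongRunningCall();
}

// server side: read it inside the operation (GetHeader throws if the header is missing)
int clientTimeout = OperationContext.Current.IncomingMessageHeaders
    .GetHeader<int>("ClientTimeoutSeconds", "urn:my-app");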
I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data that updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions for the implementation at the server end. Basically what I need is this:
1. A client connects to the server. I maintain the socket and metadata about the socket; the metadata describes which updates need to be sent to this client.
2. The server process now waits for new client connections.
3. One other process has the list of all the open sockets, goes through each of them, and sends the updates if required.
Can we do something like this in an Apache module:
1. An Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signal that it is done, so that the root process can accept new connections.
2. Although the Apache process has returned its status to the root process, it keeps executing in parallel, going through its global store and sending updates to the client, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections and, at the same time, process the previous connections?
This is a complicated and inefficient model of updating. Your server will try to update clients that have already closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, continuous updates are done with Ajax in a polling model. The client has a JavaScript timer that, when it fires, hits a service that provides updated data. The client continues to get updates at regular intervals without you having to write an Apache module.
Would this model work for your scenario?
More reasons to opt for polling instead of push:
Periodic_Refresh
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do an improved form of polling:
JavaScript connects to Apache and asks for an update;
if there are no updates, then instead of answering immediately the module uses SUSPENDED;
some time later, after an update or a timeout happens, a callback fires somewhere;
the callback gives an update (or a "no updates" message) to the client and resumes the connection;
the client goes back to step 1, repeating the poll, which with Keep-Alive will reuse the same connection.
That way the number of round trips between the client and the server is reduced and the client receives the update immediately. (This is long polling, one of the Comet / Reverse Ajax techniques, AFAIK.)
Using techniques as hinted at in:
http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.callbackcontract.aspx
I am implementing a ServerPush setup for my API to get realtime notifications from a server of events (no polling). Basically, the Server has a RegisterMe() and UnregisterMe() method and the client has a callback method called Announcement(string message) that, through the CallbackContract mechanisms in WCF, the server can call. This seems to work well.
Unfortunately, in this setup, if the Server were to crash or is otherwise unavailable, the Client won't know since it is only listening for messages. Silence on the line could mean no Announcements or it could mean that the server is not available.
Since my goal is to reduce polling rather than to achieve immediacy, I don't mind adding a void Ping() method on the Server alongside RegisterMe() and UnregisterMe() that merely exists to test connectivity to the server. Periodically calling this method would, I believe, ensure that we're still connected (and also that no Announcements have been dropped by the transport, since this is TCP).
But is the Ping() method necessary or is this connectivity test otherwise available as part of WCF by default - like serverProxy.IsStillConnected() or something. As I understand it, the channel's State would only return Faulted or Closed AFTER a failed Ping(), but not instead of it.
2) From a broader perspective, is this callback approach solid? This is not for HTTP or Ajax - the number of connected clients will be few (tens of clients, max). Are there serious problems with this approach? As this seems to be a mild risk, how can I prevent a slow or malicious client from blocking the server by not processing its callback queue fast enough? Is there a kind of timeout specific to the callback that I can set without affecting other operations?
Your approach sounds reasonable, here are some links that may or may not help (they are not quite exactly related):
Detecting Client Death in WCF Duplex Contracts
http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html
Having some health check built into your application protocol makes sense.
If you are worried about malicious clients, then add authorization.
The second link I shared above has a sample pub/sub server; you might be able to use that code. A couple of things to watch out for -- consider pushing notifications via async calls or on a separate thread, and set the sendTimeout on the TCP binding.
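For the sendTimeout part, in code it would be something like this (the 5-second value is only an illustration):

// a callback that cannot be delivered within this window throws a TimeoutException
// instead of tying the publisher up indefinitely
var binding = new NetTcpBinding();
binding.SendTimeout = TimeSpan.FromSeconds(5);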
HTH
I wrote a WCF application and encountered a similar problem. My server checked that clients had not 'pulled the plug' by periodically sending a ping to them. The actual send method (asynchronous, since this was the server) had a timeout of 30 seconds. The client simply checked that it received the data every 30 seconds, while the server would catch an exception if the timeout was reached.
Authorisation was required to connect to the server (using the built-in feature of WCF that forces the connecting party to call a particular method first), so from a malicious-client perspective you could easily add code to check for suspicious behaviour and ban the account, while disconnecting users who do not authenticate.
As the server I wrote was asynchronous, there wasn't any way to really block it. I guess that addresses your last point, as the asynchronous send method fires off the ping (and any other outgoing data) and returns immediately. In the SendEnd method it would catch the timeout exception (sometimes multiple for the client) and disconnect them, without any blocking or freezing of the server.
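If it helps, the client-side "did I receive anything recently?" check can be as simple as this (a sketch; the class and member names are made up, and the 30-second interval comes from the setup above):

using System;
using System.Timers;

// client-side watchdog: record when the last ping/announcement arrived and treat prolonged silence as a dead server
class ServerWatchdog
{
    DateTime _lastReceived = DateTime.UtcNow;
    readonly Timer _timer = new Timer(30000);          // check every 30 seconds

    public ServerWatchdog()
    {
        _timer.Elapsed += (sender, e) =>
        {
            // nothing received for two intervals: assume the server is gone
            if (DateTime.UtcNow - _lastReceived > TimeSpan.FromSeconds(60))
                OnServerDown();
        };
        _timer.Start();
    }

    // call this from the callback's Announcement/ping handler
    public void MessageReceived()
    {
        _lastReceived = DateTime.UtcNow;
    }

    void OnServerDown()
    {
        // hypothetical: reconnect, re-register, alert the user, ...
    }
}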
Hope that helps.
You could use a publisher / subscriber service similar to the one suggested by Juval:
http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
This would allow you to persist the subscribers if losing the server is a typical scenario. The publish method in this example also calls each subscriber on a separate thread, so a few dead subscribers will not block the others...
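This is not Juval's actual code, but the per-subscriber fan-out is roughly this shape (ICallback and Announcement stand in for your own callback contract):

using System.Collections.Generic;
using System.Threading;

public interface ICallback { void Announcement(string message); }

public static class Publisher
{
    static readonly List<ICallback> Subscribers = new List<ICallback>();

    public static void Subscribe(ICallback callback)
    {
        lock (Subscribers) Subscribers.Add(callback);
    }

    public static void Publish(string message)
    {
        ICallback[] snapshot;
        lock (Subscribers) snapshot = Subscribers.ToArray();

        foreach (var subscriber in snapshot)
        {
            var s = subscriber;                       // don't capture the loop variable
            ThreadPool.QueueUserWorkItem(_ =>
            {
                try { s.Announcement(message); }      // a slow or dead subscriber only ties up its own worker thread
                catch
                {
                    lock (Subscribers) Subscribers.Remove(s);   // drop subscribers that fault
                }
            });
        }
    }
}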