ServiceStack Redis client timeout

I'm using the ServiceStack RedisClient for caching. How can I set a timeout, so that, for example, if the call takes longer than 5 seconds it returns null?
Does anyone know?
Thanks

There are some operations, like the blocking LPOP/RPOP, that include a timeout.
In general Redis runs in memory and is extremely fast, so it's rare for it to time out on its own. However, the network can be down, so RedisNativeClient (the base class for RedisClient) includes a SendTimeout which you can set to handle this.
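A minimal sketch of the fallback you describe, assuming the usual ServiceStack.Redis surface (the host/port, the 5-second value, and the catch-all return of null are illustrative):

using System;
using ServiceStack.Redis;

public static class CacheReader
{
    // Treat a slow or unreachable Redis as a cache miss and return null.
    public static string TryGetFromCache(string key)
    {
        try
        {
            using (var redis = new RedisClient("localhost", 6379))
            {
                redis.SendTimeout = 5000; // milliseconds, inherited from RedisNativeClient
                return redis.GetValue(key);
            }
        }
        catch (Exception)
        {
            // Timed out or could not reach Redis: behave as if nothing is cached.
            return null;
        }
    }
}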

Related

Unexplained latency with ValueOperations using Jedis

We have Spring Boot web services hosted on AWS. They make frequent calls to a Redis Cluster cache using Jedis.
During load testing, we're seeing increased latency around ValueOperations that we're having trouble figuring out.
The method we've zoomed in on does two operations, a get followed by an expire.
public MyObject get(String key) {
    var obj = (MyObject) valueOps.get(key);
    if (obj != null) {
        // refresh the TTL on every successful read
        valueOps.getOperations().expire(key, TIMEOUT_S, TimeUnit.SECONDS);
    }
    return obj;
}
Taking measurements on our environment, we see that it takes 200ms to call "valueOps.get" and another 160ms calling "expire", which isn't an acceptable amount of latency.
We've investigated these leads:
Thread contention. We don't currently suspect this. To test, we configured our JedisConnectionFactory with a JedisPoolConfig that has blockWhenExhausted=true and maxWaitMs=100, which, if I understand correctly, means that if the connection pool is empty, a thread will block for 100ms waiting for a connection to be released before it fails. We had 0 failures running a load test with these settings.
Slow deserializer. We have our Redis client configured to use GenericJackson2JsonRedisSerializer. We see latency with the "expire" call, which we wouldn't expect to use the deserializer at all.
Redis latency. We used Redis Insights to inspect our cluster, and it's not pegged on memory or CPU when the load test is running. We also examined slowlog, and our slowest commands are not related to this operation (our slowest commands are at 20ms, which we're going to investigate).
Does anyone have any ideas? Even a "it could be this" would be appreciated.

How does releasing TCP ports work with HttpClientFactory?

In the current version of our project, HttpClient has been used for creating requests. But at peak request times most of our TCP ports end up in a waiting state, and they stay that way for about 2 minutes even after the task completes. I read some articles about IHttpClientFactory,
but I'm not sure how this solution can solve our problem. Any ideas will be appreciated.
There are tons of articles out there that tell you why you should not dispose HttpClient per request (which results in what you mentioned and ends up in a socket-exhaustion issue), but should instead use IHttpClientFactory to manage the life-cycle of HttpClient-related services.
This is because each time you make a request with an HttpClient and dispose it after use, the socket is left in a TIME_WAIT state; if you make a few thousand requests in a few seconds, you will run out of sockets. IHttpClientFactory is a contract to better manage your HTTP services and to re-use sockets from the connection pool without you having to manage them yourself.
As a start, go through this; I think it provides sufficient info about what you want to achieve:
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
One of the key points in the article above, to answer your question about how IHttpClientFactory can solve your issue:
Each time you get an HttpClient object from the IHttpClientFactory, a new instance is returned. But each HttpClient uses an HttpMessageHandler that's pooled and reused by the IHttpClientFactory to reduce resource consumption, as long as the HttpMessageHandler's lifetime hasn't expired.
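For completeness, a minimal sketch of that wiring (the client name "backend", the BackendCaller class, and the example URL are placeholders; this assumes the Microsoft.Extensions.Http package):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public static class HttpClientSetup
{
    // Called from Startup.ConfigureServices: register a named client; the factory
    // pools and reuses the underlying HttpMessageHandlers (and their sockets).
    public static void Register(IServiceCollection services)
    {
        services.AddHttpClient("backend", client =>
        {
            client.BaseAddress = new Uri("https://example.com/");
        });
    }
}

// A consumer asks the factory for a client per request instead of creating and
// disposing HttpClient itself, so sockets don't pile up in TIME_WAIT.
public class BackendCaller
{
    private readonly IHttpClientFactory _factory;

    public BackendCaller(IHttpClientFactory factory) => _factory = factory;

    public Task<string> GetAsync(string path)
    {
        var client = _factory.CreateClient("backend");
        return client.GetStringAsync(path);
    }
}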
Hope this helps!

Change timeout for each WCF method or call

I have a quite large "old" WCF Service with many different methods.
Most of these methods are "normal", so they should answer in less than 10 seconds, but several methods (8 or 9) are long processes, so they can take a long time to return a response.
The ReceiveTimeout and SendTimeout were set to 00:40:00 to ensure they had enough time to accomplish these processes.
The problem is sometimes we have connection issues and the "normal" methods take a really long time to crash...
They are all in the same service because they use a really big model and they wanted to reuse the model from the service in every call (not having a PersonsService.User and a RobotsService.User... because they are the same class in different services).
The first solution I can imagine is to move those long processes into a separate service and set a short timeout on the normal service... but I would have to make a lot of changes because of the model use...
Is there any way to set a different timeout for each call? Or per service method? Should I split up the service anyway?
Thanks in advance!!
First of all, the timeout to configure in your case is OperationTimeout, which sets how long the client waits for the service to reply before timing out. You can modify the operation timeout before making a call from the client side.
To set the OperationTimeout on the channel, cast your proxy/channel instance to IContextChannel and set OperationTimeout.
For example:
IContextChannel contextChannel = (IContextChannel)channel;
contextChannel.OperationTimeout = TimeSpan.FromMinutes(10);
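To answer the per-call part of the question: you can set a different OperationTimeout on the proxy just before each call. A sketch of how that could look (IMyService, the endpoint name, the method names, and the timeout values are all placeholders for illustration):

using System;
using System.ServiceModel;

// Assumes a generated/shared IMyService contract and a configured endpoint.
var factory = new ChannelFactory<IMyService>("MyServiceEndpoint");
IMyService proxy = factory.CreateChannel();

// Short timeout for the "normal" methods.
((IContextChannel)proxy).OperationTimeout = TimeSpan.FromSeconds(10);
var user = proxy.GetUser(42);

// Longer timeout only for the long-running methods.
((IContextChannel)proxy).OperationTimeout = TimeSpan.FromMinutes(40);
proxy.RunLongBatchProcess();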
HTH,
Amit

How to implement ServiceStack Redis Client with timeout

We are implementing a pattern where our client checks to see if a document exists in Redis, and if it does not, we then fetch the data from the database.
We are trying to handle a case where the Redis server is down or unreachable so we can then immediately fetch from the database.
However, when we test our code by intentionally taking down the Redis server, the call to Redis via the ServiceStack client does not time out for approximately 20 seconds.
We tried setting the RedisClient.SendTimeout property to various values (1000, 100, 1), but the timeout always happens after approximately 20 seconds. We also tried using the .Ping() method but have the same problem.
Question: how can we handle the scenario where the Redis server is down and we want to switch to a DB fetch more quickly?
I had a similar problem sending e-mail: sometimes there's no answer and the built-in timeout (of SmtpClient) does nothing. Eventually I'd get a timeout, which I believe comes from the underlying TCP/IP layer. I'd set the timeout in the client a little shorter than the "brutal timeout" on Task.Wait.
My solution was to wrap the call in a Task, and use a timeout on that:
// this special construct is to set a timeout (the SmtpClient timeout does not seem to work)
var task = Task.Factory.StartNew(() => SendEmail(request));
if (!task.Wait(6000))
Log.Error("Could not send mail to {0}. Timeout (probably on TCP layer).".Fmt(request.To));
Maybe something similar would work for you, just replace the SendEmail with a method that does the Redis thing.
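Something along those lines might work here too; a rough sketch (MyDocument, GetFromRedis, GetFromDatabase, and the 2-second budget are placeholders, not real APIs):

using System;
using System.Threading.Tasks;

public static MyDocument GetDocument(string key)
{
    try
    {
        // Run the Redis lookup on a worker thread and give it a fixed budget.
        var task = Task.Factory.StartNew(() => GetFromRedis(key));
        if (task.Wait(2000) && task.Result != null)
            return task.Result;
    }
    catch (AggregateException)
    {
        // The Redis call itself threw; treat it the same as a timeout.
    }

    // Redis was down, slow, or simply had no entry: fall back to the database.
    return GetFromDatabase(key);
}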
You should not rely on the Redis server to tell you how long the request should wait before flipping to plan B. Put this logic in the code actioning the request so that it is independent of how the Redis server is set up.

Performance of WCF with net.tcp

I have a WCF net.tcp service hosted with the built-in ServiceHost, and when doing some stress tests I get a strange behavior. The first time I send a bunch of requests, 5 to 10 requests are answered quickly, and the rest return at about 2-second intervals. The second time I send the requests, 10-20 are returned quickly, and the rest at 2-second intervals.
The above repeats until I can get over 100 requests returned quickly, but if I wait a minute or so the memory usage of the service goes down and the requests go back to 5-10 returning quickly.
The service I am testing has a small delay so that I can get many open connections at the same time; if this delay is removed, the requests return so quickly that I have perhaps 2-5 connections open at the same time. This delay is to simulate DB connections and other outgoing stuff.
From the behavior it looks like the ServiceHost is allocating something, threads, class instances, but I can not figure out what it is.
I could have a timer in the client that calls the service to keep it working, but that seems like a bad solution.
If I have a high sustained load to the service it will crunch all requests quickly, but if I have a period of low activity and then a surge of connections comes in the service will be slow.
I guess my question is WHAT it is that gets allocated during high load of the WCF service, and HOW I can configure the service to preallocate more of the things that get allocated.
EDIT:
I did some more testing, and looking at the task manager for the process I can see that when the ServiceHost is "resting" there are 10 threads open, but when I start sending requests the thread count goes up. As long as the thread count is high the ServiceHost can process incoming requests quickly, but if I pause sending requests, the open thread count decreases and subsequent requests start taking longer to process.
Now, how can I tell the ServiceHost to keep a bunch of threads open? Or more than the 10-12 that it keeps by default?
Well, after lots of googling, it seems that the problem is the threadpool. The CLR threadpool allocates a few threads, and when they are used, it throttles the creation of new threads, and after a time it also deallocates unused threads.
There is some confusion about a bug that meant that the ThreadPool did not honor the SetMinThreads call.
http://www.michaelckennedy.net/blog/PermaLink,guid,708ee9c0-a1fd-46e5-8fa0-b1894ad6ce0f.aspx
I am not sure if this bug is solved, or what, because when I modify the ThreadPool settings, the problem persists.
The thing that determines how many requests are handled simultaneously is the ServiceThrottlingBehavior. There are a number of different thresholds that will limit the amount of requests being processed. This also depends on the binding you are using; for example, wsHttpBinding defaults to sessions on, while basicHttpBinding uses no sessions, so the default session limit of 10 is no problem there.
See http://msdn.microsoft.com/en-us/library/ms735114.aspx for more details.
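If you would rather raise those limits in code than in config, a sketch could look like this (MyService and the numbers are made-up placeholders, not recommendations):

using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(MyService));

// Add or adjust the throttling behavior before opening the host.
var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 64;
throttle.MaxConcurrentSessions = 100;
throttle.MaxConcurrentInstances = 164;

host.Open();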
The bug you referenced is fixed in .NET 3.5 SP1. That may have had something to do with the problem, but I think it's more likely (much more likely) that throttling is your problem rather than threads, as Maurice keyed into.
<system.serviceModel>
  <service name="???" >
    <endpoint ... />
  </service>
</system.serviceModel>
What's the throttle limit for this "empty" config? 10 sessions, 16 concurrent calls! Beware.
Here's more on the threading:
http://www.michaelckennedy.net/blog/2008/08/20/ThreadPoolBugInNET20SP1IsFixed.aspx
This feels like a hack but it seems to solve your issue. The problem is that the threadpool will take time to start up a new thread, so what you really need is threads waiting on standby. Add a constructor to your service and set the minimum number of threads you would like.
public YourService()
{
    // Raise the minimum worker-thread count so the pool keeps threads on standby
    // instead of ramping them up slowly after a period of low activity.
    int workerThreads;
    int portThreads;
    ThreadPool.GetMinThreads(out workerThreads, out portThreads);
    ThreadPool.SetMinThreads(200, portThreads);
}