How to monitor Spring-data-redis pool metrics?

I want to monitor the pool metrics in Spring-data-redis. JedisConnectionFactory's pool is private, so how can I get at it? I searched Google but couldn't find a good way to do this.

Though the Redis client uses a connection pool, most of the time the client uses a single connection to get/set the data. Also, Redis itself is a single-threaded application. You may not need the connection pool information.

Related

Getting Aerospike timeouts with multiple Java clients in an application

Currently I am using Aerospike in my application.
I faced lots of timeout issues, as shown below, when I was creating a new Java client for each transaction and not closing it, so the number of connections ramped up dramatically.
Aerospike Error: (9) Client timeout: timeout=1000 iterations=1 failedNodes=0 failedConns=0
To resolve this timeout issue, I didn't make any changes to the client or to the read and write policies; I just created a single client, stored its instance in a variable, and used that same client for all transactions (get or put requests).
Now I want to understand how moving from multiple clients to one client resolved my timeout issue,
and why those connections were not being closed automatically.
The AerospikeClient constructor requests peers, partition maps and racks for all nodes in the cluster and initializes connection pools and async eventloops. This is an expensive process that is only meant to be performed once per cluster at application startup. AerospikeClient is thread-safe, so instances can be shared between threads.
If AerospikeClient close() is not called, connections residing in the pools (at least one connection pool per node) will not be closed. There are no finalize() methods in AerospikeClient.
The first transaction(s) usually need to create new connections. This adds to the latency and can cause timeouts.
The client does more than just the application's transactions. It also monitors the cluster for changes so that it can maintain one hop per transaction. Also, I believe when we initialize the client, we create an initial pool of sockets.
It is expected that most apps would only need one global client.
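As a rough sketch of that one-global-client pattern (shown with the Aerospike C# client for consistency with the other examples on this page; the Java AerospikeClient is used the same way, and the host, namespace and set names below are placeholders):

```csharp
using Aerospike.Client;

public static class AerospikeHolder
{
    // One client per cluster, created once at startup and shared across threads.
    // "127.0.0.1", 3000, "test" and "demo" are placeholder values.
    public static readonly AerospikeClient Client = new AerospikeClient("127.0.0.1", 3000);
}

public class UserStore
{
    public void Save(string id, string name)
    {
        // Passing null uses the client's default write policy.
        AerospikeHolder.Client.Put(null, new Key("test", "demo", id), new Bin("name", name));
    }

    public string Load(string id)
    {
        Record record = AerospikeHolder.Client.Get(null, new Key("test", "demo", id));
        return record?.GetString("name");
    }
}
```

Close() is then called exactly once, at application shutdown, rather than per transaction.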

What's the proper way to check for service bus health through QueueClient?

I have an ASP.NET Core API that talks to an Azure Service Bus through the QueueClient class.
The IQueueClient interface is registered as a singleton in DI (new'ing it up once, such as: new QueueClient(...)). This is one of Microsoft's recommended ways of talking to a Service Bus. We could use a MessageFactory too, but I don't think it matters for the question I have.
For monitoring purposes, I'd like to check the health of the service bus (or at least the service bus connection). I found that you can use queueClient.IsClosedOrClosing, but you can also use queueClient.ServiceBusConnection.IsClosedOrClosing. One checks the queueClient's connection to the service bus (?), and the other one... too?
What's the difference here?
Thanks for your help!
Both queueClient.IsClosedOrClosing and queueClient.ServiceBusConnection.IsClosedOrClosing are the same. Any client, queue, topic, or subscription has and maintains a connection to the broker. This connection was either passed into the client at construction time or created by the client when a connection string was given to the constructor. Since the connection object is exposed on the clients, you get access to the IsClosedOrClosing property in two ways.
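For reference, a minimal health-check sketch, assuming the Microsoft.Azure.ServiceBus package and the singleton IQueueClient from DI (the class and method names here are just for illustration):

```csharp
using Microsoft.Azure.ServiceBus;

public class ServiceBusHealthCheck
{
    private readonly IQueueClient _queueClient;

    public ServiceBusHealthCheck(IQueueClient queueClient)
    {
        _queueClient = queueClient;
    }

    // Healthy as long as the client (and therefore the connection it exposes)
    // has not been closed or started closing.
    public bool IsHealthy()
    {
        return !_queueClient.IsClosedOrClosing;
    }
}
```

Checking queueClient.ServiceBusConnection.IsClosedOrClosing instead would report the same thing, since the client surfaces the connection it was built with.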

How can I retrieve the content of a Redis channel at the time of subscribe?

When my web app subscribes to a Redis channel (mostly on Application_Start), it should automatically load the current channel content, but not wait for the next publish within this channel.
I couldn't find any way to achieve this - but as this "problem" appears to be so common and trivial I guess there must be an easy solution for this?
In the web app I'm using StackExchange.Redis (in case that's relevant). Who can help? Thx in advance!
The answer is no, there is no option to do this using Redis pub/sub functionality. Redis doesn't actually store the messages being published to a channel, so you can't retrieve them when you connect to the channel.
Take a look at RabbitMQ with its persistent queues and message acknowledgements, which it has out of the box.
As there's obviously no comfortable option available in Redis, I'm now also publishing the channel message as a regular key-value pair, so clients take it from the key-value store before subscribing to the channel.
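A minimal sketch of that workaround with StackExchange.Redis, where the channel name channel:news and the snapshot key channel:news:latest are made-up placeholders:

```csharp
using System;
using StackExchange.Redis;

public static class ChannelSnapshot
{
    // Publisher side: store the latest message first, then publish it.
    public static void PublishWithSnapshot(ConnectionMultiplexer redis, string message)
    {
        redis.GetDatabase().StringSet("channel:news:latest", message); // snapshot for late subscribers
        redis.GetSubscriber().Publish("channel:news", message);
    }

    // Subscriber side (e.g. Application_Start): read the snapshot, then subscribe.
    public static void SubscribeWithSnapshot(ConnectionMultiplexer redis, Action<string> handle)
    {
        string latest = redis.GetDatabase().StringGet("channel:news:latest");
        if (latest != null)
            handle(latest);

        redis.GetSubscriber().Subscribe("channel:news", (channel, value) => handle(value));
    }
}
```

There is still a small window between reading the key and subscribing during which a publish can be missed; if that matters, re-read the key once more after subscribing.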

Using ServiceStack.Redis with RedisCloud

Using RedisCloud as a datastore for a ServiceStack-based, AppHarbor-hosted app.
The RedisCloud .net client documentation states to not use the ServiceStack.Redis connection managers:
Note: the ServiceStack.Redis client connection managers (BasicRedisClientManager and PooledRedisClientManager) should be disabled when working with the Garantia Data Redis Cloud. Use the single DNS provided upon DB creation to access your Redis DB. The Garantia Data Redis Cloud distributes your dataset across multiple shards and efficiently balances the load between these shards.
Why would they suggest that? Because they are doing fancy load balancing stuff in their 'Garantia Data' layer and don't want to handle unnecessary connections? The RedisClient class is not thread-safe, so it makes it much more difficult from the application programming perspective.
Should I just ignore their instructions and use a PooledRedisClientManager? How would I configure it with the single uri that RedisCloud provides?
Or will I need to write a basic RedisClient pool wrapper that just creates new RedisClient connections as needed to handle concurrent access (i.e. ignores all read/write pooling specifics, hopefully delegating all that up-stream to the RedisCloud layer)?
Why would they suggest that? Because they are doing fancy load balancing stuff in their 'Garantia Data' layer and don't want to handle unnecessary connections?
I think you could be right. To my knowledge these classes simply wrap creating/retrieving instances of RedisClient (though I think Basic always creates a new RedisClient). While I looked over their site, I didn't see anything about a 'max number of connections' to the Redis server(s). The previous Redis vendor from AppHarbor (MyRedis) had plans that listed the max number of connections allowed per plan. However, I also didn't see anything on the RedisCloud site mentioning connection limits/handling.
Should I just ignore their instructions and use a PooledRedisClientManager? How would I configure it with the single uri that RedisCloud provides?
Well, if you do ignore their instructions my guess is you could eventually run into a 'max number of connections exceeded' error. That would make it difficult to get to your Redis Server(s). I think you could still use the BasicRedisClientManager because when you call GetClient() it always 'news up' a RedisClient in the same way shown in their example.
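A minimal sketch of that approach, plugging the single RedisCloud endpoint and password into BasicRedisClientManager (the host, port, password and key name below are placeholders for the values from the RedisCloud dashboard):

```csharp
using ServiceStack.Redis;

public class RedisStore
{
    // One manager for the whole app; GetClient() news up a RedisClient per call,
    // and every connection goes through the single DNS endpoint RedisCloud provides.
    private static readonly BasicRedisClientManager Manager =
        new BasicRedisClientManager("mypassword@pub-redis-12345.example.garantiadata.com:12345");

    public void SaveGreeting(string value)
    {
        using (IRedisClient client = Manager.GetClient())
        {
            client.SetValue("greeting", value);
        }
    }

    public string LoadGreeting()
    {
        using (IRedisClient client = Manager.GetClient())
        {
            return client.GetValue("greeting");
        }
    }
}
```

Disposing the client after each unit of work keeps the number of open connections small, which matters if the provider caps connections per plan.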

WCF NamedPipe: PerSession-Single or Singleton-Multiple

I'm building a web application (in this context the client) which talks to a different process (in this context the server) through a named pipe WCF service (WCF 4).
After reading many articles, I was thinking of creating a pool of proxies connected to the server (I've read it provides better performance), used in round-robin.
Each call will be very short; on the server I need to read and write simple properties on a few objects, but these objects are shared, so I must use locks in any case.
I expect very high concurrency.
Because of the pool, the client will have N sessions always open with the server.
I was wondering what the best settings for InstanceContextMode and ConcurrencyMode would be: PerSession with Single, or Single with Multiple.
Thank You
My opinion: do not use a custom pool of proxies. Use the built-in pooling of connections. You can't fully control connection pooling in the predefined bindings, but you have full control in a customBinding when using namedPipeTransport.
From an implementation perspective in your client, use a new proxy for each request. Don't share proxies among requests.
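A rough sketch of that suggestion, using a hypothetical ICalculator contract: the named pipe transport's own connection pool is tuned on a customBinding, and a fresh proxy is created per request:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

// Hypothetical contract, only for illustration.
[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public static class PipeClient
{
    // customBinding over namedPipeTransport: the transport's built-in
    // connection pool is configured here instead of a hand-rolled proxy pool.
    private static readonly ChannelFactory<ICalculator> Factory = CreateFactory();

    private static ChannelFactory<ICalculator> CreateFactory()
    {
        var transport = new NamedPipeTransportBindingElement();
        transport.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 50; // placeholder value
        transport.ConnectionPoolSettings.IdleTimeout = TimeSpan.FromMinutes(2);  // placeholder value

        var binding = new CustomBinding(new BinaryMessageEncodingBindingElement(), transport);
        return new ChannelFactory<ICalculator>(binding,
            new EndpointAddress("net.pipe://localhost/calculator")); // placeholder address
    }

    // A new proxy per request; the underlying pipe connections are pooled by WCF.
    public static int Add(int a, int b)
    {
        ICalculator proxy = Factory.CreateChannel();
        try
        {
            return proxy.Add(a, b);
        }
        finally
        {
            ((IClientChannel)proxy).Close();
        }
    }
}
```

A production version would also call Abort() instead of Close() when the channel has faulted, but the shape stays the same: share the factory, not the proxies.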