Redis Redisson client slow put - redis

The Redisson client always gets a connection first when doing a put or set operation. Is there a way to reuse the same connection and reduce the cost of these operations?

"The Redisson client always gets a connection first when doing a put or set operation"
It uses pooled connections; it never creates a new connection for each operation. During the first operation the codec may require warmup. You can try using the simple StringCodec.
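For reference, a minimal sketch of configuring Redisson with StringCodec and an explicit connection pool size (the address, key name, and pool size here are assumptions, not values from the question):

```java
import org.redisson.Redisson;
import org.redisson.api.RBucket;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;
import org.redisson.config.Config;

public class RedissonPoolSketch {
    public static void main(String[] args) {
        Config config = new Config();
        // Plain StringCodec avoids serialization/codec warmup cost on the first operation.
        config.setCodec(new StringCodec());
        config.useSingleServer()
              .setAddress("redis://127.0.0.1:6379") // assumed local Redis
              .setConnectionPoolSize(16);           // connections are pooled and reused

        RedissonClient client = Redisson.create(config);
        RBucket<String> bucket = client.getBucket("greeting");
        bucket.set("hello");                        // executed over a pooled connection
        System.out.println(bucket.get());
        client.shutdown();
    }
}
```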

I experimented with the Jedis client, which reuses the same connection. I am using that for now.
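For comparison, a minimal sketch of that single-connection Jedis approach (host and port are assumptions); note that a single Jedis instance is not thread-safe, so this only works when one thread owns the connection:

```java
import redis.clients.jedis.Jedis;

public class JedisSingleConnectionSketch {
    public static void main(String[] args) {
        // One long-lived connection, reused for every command.
        // A single Jedis instance is not thread-safe; use JedisPool for concurrent callers.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        }
    }
}
```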

Related

Do I need a new client connection when using Redis transactions?

My application uses a singleton Redis connection everywhere; it's initialized at startup.
My understanding of MULTI/EXEC is that all my WATCHed keys are UNWATCHed when EXEC is called anywhere in the application.
That would mean every WATCHed key, regardless of which MULTI block it was WATCHed for, gets unwatched, defeating the whole purpose of WATCHing them.
Is my understanding correct?
How do I avoid this situation? Should I create a new connection for each transaction?
This processing happens inside the Redis server and blocks all incoming commands while it runs, so it doesn't matter whether you use a single connection or multiple connections (all connections will be blocked).
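For readers new to the check-and-set pattern being discussed, here is a hedged sketch using Jedis (host, port, and key name are assumptions). It illustrates that WATCH, MULTI, and EXEC must all run on the same connection, and that EXEC clears every key watched on that connection:

```java
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class WatchMultiExecSketch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            List<Object> replies;
            do {
                // WATCH must be issued on the same connection that later runs MULTI/EXEC.
                jedis.watch("counter");
                String raw = jedis.get("counter");
                long current = (raw == null) ? 0L : Long.parseLong(raw);

                Transaction tx = jedis.multi();
                tx.set("counter", String.valueOf(current + 1));
                // EXEC clears every key WATCHed on this connection, whether it commits or aborts.
                replies = tx.exec();
            } while (replies == null || replies.isEmpty()); // aborted: "counter" changed, so retry
        }
    }
}
```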

Is there any advantage of a Redis connection pool over a single connection?

I am trying to use Redis for my application. In my application, multiple users want to read data stored in the Redis cache at the same time. Since Redis is single-threaded and we can't run operations concurrently against a single server, can we execute multiple commands at the same time by using a connection pool, in order to achieve high throughput?
I have read some articles on open forums, and they said connection pooling is only helpful when you know you are going to use blocking operations such as BLPOP. But if we are sure we will never use blocking operations and will only use normal operations like SET, MSET, GET and MGET, does connection pooling have any advantage over a single connection?
Also, does anybody have an idea or recommendation about the maximum number of keys to pass in an MGET command while still getting back the values of all specified keys?
It would be very helpful to get answers to this. Thanks in advance.
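No answer is quoted for this question here, but for illustration of the pooled approach being asked about, a small Jedis sketch (address and pool size are assumptions): each concurrent caller borrows its own connection from the pool and returns it when done.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolUsageSketch {
    public static void main(String[] args) throws Exception {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(16); // assumed pool size; tune for your concurrency

        try (JedisPool pool = new JedisPool(poolConfig, "127.0.0.1", 6379)) {
            // Each thread borrows a connection, runs its commands, and returns it to the pool.
            Runnable worker = () -> {
                try (Jedis jedis = pool.getResource()) {
                    jedis.mset("k1", "v1", "k2", "v2");
                    System.out.println(jedis.mget("k1", "k2"));
                }
            };
            Thread t1 = new Thread(worker);
            Thread t2 = new Thread(worker);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        }
    }
}
```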

How to deal with a read() timeout in a Redis client?

Assume that my client sends an INCR command to the Redis server, but the response packet is lost, so my client's read() times out. The client cannot tell whether the INCR operation has been performed by the server.
What should it do next: resend the INCR or continue with the next command? If the client resends the INCR but Redis had already carried out the INCR on the server side, the key will be incremented twice, which is not what we want.
This is not a problem specific to Redis: it also applies to any other data store (including transactional ones). There is no solution to this problem; you can only hope to minimize the issue.
For instance, some people tend to put very aggressive values on their timeouts, thinking that Redis is supposed to be a soft real-time data store. Redis is fast, but you also need to consider the network and the system itself. Network-related problems may generate high latencies, and if the system starts swapping, it will very seriously impact Redis response times.
I tend to think that putting a timeout under 2 seconds is nonsense on any Unix/Linux system, and if a network is involved, I am much more comfortable with 10 seconds. People put very low values because they want to avoid having their application block: this is a mistake. Rather than setting very low timeouts and keeping the application synchronous, they should design the application to be asynchronous and set sensible timeouts.
After a timeout, a client should never "continue" with the next command. It should close the connection, and try to open a new one. If a reply (or a query) has been lost, it is unlikely that the client and the server can resynchronize. It is safer to close the connection.
Should you try to issue the INCR again after the reconnection? It is really up to you. But if a read timeout has just been triggered, there is a good chance the reconnection will time out as well. Redis being single-threaded, when it is slow for one connection, it is slow for all connections simultaneously.
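A rough sketch of that advice using Jedis (the timeout values and the give-up policy are assumptions, not part of the answer): on a read timeout the client discards the connection and reconnects, and it does not blindly retry a non-idempotent command like INCR.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisConnectionException;

public class TimeoutHandlingSketch {
    private static final int CONNECT_TIMEOUT_MS = 10_000; // generous, per the answer above
    private static final int SOCKET_TIMEOUT_MS  = 10_000;

    private Jedis jedis = connect();

    private static Jedis connect() {
        // Jedis(host, port, connectionTimeout, soTimeout)
        return new Jedis("127.0.0.1", 6379, CONNECT_TIMEOUT_MS, SOCKET_TIMEOUT_MS);
    }

    public Long incrementOrGiveUp(String key) {
        try {
            return jedis.incr(key);
        } catch (JedisConnectionException timeoutOrIoError) {
            // After a timeout the connection state is unknown: close it and reconnect.
            jedis.close();
            jedis = connect();
            // INCR is not idempotent, so blindly retrying may count the increment twice.
            // Whether to retry is an application decision; this sketch simply gives up.
            return null;
        }
    }
}
```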

What is the difference between ActiveMQConnectionFactory and PooledConnectionFactory?

As the title says, what is the difference between them, and how should I choose?
Do they have anything in common?
What if I want a keep-alive connection? That is, once I connect to the ActiveMQ server,
I can use that connection to send/receive messages whenever I want. I think I could call it
a daemonProducer or daemonConsumer. Can ActiveMQ implement this?
The ActiveMQConnectionFactory creates ActiveMQ Connections. The PooledConnectionFactory pools Connections. If you only need to create one Connection and keep it around for a long time, you don't need pooling. If you tend to create many Connection instances over time, then pooling is better, because connecting is a heavy operation and can become a performance bottleneck.
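A hedged sketch of the two factories side by side, assuming an ActiveMQ 5.x broker at tcp://localhost:61616 and the activemq-pool module on the classpath (package names vary between ActiveMQ versions, so treat the imports as an assumption):

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class FactoryComparisonSketch {
    public static void main(String[] args) throws Exception {
        // Plain factory: every createConnection() opens a new connection to the broker.
        ActiveMQConnectionFactory plain =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Pooled factory: wraps the plain one and hands out reused connections/sessions.
        PooledConnectionFactory pooled = new PooledConnectionFactory(plain);
        pooled.setMaxConnections(8); // assumed pool size

        Connection connection = pooled.createConnection();
        connection.start();
        // ... create Sessions, MessageProducers, MessageConsumers as usual ...
        connection.close(); // returns the connection to the pool rather than closing the socket
        pooled.stop();      // shuts the pool down when the application exits
    }
}
```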

Is creating a RabbitMQ connection an expensive operation?

Is creating a connection in the RabbitMQ .NET client an expensive operation?
We have a web application that publishes messages to RabbitMQ, and currently we create and close the connection on every publish.
Yes, creating a connection is expensive. Why are you creating and closing the connection to Rabbit after every publish anyway?
I would suggest creating the Rabbit connection once and closing it only when you actually need to.
If you maintained a single connection, it would be faster to send messages, as you would only need one operation per message. Opening a connection each time uses I/O resources, so it is bound to be a little slower.
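The question concerns the .NET client, but the same pattern in the RabbitMQ Java client looks roughly like this (host, queue name, and message are assumptions): open one Connection at startup, keep it for the lifetime of the application, and create lightweight Channels per publish.

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class LongLivedConnectionSketch {
    private final Connection connection; // created once, reused for every publish

    public LongLivedConnectionSketch() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address
        connection = factory.newConnection();
    }

    public void publish(String queue, String message) throws Exception {
        // Channels are cheap compared to connections; create one per publish (or per thread).
        try (Channel channel = connection.createChannel()) {
            channel.queueDeclare(queue, true, false, false, null);
            channel.basicPublish("", queue, null, message.getBytes(StandardCharsets.UTF_8));
        }
    }

    public void close() throws Exception {
        connection.close(); // only when the application shuts down
    }
}
```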