In the Quarkus documentation for reactive datasources (https://quarkus.io/guides/datasource#quarkus-reactive-datasource_configuration), there is a configuration property for the maximum size of the pool, "quarkus.datasource.reactive.max-size", but no property exists for the minimum size.
When I start my service, the pool starts with 4 connections.
How can I configure the minimum pool size?
The javadoc of PoolOptions shows that there is no option for min-size.
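For reference, the only pool-size property that guide documents is the maximum, set in application.properties (the value here is just an example):

quarkus.datasource.reactive.max-size=20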
Currently I am using Aerospike in my application.
I faced lots of timeout issues, as shown below, when I was creating a new Java client for each transaction and not closing it, so the number of connections ramped up dramatically.
Aerospike Error: (9) Client timeout: timeout=1000 iterations=1 failedNodes=0 failedConns=0
To resolve this timeout issue, I didn't make any changes to the client, read, or write policies; I just created a single client, stored its instance in a variable, and used that same client for all transactions (get or put requests).
Now I want to understand how moving from multiple clients to one client resolved my timeout issue, and why these connections were not being closed automatically.
The AerospikeClient constructor requests peers, partition maps and racks for all nodes in the cluster and initializes connection pools and async eventloops. This is an expensive process that is only meant to be performed once per cluster at application startup. AerospikeClient is thread-safe, so instances can be shared between threads.
If AerospikeClient close() is not called, connections residing in the pools (at least one connection pool per node) will not be closed. There are no finalize() methods in AerospikeClient.
The first transaction(s) usually need to create new connections. This adds to the latency and can cause timeouts.
The client does more than just the application's transactions. It also monitors the cluster for changes so that it can maintain one hop per transaction. Also, I believe when we initialize the client, we create an initial pool of sockets.
It is expected that most apps would only need one global client.
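A minimal sketch of that pattern with the plain Java client (host, namespace, and set names below are placeholders): one AerospikeClient is created at startup, shared by every transaction, and closed only at shutdown.

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class AerospikeHolder {
    // Created once; AerospikeClient is thread-safe and keeps a connection pool per node.
    private static final AerospikeClient CLIENT = new AerospikeClient("127.0.0.1", 3000);

    public static void put(String userKey, String value) {
        CLIENT.put(null, new Key("test", "demo", userKey), new Bin("value", value));
    }

    public static Record get(String userKey) {
        return CLIENT.get(null, new Key("test", "demo", userKey));
    }

    public static void shutdown() {
        CLIENT.close(); // releases the pooled connections; called once at shutdown, never per transaction
    }
}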
I am unable to understand how the two attributes 'memoryLimit' and 'maxPageSize' differ.
As per documentation:
'maxPageSize' - the 'maximum number of persistent messages to page from store at a time'
'memoryLimit' - corresponds to the amount of memory that's assigned to the in-memory store for the queue
Here is a sample configuration for a queue:
<policyEntry queue="Consumer.normal.queue" producerFlowControl="true"
             memoryLimit="3200" maxPageSize="4" maxBrowsePageSize="1000"
             prioritizedMessages="true" useCache="false"
             expireMessagesPeriod="0" queuePrefetch="1"/>
What I have observed is that if maxPageSize="1" and memoryLimit="3200", then I can see 2 messages loaded into memory and browsable via a JMS client (the rest of the messages stay in KahaDB).
However, if maxPageSize="4" and memoryLimit="3200", then I can see 4 messages loaded into memory and browsable via a JMS client.
So are the two values meant to serve the same purpose?
And does it mean that whichever of these two attributes allows the greater number of messages is the one ActiveMQ will use?
maxPageSize determines how many messages ActiveMQ loads from the store (in your case, KahaDB) to hand to consumers. The memoryLimit indicates how much memory to allocate to keep messages in memory.
In short, (message size x maxPageSize <= memoryLimit) so that you do not hit producer flow control.
You want your page size to be much higher than 1 or 2 for ActiveMQ to perform well (start with 200 to 1000). Numbers that low will add latency.
Note: Priority is an anti-pattern in distributed messaging at significant load (over 1M messages per day). It works well in a local embedded broker within your Java VM process. ActiveMQ disables it by default.
To enable priority support, update the relevant <policyEntry queue=".."> under <destinationPolicy> and add the attribute prioritizedMessages="true" (as in the sample configuration above).
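Putting the sizing rule and the priority note together, an illustrative policyEntry could look like the following; the memoryLimit and maxPageSize values are examples only, chosen so that average message size x maxPageSize stays under memoryLimit:

<policyEntry queue="Consumer.normal.queue" producerFlowControl="true"
             memoryLimit="64mb" maxPageSize="500" maxBrowsePageSize="1000"
             prioritizedMessages="true" useCache="false"
             expireMessagesPeriod="0" queuePrefetch="1"/>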
I want to monitor the pool metrics in Spring Data Redis. JedisConnectionFactory's pool is private. How can I get at it? I searched Google but cannot find a good way to do this.
Though the Redis client uses a connection pool, most of the time the client uses one connection to get/set the data. Also, Redis is a single-threaded application, so you may not need connection pool information.
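If you still want the numbers, one workaround is reflection. This is only a sketch: the private field name "pool" is an assumption and may differ between Spring Data Redis versions, and exposing the underlying commons-pool via JMX would be a less fragile option.

import java.lang.reflect.Field;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import redis.clients.jedis.JedisPool;

public class JedisPoolMetrics {
    public static void log(JedisConnectionFactory factory) throws Exception {
        // "pool" is the assumed name of the private field inside JedisConnectionFactory.
        Field poolField = JedisConnectionFactory.class.getDeclaredField("pool");
        poolField.setAccessible(true);
        JedisPool pool = (JedisPool) poolField.get(factory);
        System.out.printf("active=%d idle=%d waiters=%d%n",
                pool.getNumActive(), pool.getNumIdle(), pool.getNumWaiters());
    }
}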
Whenever I try to deploy my application, I keep getting this exception in the logs:
MQJMSRA_LB4001: start:Aborted:Unable to ping Broker within 60000 millis
I couldn't understand why this was happening so I checked domains/domain1/imq/logs/log.txt and this is what I found:
No threads are available to process a new connection on service admin. 10 threads out of a maximum of 10 threads are already in use by other connections. A minimum of 2 threads must be available to process the connection. Please either limit the # of connections or increase the imq.<service>.max_threads property. Closing the new connection. ". Count: service=5 broker=5
Can someone help me understand how to increase this count?
I would really appreciate your help on this.
You should change the broker's connection properties (imq.<service>.max_threads) as the error message suggests. The broker configuration file is \domains\domain1\imq\instances\imqbroker\props\config.properties.
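As a sketch, the line to add or raise in that file for the failing service ("admin" in your log) would look like this; the value is just an example:

imq.admin.max_threads=20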
This depends on whether you are using OpenMQ in embedded mode or not. If you are using embedded MQ, look for the Thread Pools section of your config in the admin console. One of them will have a max threads set to 10, that will be the one to increase.
It's hard to be sure since you haven't given any other detail from the logs, but that is very likely what you need to change.
Can we configure ActiveMQ to send only one message per instance of the application?
I have Tomcat installed in cluster mode.
I'm using Spring JMS template as consumer.
You need to explain your question further; it's not clear what you are asking.
If you are talking about prefetch, IIRC ActiveMQ sets the prefetch to 1000 by default; set it to 0 to force messages to be distributed across all instances (at the cost of performance). Typically you will want to use prefetch, but you need to tune it for your needs.
Set the maxConcurrentConsumers property to 1. This should make it so that only one thread consumes from the queue per node.
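A minimal sketch combining both suggestions, assuming a Spring DefaultMessageListenerContainer (the broker URL, queue name, and prefetch value are placeholders): one consumer thread per node and a small prefetch so the broker spreads messages across the clustered Tomcat instances.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ConsumerConfig {
    public DefaultMessageListenerContainer listenerContainer(Object messageListener) {
        // Prefetch is set on the connection URL; 1 keeps any one node from hoarding messages.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("my.queue");        // placeholder queue name
        container.setMessageListener(messageListener);   // your javax.jms.MessageListener
        container.setConcurrentConsumers(1);
        container.setMaxConcurrentConsumers(1);          // one consuming thread per node
        return container;
    }
}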