What is the maximum number of xxx of db-xxx.log in KahaDB (ActiveMQ)? - activemq

I'm looking for the maximum value of xxx in db-xxx.log in KahaDB (ActiveMQ).
I set the DB file configuration in activemq.xml for a production site as below:
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb" journalMaxFileLength="32mb" cleanupInterval="5000"/>
</persistenceAdapter>
The remaining KahaDB files are currently db-5.log and db-6.log, but I'm not sure what the maximum index number (xxx in db-xxx.log) is, or what happens when it is reached, e.g. at db-9999999.log.

The db file number is a Java int value, so the theoretical maximum number of db-*.log files is Integer.MAX_VALUE, or 2,147,483,647.
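For intuition, here is a back-of-the-envelope sketch (illustrative arithmetic only, not ActiveMQ code; the class and method names are made up) of how much journal data that index limit represents with the 32mb journalMaxFileLength configured above:

```java
public class KahaDbJournalMath {
    // Illustrative arithmetic only: the journal index is a Java int,
    // so the last possible file name would be db-2147483647.log.
    static long maxJournalBytes(long journalMaxFileLength) {
        return (long) Integer.MAX_VALUE * journalMaxFileLength;
    }

    public static void main(String[] args) {
        // 32mb files, as in the configuration above
        long bytes = maxJournalBytes(32L * 1024 * 1024);
        System.out.println(bytes / (1024L * 1024 * 1024 * 1024) + " TiB"); // prints "65535 TiB" (~64 PiB)
    }
}
```

In other words, disk space runs out (and the cleanup task reclaims old journal files) long before the index could ever wrap.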

Related

Why flink container vcore size is always 1

I am running Flink on YARN (more precisely, in an AWS EMR YARN cluster).
From the Flink documentation and source code I understand that, by default, for each TaskManager container Flink requests a number of vcores equal to the number of slots per TaskManager when requesting resources from YARN.
And I also confirmed from the source code:
// Resource requirements for worker containers
int taskManagerSlots = taskManagerParameters.numSlots();
int vcores = config.getInteger(ConfigConstants.YARN_VCORES,
        Math.max(taskManagerSlots, 1));
Resource capability = Resource.newInstance(containerMemorySizeMB, vcores);
resourceManagerClient.addContainerRequest(
        new AMRMClient.ContainerRequest(capability, null, null, priority));
When I start Flink with -yn 1 -ys 3, I assume YARN will allocate 3 vcores for the only TaskManager container, but when I check the number of vcores per container in the YARN ResourceManager web UI, I always see 1. The ResourceManager logs also show 1 vcore.
I debugged the Flink source code down to the lines pasted above, and the value of vcores there is 3.
This really confuses me. Can anyone clarify? Thanks.
An answer from Kien Truong
Hi,
You have to enable CPU scheduling in YARN; otherwise it always shows that only 1 CPU is allocated for each container, regardless of how many Flink tries to allocate. You should add (or edit) the following property in capacity-scheduler.xml:
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <!-- <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value> -->
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
Also, if the TaskManager memory is, for example, 1400MB, Flink reserves some of it for off-heap memory, so the actual heap size is smaller.
This is controlled by 2 settings:
containerized.heap-cutoff-min: default 600MB
containerized.heap-cutoff-ratio: default 15% of the TM's memory
That's why your TM's heap size is limited to ~800MB (1400 - 600).
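The cutoff arithmetic just described can be sketched as follows (a simplification of Flink's containerized heap cutoff logic; the class and method names here are illustrative, not Flink's):

```java
public class HeapCutoffSketch {
    // Rough sketch of the containerized heap cutoff described above:
    // the cutoff is the larger of heap-cutoff-min and
    // heap-cutoff-ratio * total container memory.
    static long heapSizeMb(long totalMb, long cutoffMinMb, double cutoffRatio) {
        long cutoff = Math.max(cutoffMinMb, (long) (totalMb * cutoffRatio));
        return totalMb - cutoff;
    }

    public static void main(String[] args) {
        // With the quoted defaults (600MB min, 15% ratio) and a 1400MB TM,
        // the minimum cutoff dominates: 1400 - max(600, 210) = 800
        System.out.println(heapSizeMb(1400, 600, 0.15)); // prints 800
    }
}
```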
@yinhua:
Use ./bin/yarn-session.sh to start a session, and add the -s argument:
-s,--slots Number of slots per TaskManager
details:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/yarn_setup.html
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/cli.html#usage
I finally got the answer.
It's because YARN uses the DefaultResourceCalculator allocation strategy, so only memory is counted by the YARN ResourceManager; even though Flink requested 3 vcores, YARN simply ignores the CPU core number.

Selenoid: What does the count attribute do in a quota file?

I started Selenoid with docker: aerokube/cm:latest selenoid start --args "-limit 20"
I then created a quota file, user.xml:
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
  <browser name="chrome" defaultVersion="62.0">
    <version number="62.0">
      <region name="1">
        <host name="1.2.3.4" port="4445" count="10"/>
      </region>
    </version>
  </browser>
</qa:browsers>
When I run with this user, it runs 20 sessions in parallel. I thought count="10" meant this user could run at most 10 in parallel, and that -limit 20 was the maximum for the VM. Is this the correct usage of count?
In fact, the count field in the Ggr quota XML file means host weight. It matters when two or more hosts are present in the quota; the attribute is named count for historical reasons. So when you have, for example, two hosts in the quota with counts 1 and 3, sessions will be distributed 1:3 over these hosts. When the counts are equal, the distribution is uniformly random. If you set count equal to the real number of browsers on each host, you also get a uniformly random distribution. This is what we recommend doing in production.
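The weight-based distribution described above can be illustrated with a small weighted-pick sketch (illustrative only, not Ggr's actual implementation):

```java
import java.util.Random;

public class WeightedHostPick {
    // Pick a host index proportionally to its weight, the way counts
    // 1 and 3 would spread sessions roughly 1:3 over two hosts.
    static int pick(int[] weights, Random rnd) {
        int total = 0;
        for (int w : weights) total += w;
        int r = rnd.nextInt(total); // uniform in [0, total)
        for (int i = 0; i < weights.length; i++) {
            r -= weights[i];
            if (r < 0) return i;
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        int[] counts = {1, 3}; // count attributes of two hypothetical hosts
        int[] hits = new int[2];
        Random rnd = new Random();
        for (int i = 0; i < 100_000; i++) hits[pick(counts, rnd)]++;
        // roughly 25,000 vs 75,000
        System.out.println(hits[0] + " vs " + hits[1]);
    }
}
```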

SAP Pool capacity and peak limit-Mule esb

Hi, I have an SAP connector, and SAP is sending 10,000 IDocs in parallel. What would be good values for pool capacity and peak limit? I currently have 2000 and 1000 respectively. Any suggestions for better performance?
<sap:connector name="SAP" jcoAsHost="${saphost}" jcoUser="${sapuser}"
    jcoPasswd="${sappassword}" jcoSysnr="${sapsystemnumber}"
    jcoClient="${sapclient}" jcoLang="${saploginlanguage}"
    validateConnections="true" doc:name="SAP"
    jcoPeakLimit="1000" jcoPoolCapacity="2000"/>
When dealing with large volumes of IDocs, the first thing I recommend to improve performance is to configure the SAP system to send IDocs in batches instead of individually. That is, the IDocs are sent in groups of X, where X is the batch size you define in the Partner Profile section of SAPGUI. Among other settings, you have to set "Pack. Size" and select "Collect IDocs" as the Output Mode.
If you are not familiar with SAPGUI, request your SAP Admin to configure it for you.
Additionally, to get the most out of the SAP connector's connection pooling, I suggest you use SAP Client Extended Properties to take full advantage of JCo's additional connection parameters. These extended properties are defined in a Spring bean and set in jcoClientExtendedProperties at the connector or endpoint level. Take a look at the following example:
<spring:beans>
  <spring:bean name="sapClientProperties" class="java.util.HashMap">
    <spring:constructor-arg>
      <spring:map>
        <!-- Maximum number of active connections that can be created for a destination simultaneously -->
        <spring:entry key="jco.destination.peak_limit" value="15"/>
        <!-- Maximum number of idle connections kept open by the destination. A value of 0 has the effect that there is no connection pooling, i.e. connections will be closed after each request. -->
        <spring:entry key="jco.destination.pool_capacity" value="10"/>
        <!-- Time in ms after which the connections held by the internal pool can be closed -->
        <spring:entry key="jco.destination.expiration_time" value="300000"/>
        <!-- Interval in ms with which the timeout checker thread checks the connections in the pool for expiration -->
        <spring:entry key="jco.destination.expiration_check_period" value="60000"/>
        <!-- Max time in ms to wait for a connection, if the max allowed number of connections is allocated by the application -->
        <spring:entry key="jco.destination.max_get_client_time" value="30000"/>
      </spring:map>
    </spring:constructor-arg>
  </spring:bean>
</spring:beans>
<sap:connector name="SAP"
    jcoAsHost="${sap.jcoAsHost}"
    jcoUser="${sap.jcoUser}"
    jcoPasswd="${sap.jcoPasswd}"
    jcoSysnr="${sap.jcoSysnr}"
    jcoClient="${sap.jcoClient}"
    ...
    jcoClientExtendedProperties-ref="sapClientProperties" />
Important: to enable the pool, it is mandatory that you set jcoExpirationTime in addition to the peak limit and pool capacity.
For further details, please refer to SAP Connector Advanced Features documentation.
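One thing worth noting about those numbers (a hedged sanity check based purely on the parameter descriptions in the example above): pool_capacity (idle connections kept open) is conventionally set no higher than peak_limit (the ceiling on simultaneous connections), as in the 10/15 example, whereas the connector in the question has jcoPoolCapacity="2000" above jcoPeakLimit="1000". A trivial check, with an illustrative class name:

```java
public class JcoPoolSanity {
    // Illustrative check, not part of JCo: the pool of idle connections
    // (pool_capacity) should not exceed the hard ceiling of simultaneous
    // connections (peak_limit), or the pool is sized for connections it
    // could never hand out at the same time.
    static boolean poolSettingsConsistent(int poolCapacity, int peakLimit) {
        return poolCapacity >= 0 && poolCapacity <= peakLimit;
    }

    public static void main(String[] args) {
        System.out.println(poolSettingsConsistent(10, 15));     // example above: true
        System.out.println(poolSettingsConsistent(2000, 1000)); // question's values: false
    }
}
```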

Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106)

I get the following error while storing data in Aerospike (client.put). I have enough space on the drive.
Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106).
Here is my Aerospike server namespace configuration:
namespace test {
  replication-factor 1
  memory-size 1G
  default-ttl 30d # 30 days, use 0 to never expire/evict.

  storage-engine device {
    file /opt/aerospike/data/test.dat
    filesize 2G
    data-in-memory true # Store data in memory in addition to file.
  }
}
By default, namespaces have a write-block-size of 1 MiB. This is also the maximum configurable size, and it limits the maximum object size an application can write to Aerospike.
If you need to go beyond 1 MiB see Large Data Types as a possible solution.
UPDATE 2019/09/06
Since Aerospike 3.16, the write-block-size limit has been increased from 1 MiB to 8 MiB.
Yes, but unfortunately Aerospike has deprecated LDTs (https://www.aerospike.com/blog/aerospike-ldt/). They now recommend using Lists or Maps, but as stated in their post:
"the new implementation does not solve the problem of the 1MB Aerospike database row size limit. A future key feature of the product will be an enhanced implementation that transcends the 1MB limit for a number of types"
In other words, it is still an unsolved problem when storing data on SSD or HDD. However, you can store larger objects in in-memory namespaces.
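If you stay within the default limit, a cheap client-side guard is to size-check payloads before calling put. A sketch (assumptions: the 1 MiB constant matches the default write-block-size rather than your namespace's actual setting, and the 1 KiB overhead allowance for record metadata is an illustrative guess):

```java
public class RecordSizeGuard {
    // Default write-block-size; a record (including overhead) must fit in one block.
    static final long WRITE_BLOCK_SIZE = 1L << 20; // 1 MiB

    // Conservative pre-check: the stored record also carries metadata
    // overhead, so leave some headroom (1 KiB here is an illustrative guess).
    static boolean fitsInBlock(byte[] payload) {
        return payload.length + 1024 <= WRITE_BLOCK_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(fitsInBlock(new byte[512 * 1024]));      // prints true
        System.out.println(fitsInBlock(new byte[2 * 1024 * 1024])); // prints false
    }
}
```

Rejecting oversized records before the put avoids round-tripping to the server just to receive AEROSPIKE_ERR_RECORD_TOO_BIG.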

Maximum number of messages sent to a Queue in OpenMQ?

I am currently using GlassFish v2.1 and I have set up a queue to send and receive messages, with session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages? I do have a "developer" profile set up for the GlassFish domain. Could that be the reason? Or is there some resource configuration setting I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
  <admin-object-resource
      enabled="true"
      jndi-name="jms/UpdateQueue"
      object-type="user"
      res-adapter="jmsra"
      res-type="javax.jms.Queue">
    <description/>
    <property name="Name" value="UpdatePhysicalQueue"/>
  </admin-object-resource>
  <connector-resource
      enabled="true"
      jndi-name="jms/UpdateQueueFactory"
      object-type="user"
      pool-name="jms/UpdateQueueFactoryPool">
    <description/>
  </connector-resource>
  <connector-connection-pool
      associate-with-thread="false"
      connection-creation-retry-attempts="0"
      connection-creation-retry-interval-in-seconds="10"
      connection-definition-name="javax.jms.QueueConnectionFactory"
      connection-leak-reclaim="false"
      connection-leak-timeout-in-seconds="0"
      fail-all-connections="false"
      idle-timeout-in-seconds="300"
      is-connection-validation-required="false"
      lazy-connection-association="false"
      lazy-connection-enlistment="false"
      match-connections="true"
      max-connection-usage-count="0"
      max-pool-size="32"
      max-wait-time-in-millis="60000"
      name="jms/UpdateFactoryPool"
      pool-resize-quantity="2"
      resource-adapter-name="jmsra"
      steady-pool-size="8"
      validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm... further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what should I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record based on the corresponding value of that record in a legacy table to which I have read-only access. The table has more than 10k records. As of now, I go through each record sequentially in a for loop, get the corresponding record from the legacy table, compare the field values, update the record if necessary, and add corresponding new records in other tables.
However, I was hoping to improve performance by processing the records asynchronously. To do that, I was thinking of sending each record's info as a separate message, hence the need for so many messages.
To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
But I actually wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
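Splitting the send into several transactions can be sketched as follows (illustrative and broker-independent; in real JMS code, sendOne would wrap producer.send(...) on a transacted session and commit would call session.commit()):

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchedSender {
    // Send records in chunks, committing after each chunk so that no
    // single transaction exceeds the broker's per-transaction message
    // limit. Returns the number of commits performed.
    static <T> int sendInBatches(List<T> records, int batchSize,
                                 Consumer<T> sendOne, Runnable commit) {
        int commits = 0;
        int inBatch = 0;
        for (T record : records) {
            sendOne.accept(record); // e.g. producer.send(...) on a transacted session
            if (++inBatch == batchSize) {
                commit.run();       // e.g. session.commit()
                commits++;
                inBatch = 0;
            }
        }
        if (inBatch > 0) { commit.run(); commits++; } // flush the remainder
        return commits;
    }
}
```

With 10,000 records and a batch size of 1,000 (the default imq.transaction.producer.maxNumMsgs), this performs 10 commits, each well under the broker's limit.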