JGroups and WebLogic: Sender not in table - WebLogic

I'm trying to implement Ehcache replication for my application. These are the jar versions:
ehcache-jgroupsreplication 1.7
ehcache-core 2.5.2
jgroups 3.1.0
When starting my application, I get the following line in the server logs:
GMS: address=ABC111-33601, cluster=EH_CACHE, physical address=10.x.x.xx:1123
And the following warning in the application logs:
ABC111-33601: dropped message 1 from ABC222-40262 (sender not in table [ABC111-33601]), view=[ABC111-33601|0] [ABC111-33601]
The ehcache.xml is:
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
         updateCheck="false">
    <diskStore path="java.io.tmpdir"/>
    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
        properties="connect=TCP(bind_port=1123):
            TCPPING(initial_hosts=ABC111[1123],ABC222[1123],ABC333[1123];port_range=10;timeout=3000;num_initial_members=4):
            VERIFY_SUSPECT(timeout=1500):
            pbcast.NAKACK(use_mcast_xmit=false;use_mcast_xmit_req=false;retransmit_timeout=3000):
            pbcast.GMS(join_timeout=5000):
            FRAG2(frag_size=60K)"
        propertySeparator="::" />
    <defaultCache
        maxElementsInMemory="1000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="false"
        diskPersistent="false"
        diskExpiryThreadIntervalSeconds="120">
    </defaultCache>
    <cache name="com.abc.tariff"
           maxElementsInMemory="1000"
           eternal="false"
           overflowToDisk="false"
           timeToIdleSeconds="1800"
           timeToLiveSeconds="1800">
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
            properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=false, replicateRemovals=true" />
    </cache>
    <cache name="com.abc.customer"
           maxElementsInMemory="1000"
           eternal="false"
           overflowToDisk="false"
           timeToIdleSeconds="120"
           timeToLiveSeconds="180">
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
            properties="replicateAsynchronously=true, replicatePuts=true, replicateUpdates=true, replicateUpdatesViaCopy=false, replicateRemovals=true" />
    </cache>
</ehcache>
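(For reference, a cache manager built from this file would be created along these lines; a minimal sketch, assuming ehcache.xml is on the classpath - the key and value are illustrative only.)

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class CacheSmokeTest {
    public static void main(String[] args) {
        // Loads ehcache.xml from the classpath, which also starts the
        // JGroups peer provider declared above.
        CacheManager manager = CacheManager.create();
        Cache tariff = manager.getCache("com.abc.tariff");
        tariff.put(new Element("key", "value")); // should replicate to peers
        System.out.println(tariff.get("key"));
        manager.shutdown();
    }
}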
ABC111, ABC222 and ABC333 are not in a WebLogic cluster.
Any idea why this warning appears? My guess is that replication has not started because of it - or has it?

Your cluster has not formed.
The warning says that you received a message from ABC222 which claimed to be in the same cluster but wasn't in your view of the cluster.
Your config looks weird anyway: UNICAST is missing, you have no failure-detection protocols, no merge protocol, etc. Is this the default JGroups config Ehcache ships with? That would be very wrong!
You can use probe (check the JGroups manual for details) to find out if the cluster formed correctly. My guess is that adding a correct bind_addr to TCP would fix the issue here...

Following your suggestion, I have changed the settings to the following:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="connect=TCP(bind_port=1123;bind_addr=10.x.x.58):
        TCPPING(initial_hosts=10.x.x.58[1123],10.x.x.59[1123];port_range=10;timeout=3000;num_initial_members=2;break_on_coord_rsp=true):
        MERGE2(min_interval=10000;max_interval=30000):
        FD_SOCK:
        FD(timeout=3000;max_tries=3):
        VERIFY_SUSPECT(timeout=1500):
        BARRIER:
        pbcast.NAKACK2(use_mcast_xmit=false;discard_delivered_msgs=true):
        UNICAST:
        pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=4M):
        pbcast.GMS(print_local_addr=true;join_timeout=5000;view_bundling=true):
        UFC(max_credits=2M;min_threshold=0.4):
        MFC(max_credits=2M;min_threshold=0.4):
        FRAG2(frag_size=60K):
        pbcast.STATE_TRANSFER"
    propertySeparator="::" />
I also added the following to WebLogic's arguments:
-Djava.net.preferIPv4Stack=true -Djgroups.resolve_dns=true -Djgroups.bind_addr=10.x.x.58 -Djgroups.tcpping.initial_hosts=10.x.x.58[1123],10.x.x.59[1123]
I tried running probe as well:
java -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv6Addresses=false -cp E:/jgroup/jgroups-3.1.0.Final.jar org.jgroups.tests.Probe
-- send probe on /224.0.75.75:7500
1 (217 bytes):
local_addr=ABC111-65460 [XX-78af-fb20-ae58-XX]
cluster=EH_CACHE
physical_addr=ABC111.qaoneadr.local:1123
view=[ABC222-23806|1] [ABC222-23806, ABC111-65460]
version=3.1.0.Final
2 (247 bytes):
local_addr=ap-insight2t-47964 [bbc6c770-e344-ceaa-18a9-f690284ca154]
cluster=OSCacheBus_insight_II_Insight_SITE_cluster
view=[ap-insight3t-30967|697] [ap-insight3t-30967, ap-insight2t-47964]
physical_addr=10.XX.XX.32:7900
version=3.3.5.Final
3 (217 bytes):
local_addr=ABC222-23806 [XX-d700-a732-227a-XX]
cluster=EH_CACHE
physical_addr=ABC222.qaoneadr.local:1123
view=[ABC222-23806|1] [ABC222-23806, ABC111-65460]
version=3.1.0.Final
3 responses (3 matches, 0 non matches)
But I am still getting:
ABC111-65460: dropped message 1 from ABC222-23806 (sender not in table [ABC111-65460]), view=[ABC111-65460|0] [ABC111-65460]
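(A standalone check outside WebLogic can help isolate whether the JGroups stack itself forms a view between the two hosts. A minimal sketch, assuming the protocol stack above has been saved to a file named tcp.xml; the polling loop is illustrative only.)

import org.jgroups.JChannel;

public class ViewCheck {
    public static void main(String[] args) throws Exception {
        JChannel ch = new JChannel("tcp.xml"); // assumed file holding the TCP/TCPPING stack above
        ch.connect("EH_CACHE");                // same cluster name Ehcache uses
        // Poll for a minute so the peer on the other host has time to join;
        // a two-member view here means the stack is fine and WebLogic is the variable.
        for (int i = 0; i < 6; i++) {
            System.out.println("view = " + ch.getView());
            Thread.sleep(10000);
        }
        ch.close();
    }
}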

Related

ServerFailureTriggerMBean.MaxStuckThreadTime & ServerFailureTriggerMBean.StuckThreadCount strange behaviour

I'm facing strange behaviour with some parameters in WebLogic.
I have a Java EE batch job which runs for more than 10 minutes on a WebLogic server, which causes an exception like
com.ibm.jbatch.container.exception.BatchContainerRuntimeException:
java.lang.InterruptedException
After some investigation, I found that the property MaxStuckThreadTime is set to 600 seconds (the default value) and the property StuckThreadCount is set to 25 (it was 0 in the past without any issue).
If I understand correctly, this means the server should fail if and only if at least 25 threads have been busy for more than 600 seconds.
But I have at most 10 threads running at the same time on the server.
I ran some tests on my dev environment, and as soon as one thread is stuck (busy for 10 minutes), the InterruptedException is thrown. Is this the expected behaviour?
I don't have the rights to modify those values in production.
So any idea to bypass this kind of error is welcome.
In the documentation, I found:
StuckThreadCount = The number of stuck threads after which the server is transitioned into FAILED state.
MaxStuckThreadTime = Sets the value of the MaxStuckThreadTime attribute.
So, from my point of view, the InterruptedException should only appear if both conditions are fulfilled, but I have the impression that a single stuck thread is enough to interrupt the batch.
Am I correct in saying that MaxStuckThreadTime is only taken into account if StuckThreadCount is different from 0?
Thanks in advance for your help
Edit:
I tried to implement the proposal below, but so far without success.
So, in my weblogic-ejb-jar.xml, I've added the following code :
<work-manager>
    <name>BatchWorkManager</name>
    <ignore-stuck-threads>true</ignore-stuck-threads>
</work-manager>
<managed-executor-service>
    <name>batch-job-executor</name>
    <dispatch-policy>BatchWorkManager</dispatch-policy>
    <long-running-priority>10</long-running-priority>
</managed-executor-service>
and in my batch, I added
@Resource(name = "BatchWorkManager")
WorkManager myWM;
and the call in my batch looks like this:
@Override
public String process() throws Exception {
    myWM.schedule(new MyWork("MyBatchName"));
    return BatchStatus.COMPLETED.toString();
}
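(The MyWork class is not shown in the post. A minimal sketch of what it might look like, assuming the CommonJ commonj.work.Work interface that WorkManager.schedule() accepts; the body is illustrative only.)

import commonj.work.Work;

public class MyWork implements Work {
    private final String name;

    public MyWork(String name) {
        this.name = name;
    }

    @Override
    public void run() {
        // the long-running batch logic would go here
    }

    // Daemon work is expected to run for a long time on its own thread.
    @Override
    public boolean isDaemon() {
        return true;
    }

    @Override
    public void release() {
        // signal run() to stop early, if supported
    }
}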
After a few minutes (defined by the MaxStuckThreadTime parameter), the job is put into FAILED status.
If I debug the code, I see the values of the work manager:
stuckThreadActions = null name = "NO STUCK THREAD ACTIONS !"
stuckThreads = {BitSet#36226} "{}"
It seems the work manager is correctly set up ("NO STUCK THREAD ACTIONS !" is what I want).
So I still don't understand why the batch is failing...
Any help is welcome.
For information, the stack trace I receive:
###<Apr 21, 2022, 12:40:00,793 PM CEST> <com.ibm.jbatch.container.impl.BatchletStepControllerImpl>
<[STUCK] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <>
<33ef2b10-13cc-45be-bf47-e06daf40042c-0000003b> <1650537600793>
<[severity-value: 16] [rid: 0:1] [partition-id: 0] [partition-name: DOMAIN] >
<Caught exception executing step:
com.ibm.jbatch.container.exception.BatchContainerRuntimeException: java.lang.InterruptedException
    at com.ibm.jbatch.container.impl.PartitionedStepControllerImpl.executeAndWaitForCompletion(PartitionedStepControllerImpl.java:407)
    at com.ibm.jbatch.container.impl.PartitionedStepControllerImpl.invokeCoreStep(PartitionedStepControllerImpl.java:297)
    at com.ibm.jbatch.container.impl.BaseStepControllerImpl.execute(BaseStepControllerImpl.java:144)
    at com.ibm.jbatch.container.impl.ExecutionTransitioner.doExecutionLoop(ExecutionTransitioner.java:112)
    at com.ibm.jbatch.container.impl.JobThreadRootControllerImpl.originateExecutionOnThread(JobThreadRootControllerImpl.java:110)
    at com.ibm.jbatch.container.util.BatchWorkUnit.run(BatchWorkUnit.java:80)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at weblogic.work.concurrent.TaskWrapper.call(TaskWrapper.java:151)
    at weblogic.work.concurrent.future.AbstractFutureImpl.runTask(AbstractFutureImpl.java:391)
    at weblogic.work.concurrent.future.AbstractFutureImpl.doRun(AbstractFutureImpl.java:436)
    at weblogic.work.concurrent.future.ManagedFutureImpl.run(ManagedFutureImpl.java:28)
    at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
    at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
    at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
    at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
    at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at com.ibm.jbatch.container.impl.PartitionedStepControllerImpl.executeAndWaitForCompletion(PartitionedStepControllerImpl.java:402)
    ... 17 more
You could configure a new work manager for running the batch job and configure stuck threads to be ignored, or launch the batch job as a long running request.
A work manager can be configured globally via the WebLogic console, or locally for each deployed application. To define a work manager in an application, you can configure it in the weblogic.xml (or equivalent for EAR files) packaged with your deployment. For example, I have this in my weblogic.xml file to define a work manager that ignores stuck threads...
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-web-app http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
    ...
    <work-manager>
        <name>batch-job-wm</name>
        <max-threads-constraint>
            <name>batch-job-max-threads</name>
            <count>10</count>
        </max-threads-constraint>
        <ignore-stuck-threads>true</ignore-stuck-threads>
    </work-manager>
    <managed-executor-service>
        <name>batch-job-executor</name>
        <dispatch-policy>batch-job-wm</dispatch-policy>
        <long-running-priority>10</long-running-priority>
        <max-concurrent-long-running-requests>10</max-concurrent-long-running-requests>
    </managed-executor-service>
    <resource-env-description>
        <resource-env-ref-name>concurrent/batch-job-executor</resource-env-ref-name>
        <resource-link>batch-job-executor</resource-link>
    </resource-env-description>
    ...
</weblogic-web-app>
I reference that managed-executor-service in my web.xml...
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0">
    ...
    <resource-env-ref>
        <resource-env-ref-name>concurrent/batch-job-executor</resource-env-ref-name>
        <resource-env-ref-type>javax.enterprise.concurrent.ManagedExecutorService</resource-env-ref-type>
    </resource-env-ref>
</web-app>
In my web application, I can then access that task executor as follows...
@Configuration
public class ResourceConfig {

    @Bean
    public TaskExecutor batchTaskExecutor() {
        DefaultManagedTaskExecutor taskExecutor = new DefaultManagedTaskExecutor();
        taskExecutor.setJndiName("java:comp/env/concurrent/batch-job-executor");
        return taskExecutor;
    }
}
When launching a batch job using that work manager, any stuck threads are ignored by WebLogic and the servers show as healthy even for long-running tasks.
An enhancement to this is to have the batch job launched as a long-running task. I think this will cause WebLogic to create a new thread for the task instead of taking a thread from the work manager thread pool. Also, WebLogic won't consider a thread assigned to a long-running task as being stuck.
To launch a long-running task, you need to set the LONGRUNNING_HINT to true in the ManagedTask that is launched (see the sketch after the links below). For more details see the following...
https://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/ManagedTask.html#LONGRUNNING_HINT
https://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/ManagedExecutorService.html
https://blogs.oracle.com/weblogicserver/post/concurrency-utilities-support-in-weblogic-server-1221-part-one-managedexecutorservice
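(As a sketch of how that hint can be attached - this example is ours, not from the links above. The standard Concurrency Utilities helper ManagedExecutors.managedTask wraps a Runnable with execution properties, including ManagedTask.LONGRUNNING_HINT, which the ManagedExecutorService reads. The JNDI name matches the batch-job-executor defined earlier.)

import java.util.Collections;
import java.util.Map;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.concurrent.ManagedExecutors;
import javax.enterprise.concurrent.ManagedTask;
import javax.naming.InitialContext;

public class LongRunningLauncher {
    public void launch(Runnable batchJob) throws Exception {
        ManagedExecutorService executor = (ManagedExecutorService)
                new InitialContext().lookup("java:comp/env/concurrent/batch-job-executor");
        Map<String, String> props =
                Collections.singletonMap(ManagedTask.LONGRUNNING_HINT, "true");
        // managedTask(...) returns a Runnable that also implements ManagedTask,
        // exposing the LONGRUNNING_HINT to the executor.
        executor.submit(ManagedExecutors.managedTask(batchJob, props, null));
    }
}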

Hawtio ActiveMQ queue Browse shows maximum 500 messages

I'm trying to see all the messages in my queue in ActiveMQ (5.11.1). I am using Hawtio (1.4.51) for this purpose. My queue in ActiveMQ contains 790 messages.
My steps so far:
By default Hawtio shows up to 400 messages in an ActiveMQ queue, so I went to my broker.xml settings and added:
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry queue="incoming.status" maxBrowsePageSize="401"/>
        </policyEntries>
    </policyMap>
</destinationPolicy>
This gave me 401 messages.
So I tried to change maxBrowsePageSize="401" to "-1". To my surprise I got only 200 messages...
My next try was to set maxBrowsePageSize="1000", and again disappointment: I could see only 500 messages...
Next I went to my Java code and inserted:
PrintWriter writer = new PrintWriter("c:\\Messages.log", "UTF-8");
writer.write(jmsQueueEndpoint.browseAllMessagesAsXml(true));
writer.close();
The results were: for maxBrowsePageSize="401" I got 401/790 messages, for "2" I got 2/790, and for both "1000" and "-1" I got 790/790.
So my conclusion was that there is some setting in Hawtio that limits my results to 500.
I need to see ALL my messages in Hawtio.
So after more investigation, and with the help of this post: HawtIO + Camel plugin - Multiple context not showing up - Limits to max3
I was able to find the setting that allows the ActiveMQ view in Hawtio to show more than 500 entries. The setting is located here:
On the right side of the Hawtio application there is your user picture with a small arrow. Click it and select "Preferences".
In "Preferences" select "Jolokia".
In "Jolokia", edit "Max collection size" to the maximum you want, press "Apply", and restart the browser.
The only problem left is the unlimited option: when I set "-1" on the broker side, Hawtio limits me to 200 entries...
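(One alternative sketch, not from the original answer: browse the queue over plain JMX, which sidesteps Hawtio's Jolokia collection-size cap, though the broker-side maxBrowsePageSize still applies. The JMX service URL and brokerName below are assumptions for a default ActiveMQ 5.x install.)

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrowseQueue {
    public static void main(String[] args) throws Exception {
        // Assumed default JMX endpoint; adjust host/port for your broker.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=incoming.status");
            // QueueViewMBean.browse() returns one CompositeData per message.
            CompositeData[] msgs = (CompositeData[]) mbsc.invoke(
                    queue, "browse", null, null);
            System.out.println(msgs.length + " messages browsed");
        }
    }
}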

Voldemort replication and failover details

I'm evaluating Voldemort and encountered some confusing things related to replication and failover. I tried to make a simple two-node cluster configuration where each node is a backup for the other, so data written to node 1 should be replicated to node 2 and vice versa. In case of node 1 failing, the second node should serve client requests. After node 1 recovers, data should be transferred back to node 1. I think it's a very common and clear case. So I made the following configuration.
<cluster>
    <name>perf_cluster</name>
    <server>
        <id>0</id>
        <host>10.50.3.156</host>
        <http-port>8081</http-port>
        <socket-port>6666</socket-port>
        <admin-port>6667</admin-port>
        <partitions>0, 1, 2, 3</partitions>
        <zone-id>0</zone-id>
    </server>
    <server>
        <id>1</id>
        <host>10.50.3.157</host>
        <http-port>8081</http-port>
        <socket-port>6666</socket-port>
        <admin-port>6667</admin-port>
        <partitions>4, 5, 6, 7</partitions>
        <zone-id>0</zone-id>
    </server>
</cluster>
<stores>
    <store>
        <name>perftest</name>
        <persistence>memory</persistence>
        <description>Performance Test store</description>
        <owners>owner</owners>
        <routing>client</routing>
        <replication-factor>2</replication-factor>
        <required-reads>1</required-reads>
        <required-writes>1</required-writes>
        <key-serializer>
            <type>string</type>
        </key-serializer>
        <value-serializer>
            <type>java-serialization</type>
        </value-serializer>
    </store>
</stores>
I perform the following test:
Start both nodes;
Connect cluster via shell using 'bin/voldemort-shell.sh perftest tcp://10.50.3.156:6666';
Put the key-value "1" "a";
Perform 'preflist "1"' which returns me 'Node 1' 'Node 0' so I assume that 'get' request will be sent to Node 1 first;
Crash Node 1;
Get key "1". I see some errors related to loss of connectivity but finally it returns me correct value;
Start Node 1;
Get key "1". It says that Node 1 is available but returns me 'null' instead of the value. So I assume the Node 1 didn't get the data from Node 0 and since my required-reads = 1 it doesn't ask for Node 0 and returns me null.
Crash Node 0;
Key "1" is lost forever because it wasn't replicated to Node 1.
I'm more than sure that I misunderstand something in the configuration or cluster replication details. Could you clarify why the data doesn't replicate back from Node 0 to Node 1 after recovery? And am I right that replication is a client responsibility, not the server's? If so, how should the data be replicated after node recovery?
Thanks in advance.
I don't know if you've already solved the problem, but take a look at: http://code.google.com/p/project-voldemort/issues/detail?id=246
Remember that the memory store is only for testing (JUnit) purposes; you should use the readonly or bdb stores.
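(As a quick client-side check - a sketch under assumed Voldemort-client API names of that era: bootstrap against both nodes, so client-side routing can keep working when one node is down.)

import voldemort.client.ClientConfig;
import voldemort.client.SocketStoreClientFactory;
import voldemort.client.StoreClient;
import voldemort.client.StoreClientFactory;

public class TwoNodeCheck {
    public static void main(String[] args) {
        // Bootstrap URLs for both nodes, matching the cluster.xml above.
        StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls(
                        "tcp://10.50.3.156:6666", "tcp://10.50.3.157:6666"));
        StoreClient<String, String> client = factory.getStoreClient("perftest");
        client.put("1", "a");                       // written to the preference list
        System.out.println(client.getValue("1"));   // read back via client routing
    }
}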

NHIbernate SysCache2 and SQLDependency problems

I've set enable_broker on my SQL Server 2008 to use SqlDependency.
I've configured my .Net app to use Syscache2 with a cache region as follows:
<syscache2>
    <cacheRegion name="BlogEntriesCacheRegion" priority="High">
        <dependencies>
            <commands>
                <add name="BlogEntries"
                     command="Select EntryId from dbo.Blog_Entries where ENABLED=1" />
            </commands>
        </dependencies>
    </cacheRegion>
</syscache2>
My Hbm file looks like this:
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
    <class name="BlogEntry" table="Blog_Entries">
        <cache usage="nonstrict-read-write" region="BlogEntriesCacheRegion"/>
        ....
    </class>
</hibernate-mapping>
I also have query caching enabled for queries against BlogEntry
When I first query, the results are cached in the 2nd level cache, as expected.
If I now go and change a row in blog_entries, everything works as expected: the cache is expired, and I get this message:
2010-03-03 12:56:50,583 [7] DEBUG NHibernate.Caches.SysCache2.SysCacheRegion - Cache items for region 'BlogEntriesCacheRegion' have been removed from the cache for the following reason : DependencyChanged
I expect that. On the next page request, the query and its results are stored back in the cache. However, the cache is immediately invalidated again, even though nothing further has changed.
DEBUG NHibernate.Caches.SysCache2.SysCacheRegion - Cache items for region 'BlogEntriesCacheRegion' have been removed from the cache for the following reason : DependencyChanged
My cache is invalidated on every subsequent request, with no changes to the underlying data. Only a restart of the application allows the cache to operate again, and then only until the data is first cached again (the first dirtying of the cache causes it to never work again).
Has anyone seen this problem or got any ideas what this could be? I was thinking that SysCache2 needs to handle the SqlDependency OnChange event, which it probably is doing, so I don't understand why SQL Server keeps sending SqlDependency DependencyChanged notifications.
thanks
We are getting the same problem on one database instance, but not on the other. It definitely seems to be some kind of permission problem on the database end, because the exact same NHibernate configuration is used in both cases.
In the working case the cache behaves as expected; in the other (a database engine with much stricter permissions) we get the exact same behaviour you mentioned.

Maximum number of messages sent to a Queue in OpenMQ?

I am currently using GlassFish v2.1 and I have set up a queue to send and receive messages from, with session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages to the queue? I do have a "developer" profile set up for the GlassFish domain. Could that be the reason? Or is there some resource configuration setting that I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
    <admin-object-resource
        enabled="true"
        jndi-name="jms/UpdateQueue"
        object-type="user"
        res-adapter="jmsra"
        res-type="javax.jms.Queue">
        <description/>
        <property name="Name" value="UpdatePhysicalQueue"/>
    </admin-object-resource>
    <connector-resource
        enabled="true" jndi-name="jms/UpdateQueueFactory"
        object-type="user"
        pool-name="jms/UpdateQueueFactoryPool">
        <description/>
    </connector-resource>
    <connector-connection-pool
        associate-with-thread="false"
        connection-creation-retry-attempts="0"
        connection-creation-retry-interval-in-seconds="10"
        connection-definition-name="javax.jms.QueueConnectionFactory"
        connection-leak-reclaim="false"
        connection-leak-timeout-in-seconds="0"
        fail-all-connections="false"
        idle-timeout-in-seconds="300"
        is-connection-validation-required="false"
        lazy-connection-association="false"
        lazy-connection-enlistment="false"
        match-connections="true"
        max-connection-usage-count="0"
        max-pool-size="32"
        max-wait-time-in-millis="60000"
        name="jms/UpdateFactoryPool"
        pool-resize-quantity="2"
        resource-adapter-name="jmsra"
        steady-pool-size="8"
        validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm... further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what would I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record based on the corresponding value of that record in a legacy table to which I have read-only access. This table has more than 10k records in it. As of now, I am sequentially going through each record in a for loop, getting the corresponding record from the legacy table, comparing the field values, updating the record if necessary, and adding corresponding new records in other tables.
However, I was hoping to improve performance by processing all the records asynchronously. To do that I was thinking of sending each record info as a separate message and hence requiring so many messages.
To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
But actually, I wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
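(A minimal sketch of that approach - our illustration, not from the original answer: commit the transacted session every 1,000 sends so no single transaction reaches the broker's imq.transaction.producer.maxNumMsgs limit. The factory/queue lookup is assumed to come from the jms/UpdateQueueFactory and jms/UpdateQueue resources above.)

import java.util.List;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;

public class ChunkedSender {
    private static final int CHUNK = 1000; // stay at the broker's per-transaction limit

    public static void send(QueueConnectionFactory factory, Queue queue,
                            List<String> payloads) throws JMSException {
        QueueConnection conn = factory.createQueueConnection();
        try {
            QueueSession session = conn.createQueueSession(true, Session.SESSION_TRANSACTED);
            QueueSender sender = session.createSender(queue);
            int inTx = 0;
            for (String payload : payloads) {
                sender.send(session.createTextMessage(payload));
                if (++inTx == CHUNK) {
                    session.commit(); // end the transaction before hitting the limit
                    inTx = 0;
                }
            }
            session.commit();         // commit the final partial chunk
        } finally {
            conn.close();
        }
    }
}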