Maximum number of messages sent to a Queue in OpenMQ? - glassfish

I am currently using GlassFish v2.1 and I have set up a queue to send messages to and receive messages from, using session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages? I do have a "developer" profile set up for the GlassFish domain. Could that be the reason? Or is there some resource configuration setting that I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
    <admin-object-resource
        enabled="true"
        jndi-name="jms/UpdateQueue"
        object-type="user"
        res-adapter="jmsra"
        res-type="javax.jms.Queue">
        <description/>
        <property name="Name" value="UpdatePhysicalQueue"/>
    </admin-object-resource>
    <connector-resource
        enabled="true" jndi-name="jms/UpdateQueueFactory"
        object-type="user"
        pool-name="jms/UpdateQueueFactoryPool">
        <description/>
    </connector-resource>
    <connector-connection-pool
        associate-with-thread="false"
        connection-creation-retry-attempts="0"
        connection-creation-retry-interval-in-seconds="10"
        connection-definition-name="javax.jms.QueueConnectionFactory"
        connection-leak-reclaim="false"
        connection-leak-timeout-in-seconds="0"
        fail-all-connections="false"
        idle-timeout-in-seconds="300"
        is-connection-validation-required="false"
        lazy-connection-association="false"
        lazy-connection-enlistment="false"
        match-connections="true"
        max-connection-usage-count="0"
        max-pool-size="32"
        max-wait-time-in-millis="60000"
        name="jms/UpdateFactoryPool"
        pool-resize-quantity="2"
        resource-adapter-name="jmsra"
        steady-pool-size="8"
        validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm... further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what would I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record based on the corresponding value of that record in a legacy table, to which I have read-only access. This table has more than 10k records in it. As of now, I am going through each record sequentially in a for loop, getting the corresponding record from the legacy table, comparing the field values, updating the record if necessary, and adding corresponding new records in other tables.
However, I was hoping to improve performance by processing all the records asynchronously. To do that I was thinking of sending each record's info as a separate message, hence the need for so many messages.

To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
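For example, a sketch of one common way to change the property (the path below is an assumption that varies by install and instance name; the blog post covers the GlassFish-specific details): add the property to the broker instance's config.properties and restart the broker.

# e.g. <domain-dir>/imq/instances/imqbroker/props/config.properties
imq.transaction.producer.maxNumMsgs=5000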
But actually, I wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
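For illustration, a minimal sketch of splitting the sends across several transactions, using the JNDI names from the question. CHUNK_SIZE is an arbitrary value below the broker limit. Note that inside an EJB with container-managed transactions you cannot call session.commit() yourself; there you would instead split the work across several transaction boundaries.

// Sketch: send in chunks so no single transaction exceeds the broker limit.
import java.util.List;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

public class ChunkedQueueSender {
    private static final int CHUNK_SIZE = 500; // illustrative, below the 1000 limit

    public static void sendAll(List<String> records) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("jms/UpdateQueueFactory");
        Queue queue = (Queue) ctx.lookup("jms/UpdateQueue");

        QueueConnection connection = factory.createQueueConnection();
        try {
            // true = transacted session; messages become visible on commit()
            QueueSession session =
                    connection.createQueueSession(true, Session.SESSION_TRANSACTED);
            QueueSender sender = session.createSender(queue);

            int inCurrentTx = 0;
            for (String record : records) {
                sender.send(session.createTextMessage(record));
                if (++inCurrentTx >= CHUNK_SIZE) {
                    session.commit(); // end this transaction, start the next
                    inCurrentTx = 0;
                }
            }
            session.commit(); // commit any remaining messages
        } finally {
            connection.close();
        }
    }
}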

Related

Spring JMS - message-driven-channel-adapter: the number of consumers doesn't reduce to the standard level

I have a message-driven-channel-adapter and I defined max-concurrent-consumers as 100 and concurrent-consumers as 2. When I ran a load test, I saw the number of concurrent consumers increase, but after the load test it didn't drop back to the standard level. I'm checking it with the RabbitMQ management portal.
When the project is restarted (with no load test), the GET (Empty) rate is 650/s, but after a load test it stays at about 2500/s and does not return to 650/s. I think the number of concurrent consumers gets increased but is never reduced to its original value.
How can I make it drop back to the normal level again?
Here is my message-driven-channel-adapter definition:
<int-jms:message-driven-channel-adapter id="inboundAdapter"
    auto-startup="true"
    connection-factory="jmsConnectionFactory"
    destination="inboundQueue"
    channel="requestChannel"
    error-channel="errorHandlerChannel"
    receive-timeout="-1"
    concurrent-consumers="2"
    max-concurrent-consumers="100" />
With receiveTimeout=-1, the container has no control over an idle consumer (it is blocked in the JMS client).
You also need to set max-messages-per-task for the container to consider stopping a consumer.
<int-jms:message-driven-channel-adapter id="inboundAdapter"
    auto-startup="true"
    connection-factory="jmsConnectionFactory"
    destination-name="inboundQueue"
    channel="requestChannel"
    error-channel="errorHandlerChannel"
    receive-timeout="5000"
    concurrent-consumers="2"
    max-messages-per-task="10"
    max-concurrent-consumers="100" />
The time that elapses before an idle consumer is stopped is receiveTimeout * maxMessagesPerTask.
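With the settings above, for example, that is 5000 ms * 10 = 50 seconds before an idle consumer is stopped.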

Gemfire WAN Gateway-sender configuration

We are using the Gemfire WAN topology and have problems setting up the gateway-senders.
Couple of assumptions:
- Replicated regions
- Serial gateway-senders
- manual-start is false for all gateway-senders
Let's say we have 2 clusters; within each cluster, we have 2 members (Member A and Member B).
Member A's cache.xml
<gfe:gateway-sender id="gateway-sender-A" parallel="false" remote-distributed-system-id="2" manual-start="false" />
<gfe:replicated-region name="data" scope="DISTRIBUTED_NO_ACK">
    <gfe:replicated-region name="subData" data-policy="REPLICATE" scope="DISTRIBUTED_ACK">
        <gfe:gateway-sender-ref bean="gateway-sender-A"/>
    </gfe:replicated-region>
</gfe:replicated-region>
Member B's cache.xml
<gfe:gateway-sender id="gateway-sender-B" parallel="false" remote-distributed-system-id="2" manual-start="false" />
<gfe:replicated-region name="data" scope="DISTRIBUTED_NO_ACK">
    <gfe:replicated-region name="subData" data-policy="REPLICATE" scope="DISTRIBUTED_ACK">
        <gfe:gateway-sender-ref bean="gateway-sender-B"/>
    </gfe:replicated-region>
</gfe:replicated-region>
There is a problem when we start up the two members within one cluster. It raises this error:
java.lang.IllegalStateException: Cannot create Region /data with [gateway-sender-A] gateway sender ids because another cache has the same region defined with [gateway-sender-B] gateway sender ids
Looking at the "High Availability for Gateway Senders" documentation, our understanding is that we can create 2 gateway-senders, in which only one will be doing the sending at a given point in time. Ultimately, we want to have 2 gateway senders (one in each member) for one cache region, one as the primary sender and the other as the secondary sender.
Thanks
From Geode documentation, it says
For serial Senders, Queue HA is achieved by configuring identical serial Senders in multiple members. The Queue is replicated between the members.
So if the two gateway senders in members A and B are doing the same job (apart from their primary/secondary roles), you should use "identical" settings.
Among the gateway senders, one will manage to acquire a specific distributed lock and become the primary sender, usually the first one that comes up. I don't see a property to force one to become the primary.
Geode is the open-source version of GemFire, in case you're wondering.
After changing the sender-ids to the same for both members, we have another problem:
java.lang.IllegalStateException: Cannot create Gateway Sender "some-gateway-sender-id" with manual start "false" because another cache has the same Gateway Sender defined with manual start "true"
It seems like our problem was the inconsistent format.
Member A used the native GemFire cache XML format
<?xml version="1.0"?>
<!DOCTYPE cache PUBLIC "-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN" "http://www.gemstone.com/dtd/cache8_0.dtd">
Member B used the Spring Data GemFire format
xmlns:gfe="http://www.springframework.org/schema/gemfire" xsi:schemaLocation="http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">
<gfe:gateway-sender ...>
We switched to using the Spring Data GemFire format for both members, and that solved both issues.
TL;DR: gateway senders need to have the same id if they are in the same cluster, and the members must use consistent cache XML formats.
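For illustration, a minimal sketch of the resulting configuration, identical in both members (the sender id here is made up):

<gfe:gateway-sender id="gateway-sender-shared" parallel="false" remote-distributed-system-id="2" manual-start="false" />
<gfe:replicated-region name="data" scope="DISTRIBUTED_NO_ACK">
    <gfe:replicated-region name="subData" data-policy="REPLICATE" scope="DISTRIBUTED_ACK">
        <gfe:gateway-sender-ref bean="gateway-sender-shared"/>
    </gfe:replicated-region>
</gfe:replicated-region>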

How to get the number of rows updated/added when we make a DB call via JCA files in an OSB proxy service

As a client, I am inserting/updating/fetching values to/from a back-end DB via JCA files, by creating a business service and making the call. I am facing a problem with insert/update calls: for every request I get a success response, irrespective of whether the DB was actually added to/updated. If there were a way to confirm how many rows were updated after an insert/update, that would confirm the operation was successful.
Below is the simple JCA file that updates the DB. Can you please let me know what extra configuration I need in order to get the number of rows updated?
<adapter-config name="RetrieveSecCustRelationship" adapter="Database Adapter" wsdlLocation="RetrieveSecCustRelationship.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
    <connection-factory location="eis/DB/Database" UIConnectionName="Database" adapterRef=""/>
    <endpoint-interaction portType="RetrieveSecCustRelationship_ptt" operation="RetrieveSecCustRelationship">
        <interaction-spec className="oracle.tip.adapter.db.DBPureSQLInteractionSpec">
            <property name="SqlString" value="update CUSTOMER_INSTALLED_PRODUCT set CUSTOMER_ID=? where CUSTOMER_ID=?"/>
            <property name="GetActiveUnitOfWork" value="false"/>
            <property name="QueryTimeout" value="6"/>
        </interaction-spec>
        <input/>
        <output/>
    </endpoint-interaction>
</adapter-config>
Thanks & Regards
I'm afraid you will need to wrap it in PL/SQL and extend that PL/SQL so that the number of affected rows is returned. Then you can extract this value from the response variable with XPath.
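For illustration, a minimal sketch of such a wrapper around the UPDATE from the question (the procedure and parameter names are made up; SQL%ROWCOUNT is what carries the count):

CREATE OR REPLACE PROCEDURE update_customer_id (
    p_new_id       IN  CUSTOMER_INSTALLED_PRODUCT.CUSTOMER_ID%TYPE,
    p_old_id       IN  CUSTOMER_INSTALLED_PRODUCT.CUSTOMER_ID%TYPE,
    p_rows_updated OUT NUMBER
) AS
BEGIN
    UPDATE CUSTOMER_INSTALLED_PRODUCT
       SET CUSTOMER_ID = p_new_id
     WHERE CUSTOMER_ID = p_old_id;
    -- SQL%ROWCOUNT holds the number of rows affected by the last statement
    p_rows_updated := SQL%ROWCOUNT;
END;
/

You would then call this procedure from the Database Adapter instead of using the pure-SQL interaction, and read p_rows_updated out of the response with XPath.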

SAP pool capacity and peak limit - Mule ESB

Hi, I have an SAP connector, and SAP is sending 10,000 IDocs in parallel. What would be good numbers for pool capacity and peak limit? Right now I have set 2000 and 1000 respectively. Any suggestions for better performance?
<sap:connector name="SAP" jcoAsHost="${saphost}" jcoUser="${sapuser}" jcoPasswd="${sappassword}" jcoSysnr="${sapsystemnumber}" jcoClient="${sapclient}" jcoLang="${saploginlanguage}" validateConnections="true" doc:name="SAP" jcoPeakLimit="1000" jcoPoolCapacity="2000"/>
When dealing with big amounts of IDocs, the first thing I recommend to improve performance is to configure the SAP system to send IDocs in batches instead of individually. That is, the IDocs are sent in groups of whatever batch size you define in the Partner Profile section of SAPGUI. Among other settings, you have to set "Pack. Size" and select "Collect IDocs" as the Output Mode.
If you are not familiar with SAPGUI, request your SAP Admin to configure it for you.
Additionally, to get the most out of the SAP connector's connection pooling, I suggest you use SAP Client Extended Properties to take full advantage of JCo's additional connection parameters. These extended properties are defined in a Spring bean and set via jcoClientExtendedProperties at the connector or endpoint level. Take a look at the following example:
<spring:beans>
    <spring:bean name="sapClientProperties" class="java.util.HashMap">
        <spring:constructor-arg>
            <spring:map>
                <!-- Maximum number of active connections that can be created for a destination simultaneously -->
                <spring:entry key="jco.destination.peak_limit" value="15"/>
                <!-- Maximum number of idle connections kept open by the destination. A value of 0 means no connection pooling, i.e. connections will be closed after each request. -->
                <spring:entry key="jco.destination.pool_capacity" value="10"/>
                <!-- Time in ms after which the connections held by the internal pool can be closed -->
                <spring:entry key="jco.destination.expiration_time" value="300000"/>
                <!-- Interval in ms at which the timeout checker thread checks the connections in the pool for expiration -->
                <spring:entry key="jco.destination.expiration_check_period" value="60000"/>
                <!-- Max time in ms to wait for a connection if the max allowed number of connections is allocated by the application -->
                <spring:entry key="jco.destination.max_get_client_time" value="30000"/>
            </spring:map>
        </spring:constructor-arg>
    </spring:bean>
</spring:beans>
<sap:connector name="SAP"
    jcoAsHost="${sap.jcoAsHost}"
    jcoUser="${sap.jcoUser}"
    jcoPasswd="${sap.jcoPasswd}"
    jcoSysnr="${sap.jcoSysnr}"
    jcoClient="${sap.jcoClient}"
    ...
    jcoClientExtendedProperties-ref="sapClientProperties" />
Important: to enable the pool, it is mandatory that you set jcoExpirationTime in addition to the peak limit and pool capacity.
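In connector-attribute form that would look something like this (the expiration value, in ms, is illustrative):

<sap:connector name="SAP"
    jcoAsHost="${saphost}"
    ...
    jcoPoolCapacity="2000"
    jcoPeakLimit="1000"
    jcoExpirationTime="300000"
    doc:name="SAP"/>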
For further details, please refer to SAP Connector Advanced Features documentation.

Apigee Spike Arrest Rate Limit Application

I am an Apigee newbie.
I am trying to understand the Spike Arrest policy.
I am looking at this documentation:
http://apigee.com/docs/api-services/content/shield-apis-using-spikearrest
http://apigee.com/docs/api-services/content/policy-attachment-and-enforcement
The one thing I cannot determine for certain is whether, when the Spike Arrest policy is applied to an ApiProxy, the rate limit is applied per key/client dev application, or shared between all keys/client dev applications.
For example if we have the following config:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest async="false" continueOnError="false" enabled="true" name="spikearrest-1">
    <DisplayName>SpikeArrest-1</DisplayName>
    <FaultRules/>
    <Properties/>
    <Identifier ref="request.header.some-header-name"/>
    <MessageWeight ref="request.header.weight"/>
    <Rate>50ps</Rate>
</SpikeArrest>
And Client Dev Apps:
1. DevApp1
2. DevApp2
Is the 50ps rate limit shared between DevApp1 and DevApp2, or do DevApp1 and DevApp2 get 50ps rate limit each?
Thanks,
You can use any of the predefined variables:
http://apigee.com/docs/api-services/api/variables-reference
The variable that is probably the most commonly used for Spike Arrest is client.ip.
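For example:

<Identifier ref="client.ip"/>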
Edge will make all elements of a request message available. If your clients are adding a client_id (aka API key) to a request as a query parameter, for example api.call.com?client_id=u34r8ur, then you would set the variable in your Spike Arrest Identifier to be:
<Identifier ref="request.queryparam.client_id"/>
Or if it is in an HTTP header:
<Identifier ref="request.header.client_id"/>
Hope that helps!
It's per app, as identified by your Identifier.