Issue in a 2-node cluster using Infinispan 5.3 | ActionStatus.ABORT_ONLY > is not in a valid state to be invoking cache operations on

I have a setup of a 2-node cluster using Infinispan 5.3 and I am testing the failover scenario. When I kill one node, I get the exception below (I'm using the sync cache). The cluster does not recover, so I need to restart the application, which is not practical in a production environment.
2020-05-06 18:50:28,082 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] ISPN000136: Execution error
java.lang.IllegalStateException: Transaction TransactionImple < ac, BasicAction: -3f57f478:dd0a:5eb2b455:2d461 status: ActionStatus.ABORT_ONLY > is not in a valid state to be invoking cache operations on.
at org.infinispan.interceptors.TxInterceptor.enlist(TxInterceptor.java:275)
at org.infinispan.interceptors.TxInterceptor.enlistIfNeeded(TxInterceptor.java:239)
at org.infinispan.interceptors.TxInterceptor.enlistReadAndInvokeNext(TxInterceptor.java:233)
at org.infinispan.interceptors.TxInterceptor.visitGetKeyValueCommand(TxInterceptor.java:229)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:134)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:216)
at org.infinispan.statetransfer.StateTransferInterceptor.handleDefault(StateTransferInterceptor.java:200)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.CacheMgmtInterceptor.visitGetKeyValueCommand(CacheMgmtInterceptor.java:113)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:134)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.interceptors.IsMarshallableInterceptor.visitGetKeyValueCommand(IsMarshallableInterceptor.java:97)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:128)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:92)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:343)
at org.infinispan.CacheImpl.containsKey(CacheImpl.java:372)
at org.infinispan.DecoratedCache.containsKey(DecoratedCache.java:410)
at com.abcr.ServiceContext.existsInSyncCache(ServiceContext.java:1740)
at com.abcr.ServiceContext.getObjectForUpdateInSyncCache(ServiceContext.java:1778)
at com.abcr.core.cache.ClusterServiceNodeListCacheManager.getObjectForUpdate(ClusterServiceNodeListCacheManager.java:90)
at com.suntecgroup.tbms.tpe.core.server.ServerManager.callBackOnMembersModified(ServerManager.java:3385)
at com.abcr.core.ServiceContainerCommandDespatcher.run(ServiceContainerCommandDespatcher.java:64)
2020-05-06 18:50:28,086 ERROR [com.abcr.core.ServiceContainer] Invocation of callback APIs on leaving coordinator role failed for service 'ABC'.
com.suntecgroup.tbms.container.services.ContainerPlatformServicesException: Failed to retrieve object[SERVER/SERVICE_NODES/28000] for update.
This is my Infinispan and JGroups configuration:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.3 http://www.infinispan.org/schemas/infinispan-config-5.3.xsd"
xmlns="urn:infinispan:config:5.3">
<global>
<!-- Note that if these are left blank, defaults are used. See the user guide for what these defaults are -->
<asyncListenerExecutor factory="org.infinispan.executors.DefaultExecutorFactory">
<properties>
<property name="maxThreads" value="5" />
<property name="threadNamePrefix" value="AsyncListenerThread" />
</properties>
</asyncListenerExecutor>
<asyncTransportExecutor factory="org.infinispan.executors.DefaultExecutorFactory">
<properties>
<property name="maxThreads" value="25" />
<property name="threadNamePrefix" value="AsyncSerializationThread" />
</properties>
</asyncTransportExecutor>
<evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
<properties>
<property name="threadNamePrefix" value="EvictionThread" />
</properties>
</evictionScheduledExecutor>
<replicationQueueScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
<properties>
<property name="threadNamePrefix" value="ReplicationQueueThread" />
</properties>
</replicationQueueScheduledExecutor>
<globalJmxStatistics enabled="false" jmxDomain="infinispan_1" />
<!--
If the transport is omitted, there is no way to create distributed or clustered caches.
There is no added cost to defining a transport but not creating a cache that uses one, since the transport
is created and initialized lazily.
-->
<transport clusterName="PC_SITE_1" distributedSyncTimeout="50000" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
<properties>
<property name="configurationFile" value="./tmp/_clusterconfig/pc_jgroups_main_sync.xml" />
</properties>
</transport>
<!-- Note that the JGroups transport uses sensible defaults if no configuration property is defined. -->
<!-- See the JGroupsTransport javadocs for more flags -->
<!-- Again, sensible defaults are used here if this is omitted. -->
<serialization marshallerClass="org.infinispan.marshall.VersionAwareMarshaller" version="1.0" />
<!--
Used to register JVM shutdown hooks.
hookBehavior: DEFAULT, REGISTER, DONT_REGISTER
-->
<shutdown hookBehavior="DEFAULT" />
</global>
<!-- *************************** -->
<!-- Default "template" settings -->
<!-- *************************** -->
<!-- this is used as a "template" configuration for all caches in the system. -->
<default>
<!--
isolation levels supported: READ_COMMITTED and REPEATABLE_READ
-->
<locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="60000" writeSkewCheck="false" concurrencyLevel="5000" useLockStriping="false" />
<!--
Used to register a transaction manager and participate in ongoing transactions.
-->
<!-- ECPCacheTxManagerLookup -->
<!--
Used to register JMX statistics in any available MBean server
-->
<jmxStatistics enabled="false" />
<!--
Used to enable invocation batching and allow the use of Cache.startBatch()/endBatch() methods.
-->
<clustering mode="replication">
<sync replTimeout="600000" />
<stateTransfer timeout="480000" fetchInMemoryState="true" />
</clustering>
<storeAsBinary enabled="true" />
</default>
<namedCache name="GLOBAL_SYNC_CACHE">
<transaction transactionMode="TRANSACTIONAL" transactionManagerLookupClass="com.suntecgroup.tbms.container.services.cluster.ContainerCacheTxManagerLookup" syncRollbackPhase="false" syncCommitPhase="true" useEagerLocking="true" lockingMode="PESSIMISTIC" />
</namedCache>
<namedCache name="GLOBAL_NONTX_SYNC_CACHE">
<transaction transactionMode="NON_TRANSACTIONAL" />
</namedCache>
</infinispan>
JGroups configuration:
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-2.8.xsd">
<TCP bind_port="7800" loopback="true" recv_buf_size="20M" send_buf_size="640K" max_bundle_size="64000" max_bundle_timeout="30" enable_bundling="false" use_send_queues="true" sock_conn_timeout="300" tcp_nodelay="true" thread_pool.enabled="true" thread_pool.min_threads="1" thread_pool.max_threads="25" thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="false" thread_pool.queue_max_size="100" thread_pool.rejection_policy="run" oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="run" enable_diagnostics="false" />
<!--MPING mcast_addr="232.1.2.13"
mcast_port="7500"
num_initial_members="2"
timeout="2000" /-->
<TCPPING timeout="3000" initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}" port_range="0" num_initial_members="3" />
<MERGE2 max_interval="100000" min_interval="20000" />
<FD_SOCK />
<FD timeout="60000" max_tries="5" />
<VERIFY_SUSPECT timeout="30000" />
<BARRIER />
<pbcast.NAKACK use_mcast_xmit="false" exponential_backoff="500" discard_delivered_msgs="true" />
<UNICAST timeout="300,600,1200" />
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000" />
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true" />
<FRAG2 frag_size="60000" />
<pbcast.STATE_TRANSFER />
</config>

The current transaction was aborted (probably due to a timeout, but possibly as a consequence of a delivery failure). You need to roll back the current transaction and start a new one.
However, let me note that 5.3 was released on 2013-06-26, so you are using an almost seven-year-old version. If there is a bug, no one will even try to check it out.
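For illustration, here is a minimal sketch (class and method names are assumptions, not taken from your code) of rolling back the aborted transaction and retrying the cache operation in a fresh one:

import javax.transaction.TransactionManager;
import org.infinispan.Cache;

public class SyncCacheAccess {
    private final Cache<String, Object> cache;
    private final TransactionManager tm;

    public SyncCacheAccess(Cache<String, Object> cache) {
        this.cache = cache;
        // the same transaction manager the cache is configured with
        this.tm = cache.getAdvancedCache().getTransactionManager();
    }

    public Object getFresh(String key) throws Exception {
        // a transaction already marked ABORT_ONLY must be rolled back first,
        // otherwise every cache call keeps failing with the IllegalStateException
        if (tm.getTransaction() != null) {
            tm.rollback();
        }
        tm.begin();
        try {
            Object value = cache.get(key); // enlists in the new transaction
            tm.commit();
            return value;
        } catch (Exception e) {
            if (tm.getTransaction() != null) {
                tm.rollback();
            }
            throw e;
        }
    }
}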

Related

Apache Ignite Structured Logging

I am looking to enable structured logging for Ignite.
Ignite runs inside a docker container.
I enabled the log4j2 module and added a log4j2 configuration file that tries to use <JsonTemplateLayout.../> as described here, but in the logs I get the message:
Console contains an invalid element or attribute "JsonTemplateLayout"
This is probably caused by not having the log4j-layout-template-json dependency available inside Ignite. Is there a way to add the dependency to Ignite, or is there another option for getting structured logging to work?
Ignite configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="gridLogger">
<bean class="org.apache.ignite.logger.log4j2.Log4J2Logger">
<constructor-arg type="java.lang.String" value="config/ignite-log4j2-custom.xml"/>
</bean>
</property>
</bean>
</beans>
log4j2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration monitorInterval="60" status="debug">
<Appenders>
<Console name="CONSOLE" target="SYSTEM_OUT">
<!-- <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/> -->
<ThresholdFilter level="ERROR" onMatch="DENY" onMismatch="ACCEPT"/>
<JsonTemplateLayout eventTemplateUri="classpath:EcsLayout.json"/>
</Console>
<Console name="CONSOLE_ERR" target="SYSTEM_ERR">
<!-- <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/> -->
<JsonTemplateLayout eventTemplateUri="classpath:EcsLayout.json"/>
</Console>
<File name="CONSISTENCY" fileName="${sys:IGNITE_HOME}/work/log/consistency.log">
<PatternLayout>
<Pattern>"[%d{ISO8601}][%-5p][%t][%c{1}] %m%n"</Pattern>
</PatternLayout>
</File>
<Routing name="FILE">
<Routes pattern="$${sys:nodeId}">
<Route>
<RollingFile name="Rolling-${sys:nodeId}" fileName="${sys:IGNITE_HOME}/work/log/${sys:appId}-${sys:nodeId}.log"
filePattern="${sys:IGNITE_HOME}/work/log/${sys:appId}-${sys:nodeId}-%i-%d{yyyy-MM-dd}.log.gz">
<PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/>
<Policies>
<TimeBasedTriggeringPolicy interval="6" modulate="true" />
<SizeBasedTriggeringPolicy size="10 MB" />
</Policies>
</RollingFile>
</Route>
</Routes>
</Routing>
</Appenders>
<Loggers>
<!-- <Logger name="org.apache.ignite" level="INFO"/> -->
<!--
Uncomment to disable courtesy notices, such as SPI configuration
consistency warnings.
-->
<!--
<Logger name="org.apache.ignite.CourtesyConfigNotice" level=OFF/>
-->
<Logger name="org.springframework" level="WARN"/>
<Logger name="org.eclipse.jetty" level="WARN"/>
<Logger name="org.apache.ignite.internal.visor.consistency" additivity="false" level="INFO">
<AppenderRef ref="CONSISTENCY"/>
</Logger>
<!--
Avoid warnings about failed bind attempt when multiple nodes running on the same host.
-->
<Logger name="org.eclipse.jetty.util.log" level="ERROR"/>
<Logger name="org.eclipse.jetty.util.component" level="ERROR"/>
<Logger name="com.amazonaws" level="WARN"/>
<Root level="INFO">
<!-- Uncomment to enable logging to console. -->
<AppenderRef ref="CONSOLE" level="INFO"/>
<AppenderRef ref="CONSOLE_ERR" level="ERROR"/>
<AppenderRef ref="FILE" level="DEBUG"/>
</Root>
</Loggers>
</Configuration>
When adding the JAR to libs (as suggested by Stanislav below) I get a step further, but also get an error (I'm not a Java person, so any hint is highly appreciated):
main ERROR An exception occurred processing Appender CONSOLE org.apache.logging.log4j.core.appender.AppenderLoggingException: java.lang.IllegalAccessError: class org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder tried to access method 'void org.apache.logging.log4j.core.layout.TextEncoderHelper.encodeText(java.nio.charset.CharsetEncoder, java.nio.CharBuffer, java.nio.ByteBuffer, java.lang.StringBuilder, org.apache.logging.log4j.core.layout.ByteBufferDestination)' (org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder and org.apache.logging.log4j.core.layout.TextEncoderHelper are in unnamed module of loader 'app')
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:165)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:134)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:125)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:89)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:542)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:500)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:483)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:82)
at org.apache.logging.log4j.core.Logger.log(Logger.java:161)
at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2205)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2159)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2142)
at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2017)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1983)
at org.apache.logging.log4j.spi.AbstractLogger.info(AbstractLogger.java:1275)
at org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:472)
at org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:464)
at org.apache.ignite.internal.GridLoggerProxy.info(GridLoggerProxy.java:137)
at org.apache.ignite.internal.plugin.IgniteLogInfoProviderImpl.ackConfiguration(IgniteLogInfoProviderImpl.java:222)
at org.apache.ignite.internal.plugin.IgniteLogInfoProviderImpl.ackKernalInited(IgniteLogInfoProviderImpl.java:98)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:902)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1721)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1160)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1054)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:940)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:839)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:709)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:365)
Caused by: java.lang.IllegalAccessError: class org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder tried to access method 'void org.apache.logging.log4j.core.layout.TextEncoderHelper.encodeText(java.nio.charset.CharsetEncoder, java.nio.CharBuffer, java.nio.ByteBuffer, java.lang.StringBuilder, org.apache.logging.log4j.core.layout.ByteBufferDestination)' (org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder and org.apache.logging.log4j.core.layout.TextEncoderHelper are in unnamed module of loader 'app')
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder.encode(JsonTemplateLayout.java:241)
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder.encode(JsonTemplateLayout.java:216)
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout.encode(JsonTemplateLayout.java:304)
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout.encode(JsonTemplateLayout.java:58)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:197)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:190)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:181)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:161)
... 31 more
Solution
As Stanislav Lukyanov (see accepted answer) suggested, the solution was to just download the JAR and place it under $IGNITE_HOME/libs. The error mentioned above was caused by a version mismatch. Having the following JARs with the correct versions made it work:
log4j-api-2.17.1.jar (default provided by ignite distribution)
log4j-core-2.17.1.jar (default provided by ignite distribution)
ignite-log4j2-2.13.0.jar (default provided by ignite distribution)
log4j-layout-template-json-2.17.1.jar (added, did not work with version 2.18.x)
If you run Ignite using Maven, you'll need to add the required dependency to your application POM, as described in the documentation:
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-layout-template-json</artifactId>
<version>2.18.0</version>
</dependency>
If you run Ignite using a ZIP distribution, you'll need to download the dependency as a JAR, e.g. from here, and add it to $IGNITE_HOME/libs.

Implementation of Qserver and Qmux

I'm learning jPOS. I have the jPOS programmer's guide and have gone through it.
It helped me a lot in developing my switch.
I have some questions.
Did I create my switch correctly?
What are the best practices for these situations:
I have a switch that gets different messages (two types of channel/packager) from one switch and responds back to it (QServer and QMUX, only QMUX, or ...).
I have a switch that gets different messages from different banks and should reply to them or forward the messages to another (do I need a QServer, just a QMUX, or more configuration on the TransactionManager side?).
Now I'm going to implement the first situation. In my switch, the flow is this way:
Q2 will deploy these XML files:
Server.xml
RequestListener Class
TransactionMangare.xml
Here, in QMUX, what I don't understand is:
Q2 will deploy these XML files
(then I have QMUX.xml, Channeladaptor.xml and the listener class [I want to know in what order they should come])
TransactionMangare.xml
Messages with different channels would come to me through a single port. Is that possible?
QMUX would use a key (bit 41 and the MTI), which requires knowing the channel before handing the message to the right one (otherwise it would get bit 41 wrong, or it would not parse the message at all).
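For context, this is roughly how I plan to call the MUX (a minimal sketch; the mux name "mymux" is just a placeholder, not from my deploy files):

import org.jpos.iso.ISOMsg;
import org.jpos.iso.MUX;
import org.jpos.q2.iso.QMUX;

public class MuxCall {
    public ISOMsg send(ISOMsg request) throws Exception {
        // the mux is registered under the name attribute of its QMUX deploy file
        MUX mux = QMUX.getMUX("mymux");
        // QMUX matches the response to the request by the configured key fields
        // (e.g. MTI and bit 41), so the channel must have parsed those fields correctly
        return mux.request(request, 30000L); // returns null if no response within the timeout
    }
}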
Here are my deploy files:
logger.xml
<?xml version="1.0" encoding="UTF-8"?>
<logger name="Q2Logger" class="org.jpos.q2.qbean.LoggerAdaptor">
<log-listener class="org.jpos.util.SimpleLogListener" />
</logger>
server.xml
<?xml version="1.0" encoding="UTF-8"?>
<server class="org.jpos.q2.iso.QServer" logger="Q2Logger" name="TransactionServer">
<attr name="port" type="java.lang.Integer">88800</attr>
<attr name="maxSessions" type="java.lang.Integer">20</attr>
<attr name="minSessions" type="java.lang.Integer">10</attr>
<!-- packager is customised (I also need another packager for another type of messages) -->
<channel class="org.jpos.iso.channel.ASCIIChannel" name="ASCIIChannel" logger="Q2Logger" packager="com.example.transaction.packager.ISO93APackager" header="ISO51300000">
<property name="timeout" value="70000" /> <!-- 7 minutes -->
<property name="keep-alive" value="true" />
</channel>
<!-- can have realm="incoming-request-listener" -> to automatically put into space ???? -->
<request-listener class="com.example.transaction.listener.ServerRequestListener" logger="Q2Logger" name="isoListener">
<property name="space" value="transient:default" />
<property name="queue" value="CerditCardTXNQueue" />
<property name="spaceTimeout" value="60000" />
</request-listener>
</server>
txnmgr.xml
<?xml version="1.0" encoding="UTF-8"?>
<txnmgr name="CreditCardTransactionManager" logger="Q2Logger" class="org.jpos.transaction.TransactionManager">
<property name="space" value="transient:default" />
<property name="queue" value="TXNQueue" />
<property name="sessions" value="5" />
<property name="debug" value="true" />
<participant class="com.example.transaction.participants.PrepareParticipant" logger="Q2Logger" />
<participant class="com.example.transaction.participants.ValidationParticipant" logger="Q2Logger" />
<participant class="com.example.transaction.participants.ProcessParticipant" logger="Q2Logger" />
<participant class="com.example.transaction.participants.SendResponseParticipant" logger="Q2Logger"/>
</txnmgr>
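For reference, my ServerRequestListener is along these lines (a simplified sketch, not my actual code; only the class name and the space/queue properties come from server.xml):

package com.example.transaction.listener;

import org.jpos.core.Configurable;
import org.jpos.core.Configuration;
import org.jpos.core.ConfigurationException;
import org.jpos.iso.ISOMsg;
import org.jpos.iso.ISORequestListener;
import org.jpos.iso.ISOSource;
import org.jpos.space.Space;
import org.jpos.space.SpaceFactory;
import org.jpos.transaction.Context;

public class ServerRequestListener implements ISORequestListener, Configurable {
    private Space sp;
    private String queue;
    private long timeout;

    public void setConfiguration(Configuration cfg) throws ConfigurationException {
        // picks up the space/queue/spaceTimeout properties from server.xml
        sp = SpaceFactory.getSpace(cfg.get("space", "transient:default"));
        queue = cfg.get("queue");
        timeout = cfg.getLong("spaceTimeout", 60000L);
    }

    public boolean process(ISOSource source, ISOMsg m) {
        Context ctx = new Context();
        ctx.put("SOURCE", source);   // participants reply through this source
        ctx.put("REQUEST", m);
        sp.out(queue, ctx, timeout); // the TransactionManager picks it up from this queue
        return true;                 // message accepted
    }
}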
Thanks,

Configuring Consumer Cancellation in RabbitMQ

We are using a 2-node active-active RabbitMQ cluster with mirrored queues, with the mirroring policy being:
"policies":[{"vhost":"/","name":"ha-all","pattern":"","apply-to":"all","definition":{"ha-mode":"all","ha-sync-mode":"automatic"},"priority":0}]
Versions: RabbitMQ 3.5.4, Erlang 17.4, spring-amqp/spring-rabbit 1.4.5.RELEASE
Now, we are trying to achieve consumer cancellation, as mentioned in Highly Available Queues.
However, since we have not used a channel, we can't use the basicConsume method as given in the above link.
How do I set "x-cancel-on-ha-failover" to true in the configuration itself?
The beans XML is as follows:
<rabbit:connection-factory id="connectionFactory"
addresses="localhost:5672"
username="guest"
password="guest"
channel-cache-size="5" />
<!-- CREATE THE JsonMessageConverter BEAN -->
<bean id="jsonMessageConverter" class="org.springframework.amqp.support.converter.JsonMessageConverter" />
<!-- Spring AMQP Template -->
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory" retry-template="retryTemplate" message-converter="jsonMessageConverter" />
<!-- in case connection is broken then Retry based on the below policy -->
<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
<property name="backOffPolicy">
<bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
<property name="initialInterval" value="500" />
<property name="multiplier" value="2" />
<property name="maxInterval" value="30000" />
</bean>
</property>
</bean>
<rabbit:queue name="testQueue" durable="true">
<rabbit:queue-arguments>
<entry key="x-max-priority">
<value type="java.lang.Integer">10</value>
</entry>
</rabbit:queue-arguments>
</rabbit:queue>
<bean id="messsageConsumer" class="consumer.RabbitConsumer">
</bean>
<rabbit:listener-container
connection-factory="connectionFactory" concurrency="5" max-concurrency="5" message-converter="jsonMessageConverter">
<rabbit:listener queues="testQueue" ref="messsageConsumer" />
</rabbit:listener-container>
The <rabbit:listener-container> actually populates a SimpleMessageListenerContainer bean in the background, and the latter supports public void setConsumerArguments(Map<String, Object> args) for exactly this purpose.
So, to meet your requirements, you just need to build the raw SimpleMessageListenerContainer <bean> for your messsageConsumer.
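For illustration, a minimal sketch of the equivalent wiring in Java (names are illustrative; the same properties can be set on a raw <bean> definition):

import java.util.Collections;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter;

public class ListenerContainerConfig {

    public static SimpleMessageListenerContainer container(ConnectionFactory cf, Object messageConsumer) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("testQueue");
        container.setConcurrentConsumers(5);
        container.setMaxConcurrentConsumers(5);
        // the argument the broker inspects to cancel the consumer on HA failover
        container.setConsumerArguments(
                Collections.<String, Object>singletonMap("x-cancel-on-ha-failover", true));
        // adapt to however the consumer bean handles messages
        container.setMessageListener(new MessageListenerAdapter(messageConsumer));
        return container;
    }
}

The queue, concurrency and listener values mirror the <rabbit:listener-container> element above.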
In the meantime, while you fix that in your application, I'd ask you to raise a JIRA about adding a <consumer-arguments> component, and we may be able to address it within the current GA deadline.

JBAS011466: PersistenceProvider 'com.impetus.kundera.KunderaPersistence' not found error from kundera 2.7.1

I am new to JBoss (7.1) and Kundera (2.7.1). I'm working on a project where I want to use a Cassandra data source, implementing JPA with Kundera. My persistence.xml is as follows:
<persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
version="1.0">
<persistence-unit name="cassandra_pu">
<provider>com.impetus.kundera.KunderaPersistence</provider>
<properties>
<property name="kundera.nodes" value="localhost" />
<property name="kundera.port" value="9160" />
<property name="kundera.keyspace" value="ech" />
<property name="kundera.dialect" value="cassandra" />
<property name="kundera.client.lookup.class" value="com.impetus.client.cassandra.thrift.ThriftClientFactory" />
<property name="kundera.cache.provider.class" value="com.impetus.kundera.cache.ehcache.EhCacheProvider"/>
<property name="kundera.cache.config.resource" value="/ehcache-test.xml" />
<!-- <property name="jboss.as.jpa.managed" value="false"/>-->
</properties>
</persistence-unit>
</persistence>
And my applicationContext.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:aop="http://www.springframework.org/schema/aop"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
<bean id="personDAO" class="com.impetus.kundera.examples.spring.PersonDAO">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="persistenceUnitName" value="cassandra_pu" />
</bean>
</beans>
When I remove the "jboss.as.jpa.managed = false" property from persistence.xml, I receive the JBAS011466: PersistenceProvider 'com.impetus.kundera.KunderaPersistence' not found error, and if I keep the "jboss.as.jpa.managed=false" property, I end up with a No persistence unit with name 'cassandra_pu' found error.
Complete trace for both errors:
No persistence unit with name 'cassandra_pu' found --- persistence.xml contains property name="jboss.as.jpa.managed" value="false"
Caused by: java.lang.IllegalArgumentException: No persistence unit with name 'cassandra_pu' found
at org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager.obtainPersistenceUnitInfo(DefaultPersistenceUnitManager.java:566) [spring-orm-3.2.4.RELEASE.jar:3.2.4.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.determinePersistenceUnitInfo(LocalContainerEntityManagerFactoryBean.java:308) [spring-orm-3.2.4.RELEASE.jar:3.2.4.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:260) [spring-orm-3.2.4.RELEASE.jar:3.2.4.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:310) [spring-orm-3.2.4.RELEASE.jar:3.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1541) [spring-beans-3.2.4.RELEASE.jar:3.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1479) [spring-beans-3.2.4.RELEASE.jar:3.2.4.RELEASE]
... 20 more
JBAS011466: PersistenceProvider 'com.impetus.kundera.KunderaPersistence' not found --- remove from persistence.xml - property name="jboss.as.jpa.managed" value="false"
Caused by: javax.persistence.PersistenceException: JBAS011466: PersistenceProvider 'com.impetus.kundera.KunderaPersistence' not found
at org.jboss.as.jpa.processor.PersistenceUnitDeploymentProcessor.lookupProvider(PersistenceUnitDeploymentProcessor.java:560)
at org.jboss.as.jpa.processor.PersistenceUnitDeploymentProcessor.deployPersistenceUnit(PersistenceUnitDeploymentProcessor.java:297)
at org.jboss.as.jpa.processor.PersistenceUnitDeploymentProcessor.addPuService(PersistenceUnitDeploymentProcessor.java:260)
at org.jboss.as.jpa.processor.PersistenceUnitDeploymentProcessor.handleEarDeployment(PersistenceUnitDeploymentProcessor.java:218)
at org.jboss.as.jpa.processor.PersistenceUnitDeploymentProcessor.deploy(PersistenceUnitDeploymentProcessor.java:121)
at org.jboss.as.server.deployment.DeploymentUnitPhaseService.start(DeploymentUnitPhaseService.java:116) [jboss-as-server-7.1.3.Final-redhat-4.jar:7.1.3.Final-redhat-4]
... 5 more
JARs in the EAR:
cassandra-connection-pool-0.7.1.jar
commons-lang-2.4.jar
commons-logging-1.1.1.jar
commons-pool-1.2.jar
guava-14.0.1.jar
hibernate-commons-annotations-4.0.2.Final.jar
hibernate-core-4.2.3.Final.jar
javassist-3.15.0-GA.jar
jta-1.1.jar
jts-1.11.jar
kundera-cassandra-2.7.1.jar
kundera-core-2.7.1.jar
lucene-core-3.5.0.jar
persistence-api-1.0.jar
slf4j-api-1.7.2.jar
spring-beans-3.0.0.RELEASE.jar
spring-context-3.0.0.RELEASE.jar
spring-core-3.0.0.RELEASE.jar
spring-jdbc-3.0.0.RELEASE.jar
spring-orm-3.0.0.RELEASE.jar
spring-tx-3.0.0.RELEASE.jar
P.S. - I don't have any JPA-related folders under <JBOSS_HOME>\modules\org\apache, and my persistence.xml is present under MyEAR.ear>META-INF>persistence.xml (along with application.xml, jboss-deployment-structure.xml and MANIFEST.MF).
Hope I've provided all the necessary information. I have been stuck on this the entire day. Any help is highly appreciated. Thank you.
This should help you out. Please have a look at https://github.com/impetus-opensource/Kundera/issues/390 and https://github.com/impetus-opensource/Kundera/issues/180.

Trying to get a sample petstore Spring MyBatis project working in IntelliJ

I have downloaded the sample from http://mybatis.github.io/spring/sample.html.
1) I then opened the pom.xml and imported it into IntelliJ
2) I added a Spring MVC facet
3) I added a web facet
4) I am using IntelliJ 12.1.6
Once done, the autowiring is failing. I am trying to learn this new framework.
All the service autowires have an error similar to:
Could not autowire. No beans of 'LineItemMapper' type found
public class OrderService {
    @Autowired
    private ItemMapper itemMapper;
    @Autowired
    private OrderMapper orderMapper;
    @Autowired
    private SequenceMapper sequenceMapper;
    @Autowired
    private LineItemMapper lineItemMapper;
I am thinking it is something I have set up incorrectly in my project.
This is the provided applicationContext from the example:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jdbc
http://www.springframework.org/schema/jdbc/spring-jdbc-3.0.xsd
http://www.springframework.org/schema/tx
http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
http://www.springframework.org/schema/aop
http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">
<!-- in-memory database and a datasource -->
<jdbc:embedded-database id="dataSource">
<jdbc:script location="classpath:database/jpetstore-hsqldb-schema.sql"/>
<jdbc:script location="classpath:database/jpetstore-hsqldb-dataload.sql"/>
</jdbc:embedded-database>
<!-- transaction manager, use JtaTransactionManager for global tx -->
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<!-- enable component scanning (beware that this does not enable mapper scanning!) -->
<context:component-scan base-package="org.mybatis.jpetstore.service" />
<!-- enable autowire -->
<context:annotation-config />
<!-- enable transaction demarcation with annotations -->
<tx:annotation-driven />
<!-- define the SqlSessionFactory -->
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="typeAliasesPackage" value="org.mybatis.jpetstore.domain" />
</bean>
<!-- scan for mappers and let them be autowired -->
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="basePackage" value="org.mybatis.jpetstore.persistence" />
</bean>
</beans>
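For reference, the mapper interfaces that the MapperScannerConfigurer should pick up from that basePackage look roughly like this (a sketch; the actual method names in the sample may differ):

package org.mybatis.jpetstore.persistence;

import java.util.List;
import org.mybatis.jpetstore.domain.LineItem;

public interface LineItemMapper {
    // MyBatis-Spring registers a proxy-backed bean for each such interface at runtime,
    // which is what the @Autowired fields in the services receive
    List<LineItem> getLineItemsByOrderId(int orderId);
    void insertLineItem(LineItem lineItem);
}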
This is a zip of the project: https://www.dropbox.com/s/lohr3udnm0oa2hn/mybatis-jpetstore-6.0.1-sources.zip
Hopefully someone can point me to what I am doing incorrectly.
Thanks for any help.
The first thing to do is to try the latest version of IntelliJ, released earlier this week. It has improved support for Spring.
The underlying problem is that you need to set up IntelliJ's Spring facet and assign the application context groups.