How can I ensure that my Axon Saga is notified of events which match the associationProperty?

I have a project which I am developing with Axon, but my Saga is not properly executing.
My Saga contains lines similar to this:
@StartSaga
@SagaEventHandler(associationProperty = "uuid")
public void handle(FirstEvent event) {
    System.out.println("Processing FirstEvent for uuid=" + event.getUuid());
    associateWith("uuid", event.getUuid().toString());
    initialiseWorkflow(event.getUuid(), Status.CREATED);
}

@SagaEventHandler(associationProperty = "uuid")
public void handle(SecondEvent event) {
    System.out.println("Processing SecondEvent for uuid=" + event.getUuid());
    this.processStep(STEP_2, event.getUuid());
}
The FirstEvent triggers the start of the saga and also triggers the initialiseWorkflow tasks (which correctly create a set of additional steps).
However, when the SecondEvent arrives (with the same UUID association value as the FirstEvent), the saga does not pick up that second event.
I have tried explicitly adding the following line to reinforce the association, but that did not work either:
associateWith("uuid", event.getUuid().toString());
Ironically, I have a test case using the Axon test framework which works correctly; it is similar to this:
@Test
public void testSecondEvent() {
    fixture.givenAggregate(uuid).published(new FirstEvent(uuid))
           .whenAggregate(uuid).publishes(new SecondEvent(uuid))
           .expectDispatchedCommandsMatching(exactSequenceOf(
                   new CompleteTaskCommandMatcher("SecondEvent")));
}
The problem occurs in my end-to-end tests, in which I put commands directly onto the CommandGateway and check the results directly in the repository.
I have double-checked that the AnnotatedSagaManager is being used, and it is.
Does anyone have any ideas on what could be wrong, or have I misunderstood how Sagas should work?
EDIT:
A few more updates:
1) I noticed that I needed to use toString() when directly associating the UUID, so I tried making the event's value a String instead - no progress.
2) I tried printing out the associated values, and saw that the explicit association line is not required (the uuid is associated during the saga start).
3) I tried putting @StartSaga on the SecondEvent handler; this reached the "Processing SecondEvent ..." code, but in a new saga.
More understanding, but no solution yet!

I have found the cause of the problem...
I based my configuration on the Mongo profile for the AxonTrader sample app.
However, the AxonTrader persistence-infrastructure-context.xml (shown next) contains a flaw:
<beans profile="mongodb">
    <bean id="mongoSpringTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
        <constructor-arg name="mongo" ref="mongo"/>
        <constructor-arg name="databaseName" value="axontrader"/>
    </bean>
    <bean id="mongoTemplate" class="org.axonframework.eventstore.mongo.DefaultMongoTemplate">
        <constructor-arg index="0" ref="mongo"/>
        <constructor-arg index="1" value="axontrader"/>
        <constructor-arg index="2" value="domainevents"/>
        <constructor-arg index="3" value="snapshotevents"/>
        <constructor-arg index="4"><null/></constructor-arg>
        <constructor-arg index="5"><null/></constructor-arg>
    </bean>
    <bean id="mongoSagaTemplate" class="org.axonframework.saga.repository.mongo.DefaultMongoTemplate">
        <constructor-arg index="0" ref="mongo"/>
        <constructor-arg index="1" value="axontrader"/>
        <constructor-arg index="2" value="snapshotevents"/>
        <constructor-arg index="3"><null/></constructor-arg>
        <constructor-arg index="4"><null/></constructor-arg>
    </bean>
    <mongo:mongo id="mongo" host="127.0.0.1" port="27017"/>
</beans>
As you can see from the snippet above, the eventStore and the sagaRepository both use "snapshotevents" as a collection name. However, the snapshot-events collection is relevant to the eventStore only, and sharing it appears to cause a conflict with the sagaRepository.
When I change this value to "sagas" for the sagaRepository, everything falls properly into place!
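For reference, here is the corrected sagaRepository template with a dedicated collection (the name "sagas" is my choice; any collection not shared with the event store should work):
<bean id="mongoSagaTemplate" class="org.axonframework.saga.repository.mongo.DefaultMongoTemplate">
    <constructor-arg index="0" ref="mongo"/>
    <constructor-arg index="1" value="axontrader"/>
    <!-- A dedicated collection instead of the event store's "snapshotevents" -->
    <constructor-arg index="2" value="sagas"/>
    <constructor-arg index="3"><null/></constructor-arg>
    <constructor-arg index="4"><null/></constructor-arg>
</bean>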

Related

Job Stealing Configuration not working in Apache Ignite

I have the following configuration file
<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="peerClassLoadingEnabled" value="true"/>
    <property name="includeEventTypes">
        <list>
            <!-- Task execution events -->
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
        </list>
    </property>
    <property name="metricsUpdateFrequency" value="10000"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                    <property name="addresses">
                        <list>
                            <!-- In a distributed environment, replace with actual host IP addresses. -->
                            <value>127.0.0.1:47500..47509</value>
                            <value>127.0.0.1:48500..48509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <!-- Enabling the required Failover SPI. -->
    <property name="failoverSpi">
        <bean class="org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi"/>
    </property>
    <property name="collisionSpi">
        <bean class="org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi">
            <property name="activeJobsThreshold" value="50"/>
            <property name="waitJobsThreshold" value="0"/>
            <property name="messageExpireTime" value="1000"/>
            <property name="maximumStealingAttempts" value="10"/>
            <property name="stealingEnabled" value="true"/>
        </bean>
    </property>
</bean>
The closure gets executed across the server nodes in the grid as expected.
However, when we add a new node to the grid while the closure is executing, the existing nodes acknowledge the addition of the new node, but the closure is not distributed to the newly added node.
Below is my closure implementation:
@Override
public AccruedSimpleInterest apply(SimpleInterestParameter simpleInterestParameter) {
    BigDecimal si = simpleInterestParameter.getPrincipal()
            .multiply(new BigDecimal(simpleInterestParameter.getYears()))
            .multiply(new BigDecimal(simpleInterestParameter.getRate()))
            .divide(SimpleInterestClosure.HUNDRED);
    System.out.println("Calculated SI for id=" + simpleInterestParameter.getId() + " SI=" + si.toPlainString());
    return new AccruedSimpleInterest(si, simpleInterestParameter);
}
Below is the main class:
public static void main(String... args) throws IgniteException, IOException {
    Factory<SimpleInterestClosure> siClosureFactory = FactoryBuilder.factoryOf(new SimpleInterestClosure());
    ClassPathResource ress = new ClassPathResource("example-ignite-poc.xml");
    File file = new File(ress.getPath());
    try (Ignite ignite = Ignition.start(file.getPath())) {
        System.out.println("Started Ignite Cluster");
        IgniteFuture<Collection<AccruedSimpleInterest>> igniteFuture = ignite.compute()
                .applyAsync(siClosureFactory.create(), createParamCollection());
        Collection<AccruedSimpleInterest> res = igniteFuture.get();
        System.out.println(res.size());
    }
}
As far as my understanding goes, the Job Stealing SPI requires you to implement some additional APIs in order to work.
Please see this discussion on the user list:
Some remarks about the job stealing SPI:
1) You have some nodes that can process the tasks of a compute job.
2) Tasks will be executed in the public thread pool by default: https://apacheignite.readme.io/docs/thread-pools#section-public-pool
3) If some node's thread pool is busy, then a task of the compute job can be executed on another node.
It will not work in the following cases:
1) If you choose a specific node for your compute task.
2) If you do an affinity call (the same as above, but the node will be chosen by the affinity mapping).
A sketch of the first case follows.
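As an illustration of the first case, pinning the computation to one explicitly chosen node looks roughly like this (a sketch reusing the closure factory from the question; simpleInterestParameter stands in for a real argument):
// Sketch: route the computation to a single, explicitly chosen node.
// Jobs mapped to a fixed node like this cannot be stolen by other nodes.
ClusterNode target = ignite.cluster().forRemotes().nodes().iterator().next();
IgniteCompute pinned = ignite.compute(ignite.cluster().forNode(target));
AccruedSimpleInterest result = pinned.apply(siClosureFactory.create(), simpleInterestParameter);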

Number of connections and channels

I am new to RabbitMQ and Spring. I want to know how to manage the number of connections and channels.
In my architecture there are 2 queues, to which messages are published from a single producer based on the routing key of a direct exchange. As per my understanding, I would need a single persistent connection with 2 channels, and messages would be published through them. I assumed this is managed by Spring automatically. But a connection, consisting of a single channel, is created every time a message is published.
- How do I manage the channels and connections? Is it the right approach to create a single channel for each queue on a connection? If the number of queues increases to 10, should 10 channels be used in a single connection?
Configuration File:
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="username" value="test"/>
<property name="password" value="test"/>
<property name="host" value="50.16.11.22"/>
<property name="port" value="5672"/>
</bean>
<bean id="publisher" class="com.test.code.Publisher">
<constructor-arg ref="amqpTemplate"></constructor-arg>
</bean>
<bean id="amqpTemplate" class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="mandatory" value="true"></property>
<property name="exchange" value="x.direct"></property>
</bean>
<rabbit:admin connection-factory="connectionFactory" />
<rabbit:queue name="q.queue1" />
<rabbit:queue name="q.queue2" />
<rabbit:direct-exchange name="x.direct">
<rabbit:bindings>
<rabbit:binding queue="q.queue1" key="key1" />
<rabbit:binding queue="q.queue2" key="key2" />
</rabbit:bindings>
</rabbit:direct-exchange>
</beans>
This is my Publisher class:
public class Publisher {

    private final RabbitTemplate amqpTemplate;

    public Publisher(RabbitTemplate amqpTemplate) {
        this.amqpTemplate = amqpTemplate;
    }

    public void messageToQueue1(JSONObject message) {
        amqpTemplate.convertAndSend("key1", message.toString());
    }

    public void messageToQueue2(JSONObject message) {
        amqpTemplate.convertAndSend("key2", message.toString());
    }
}
"But a connection, consisting of a single channel, is created every time a message is published."
That is not true. There is also no dedicated channel for each routing key.
The CachingConnectionFactory maintains a single persistent connection (by default), and channels are cached.
The first publish creates a channel and puts it in the cache; the next publish gets it from the cache. Only if the cache is empty (because a channel is already in use) is a new channel created, and then you'll end up with 2 cached channels.
You'll only get as many channels as you need concurrently.
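If you want to tune that behavior, the CachingConnectionFactory exposes the channel cache size; a sketch of the question's bean with that property added (the value 10 is only illustrative):
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
    <property name="username" value="test"/>
    <property name="password" value="test"/>
    <property name="host" value="50.16.11.22"/>
    <property name="port" value="5672"/>
    <!-- Number of idle channels to keep cached; the default varies by Spring AMQP version -->
    <property name="channelCacheSize" value="10"/>
</bean>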

Spring - ActiveMQ - Durable Subscription - Close Connection and Resubscribe to get the offline messages

I want to implement a solution in Spring JMS with ActiveMQ where I create a durable subscription to a topic. The purpose is that if a subscriber closes the subscription for a while and later recreates the durable subscription with the same client id and subscription name, it should receive all the messages that were delivered while the subscription was closed.
I want to implement the following logic mentioned in the ORACLE URL for durable subscriptions: https://docs.oracle.com/cd/E19798-01/821-1841/bncgd/index.html
But I am unable to do this using spring-jms. As per the URL, I need to get a MessageConsumer instance and call close() on it to temporarily stop receiving messages from the topic, but I am not sure how to get it.
Following is my configuration. Kindly let me know how to modify the configuration to perform this.
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:jms="http://www.springframework.org/schema/jms"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd">
    <bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
          p:userName="admin"
          p:password="admin"
          p:brokerURL="tcp://127.0.0.1:61616"
          primary="true"/>
    <bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer"
          p:durableSubscriptionName="gxaa-durable1" p:clientId="gxaa-client1">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="destination" ref="adiTopic"/>
        <property name="messageListener" ref="adiListener"/>
    </bean>
    <bean id="configTemplate" class="org.springframework.jms.core.JmsTemplate"
          p:connectionFactory-ref="connectionFactory"
          p:defaultDestination-ref="adiTopic" primary="true"
          p:pubSubDomain="true"/>
    <bean id="adiTopic" class="org.apache.activemq.command.ActiveMQTopic" p:physicalName="gcaa.adi.topic"/>
    <bean id="adiListener" class="com.gcaa.asset.manager.impl.AdiListener"/>
Why not call DefaultMessageListenerContainer.stop() to stop the container and its consumers?
You can inject the jmsContainer into another bean, stop it when you want, and call start() later.
All messages sent to the broker while your durable consumer is offline will be stored until it reconnects.
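A minimal sketch of such a controlling bean (the class and method names are mine, not from the question):
public class SubscriptionController {

    private final DefaultMessageListenerContainer container;

    public SubscriptionController(DefaultMessageListenerContainer container) {
        this.container = container;
    }

    // Stop consuming; the broker keeps messages for the durable subscription.
    public void pause() {
        container.stop();
    }

    // Resume; messages stored while offline are then delivered.
    public void resume() {
        container.start();
    }
}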
To make the subscription durable, you need to add this to the jmsContainer bean:
<property name="subscriptionDurable" value="true" />
<property name="cacheLevel" value="1" />
You can add a subscriptionName; otherwise the class name of the specified message listener will be used.
You can add a clientID to the connectionFactory
<property name="clientID" value="${jms.clientId}" />
or use
<bean class="org.springframework.jms.connection.SingleConnectionFactory"
id="singleConnectionFactory">
<constructor-arg
ref="connectionFactory" />
<property name="reconnectOnException" value="true" />
<property name="clientId" value="${jms.clientId}" />
</bean>
and update jmsContainer
<bean id="jmsContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:durableSubscriptionName="gxaa-durable1" p:clientId="gxaa-client1">
<property name="connectionFactory" ref="singleConnectionFactory" />
<property name="destination" ref="adiTopic" />
<property name="messageListener" ref="adiListener" />
<property name="subscriptionDurable" value="true" />
<property name="cacheLevel" value="1" />
</bean>
UPDATE:
If your adiListener implements org.springframework.jms.listener.SessionAwareMessageListener, it has to define the method onMessage(M message, Session session); once you have the session, you can call javax.jms.Session.unsubscribe(String subscriptionName).
The subscriptionName is the one defined above and can be injected into this bean, or the class name of the specified message listener can be used. A sketch of such a listener follows.
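A sketch of what that could look like (the listener body and the commented-out unsubscribe call are illustrative; note that JMS does not allow unsubscribing while a consumer is still active on the subscription):
public class AdiListener implements SessionAwareMessageListener<TextMessage> {

    private final String subscriptionName;

    public AdiListener(String subscriptionName) {
        this.subscriptionName = subscriptionName;
    }

    @Override
    public void onMessage(TextMessage message, Session session) throws JMSException {
        System.out.println("Received: " + message.getText());
        // Only call this after the consumer is stopped, and only when you
        // really want to end the durable subscription:
        // session.unsubscribe(subscriptionName);
    }
}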

Fanout exchange behaving as Direct exchange in Spring AMQP

I am facing an issue while using RabbitMQ with a fanout exchange, which for some unknown reason is behaving like a direct exchange.
I am using the following binding and queue configuration:
<bean id="testfanout"
class="com.test">
<constructor-arg name="exchange" ref="test" />
<constructor-arg name="routingKey" value="test" />
<constructor-arg name="queue" value="testQ" />
<constructor-arg name="template">
<bean class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<constructor-arg ref="connectionFactory" />
</bean>
</constructor-arg>
<constructor-arg value="true"/>
</bean>
<rabbit:fanout-exchange name="test" id="test">
<rabbit:bindings>
<rabbit:binding queue="test"/>
</rabbit:bindings>
</rabbit:fanout-exchange>
Now we have the same code listening to the same testQ on two different VMs, but each message is delivered to only one VM's listener, in round-robin fashion.
Sender code:
channel = ...
RabbitTemplate template = null;
if (channel != null) {
    template = channel.getTemplate();
    if (template != null) {
        template.setQueue(channel.getQueue());
        template.setExchange(channel.getExchange().getName());
        template.convertAndSend(channel.getRoutingKey(), txtMsg);
    }
}
The routing key is ignored for a fanout exchange.
Are you sure it's actually a fanout exchange in RabbitMQ? I don't see a RabbitAdmin in your configuration (which is what would attempt to declare the exchange and binding).
Look at your exchange in the RabbitMQ management UI and check its type and bindings.
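If the admin is indeed missing, declaring one is a one-liner in the rabbit namespace already used in the question's configuration; without it, the exchanges, queues, and bindings defined in the context are never declared on the broker:
<rabbit:admin connection-factory="connectionFactory" />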

Use JAAS for LDAP password with Spring security

I have a Java EE web application which uses LDAP authentication. I use Spring Security to connect to my LDAP with the following code:
<bean id="ldapContextSource" class="com.myapp.security.authentication.MySecurityContextSource">
<constructor-arg index="0" value="${ldap.url}" />
<constructor-arg index="1" ref="userConnexion" />
</bean>
<security:authentication-manager alias="authenticationManager">
<security:authentication-provider ref="ldapAuthProvider" />
</security:authentication-manager>
<bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
<constructor-arg value="${ldap.authJndiAlias}" />
</bean>
<bean id="ldapAuthProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider">
<constructor-arg>
<bean class="org.springframework.security.ldap.authentication.BindAuthenticator">
<constructor-arg ref="ldapContextSource" />
<property name="userSearch" ref="userSearch" />
</bean>
</constructor-arg>
<constructor-arg>
<bean class="com.myapp.security.authentication.MyAuthoritiesPopulator" >
<property name="userService" ref="userService" />
</bean>
</constructor-arg>
<property name="userDetailsContextMapper" ref="myUserDetailsContextMapper"/>
<property name="hideUserNotFoundExceptions" value="false" />
</bean>
Actually, my WebsphereCredentials bean uses a WebSphere private class, WSMappingCallbackHandlerFactory, as in this response: How to access authentication alias from EJB deployed to Websphere 6.1
We can see it in the official WebSphere documentation: http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Frsec_pluginj2c.html
But I don't want this because:
I think my application can access all JAAS logins in my WebSphere instance (not sure).
This class is defined in the huge IBM client library com.ibm.ws.admin.client-7.0.0.jar (42 MB) => slower compilation, and it is not present in my enterprise Nexus.
It's not portable, not standard.
For information, I define the WebsphereCredentials constructor like this:
Map<String, String> map = new HashMap<String, String>();
map.put(Constants.MAPPING_ALIAS, this.jndiAlias);
Subject subject;
try {
    CallbackHandler callbackHandler = WSMappingCallbackHandlerFactory.getInstance().getCallbackHandler(map, null);
    LoginContext lc = new LoginContext("DefaultPrincipalMapping", callbackHandler);
    lc.login();
    subject = lc.getSubject();
} catch (NotImplementedException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
} catch (LoginException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
}
PasswordCredential cred = (PasswordCredential) subject.getPrivateCredentials().toArray()[0];
this.user = cred.getUserName();
this.password = String.valueOf(cred.getPassword());
Is there a way to use just Spring Security and remove this dependency?
I have no idea how to combine http://static.springsource.org/spring-security/site/docs/3.1.x/reference/jaas.html and http://static.springsource.org/spring-security/site/docs/3.1.x/reference/ldap.html.
Or maybe I must change my approach entirely and use another way?
I assume your goal is simply to utilize the username/password that you configure in WebSphere to connect to the LDAP directory? If so, you are not really trying to combine LDAP and JAAS based authentication. The JAAS support is really intended as a way of using JAAS LoginModules to authenticate a user instead of using LDAP based authentication.
If you want to obtain the username and password without a compile time dependency on WebSphere, you have a few options.
Eliminating Compile Time and Runtime Dependencies on WAS
One option is to configure the password in a different way. This could be as simple as using the password directly in the configuration file, as shown in the Spring Security LDAP documentation:
<bean id="ldapContextSource"
class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
<constructor-arg value="ldap://monkeymachine:389/dc=springframework,dc=org"/>
<property name="userDn" value="cn=manager,dc=springframework,dc=org"/>
<property name="password" value="password"/>
</bean>
You could also configure the username and password in JNDI. Another alternative is to use a .properties file with a property placeholder, as sketched below. If you want to ensure the password is secured, you will probably want to encrypt it using something like Jasypt.
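A minimal property placeholder sketch (the file name and property keys are illustrative; this assumes the Spring context namespace is declared):
<context:property-placeholder location="classpath:ldap.properties"/>

<bean id="ldapContextSource"
      class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
    <constructor-arg value="${ldap.url}"/>
    <property name="userDn" value="${ldap.userDn}"/>
    <property name="password" value="${ldap.password}"/>
</bean>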
Eliminating Compile Time Dependencies and Still Configuring with WAS
If you need or want to use WebSphere's J2C support for storing the credentials, then you can do so by injecting the CallbackHandler instance. For example, your WebsphereCredentials bean could look something like this:
try {
    LoginContext lc = new LoginContext("DefaultPrincipalMapping", this.callbackHandler);
    lc.login();
    subject = lc.getSubject();
} catch (NotImplementedException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
} catch (LoginException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
}
PasswordCredential cred = (PasswordCredential) subject.getPrivateCredentials().toArray()[0];
this.user = cred.getUserName();
this.password = String.valueOf(cred.getPassword());
Your configuration would then look something like this:
<bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
<constructor-arg ref="wasCallbackHandler"/>
</bean>
<bean id="wasCallbackHandler"
factory-bean="wasCallbackFactory"
factory-method="getCallbackHandler">
<constructor-arg>
<map>
<entry
value="${ldap.authJndiAlias}">
<key>
<util:constant static-field="com.ibm.wsspi.security.auth.callback.Constants.MAPPING_ALIAS"/>
</key>
</entry>
</map>
</constructor-arg>
<constructor-arg>
<null />
</constructor-arg>
</bean>
<bean id="wasCallbackFactory"
class="com.ibm.wsspi.security.auth.callback.WSMappingCallbackHandlerFactory"
factory-method="getInstance" />
Disclaimer
CallbackHandler instances are not thread safe and generally should not be used more than once, so it can be a bit risky to inject CallbackHandler instances as member variables. You may want to program in a check to ensure that the CallbackHandler is only used one time; a sketch follows.
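One simple way to enforce that is a single-use guard around the injected handler (a sketch; the names are mine, and this would live inside the bean holding the handler):
// Fails fast if the injected CallbackHandler is handed out more than once.
private final AtomicBoolean handlerUsed = new AtomicBoolean(false);

private CallbackHandler takeCallbackHandler() {
    if (!handlerUsed.compareAndSet(false, true)) {
        throw new IllegalStateException("CallbackHandler may only be used once");
    }
    return this.callbackHandler;
}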
Hybrid Approach
You could take a hybrid approach that always removes the compile time dependency and also allows you to remove the runtime dependency in environments where you are not running on WebSphere. This can be done by combining the two suggestions above and using Spring Bean Definition Profiles to differentiate between WebSphere and non-WebSphere machines, as sketched below.
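A rough sketch of that split using the profile attribute (the profile names and the non-WebSphere credentials class are illustrative, not part of the original configuration):
<beans profile="websphere">
    <!-- J2C-backed credentials via the factory beans shown above -->
    <bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
        <constructor-arg ref="wasCallbackHandler"/>
    </bean>
</beans>
<beans profile="standalone">
    <!-- Hypothetical properties-backed credentials for non-WebSphere environments -->
    <bean id="userConnexion" class="com.myapp.util.security.PropertiesCredentials"/>
</beans>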