I have an embedded ActiveMQ broker in my app which needs a DataSource. I was using Tomcat CP (for several years) and am trying to switch to HikariCP. The startAsync flag for the MQ broker is set to false.
While configuring Hikari, if I set autoCommit to false, the ActiveMQ broker hangs during startup.
However, if I set autoCommit to true, there is no issue and the broker comes up just fine.
On the other hand, if I start ActiveMQ with the startAsync flag set to true, the broker comes up with no issues, even though autoCommit in Hikari is set to false.
Here is an interesting tidbit: it appears that with Tomcat CP, autoCommit defaults to whatever the underlying driver uses. In my case that is the Oracle driver, and its default appears to be autoCommit = true.
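That matches the JDBC spec: a freshly obtained driver connection starts in auto-commit mode unless the pool overrides it. A minimal sketch to confirm what the raw Oracle driver hands out (URL and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;

public class AutoCommitCheck {
    public static void main(String[] args) throws Exception {
        // Bypass the pool and ask the driver directly for its default.
        try (Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password")) {
            System.out.println("driver default autoCommit = " + c.getAutoCommit());
        }
    }
}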
ActiveMQ version is 5.16.0 and HikariCP is 3.4.5.
The HikariCP config is:
<bean id="dataSourceForBroker" class="com.zaxxer.hikari.HikariDataSource" destroy-method="close">
    <property name="driverClassName" value="${driverClassName}"/>
    <property name="jdbcUrl" value="${url}"/>
    <property name="username" value="${dbusername}"/>
    <property name="password" value="${password}"/>
    <property name="maximumPoolSize" value="${dbSessionMaxActive}" />
    <property name="autoCommit" value="false"/>
    <property name="minimumIdle" value="10" />
    <property name="maxLifetime" value="30000" />
    <property name="connectionTimeout" value="60000" />
</bean>
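If the JDBC persistence adapter turns out to require auto-commit connections, one possible workaround (a sketch under that assumption, not a confirmed fix) is to give the broker a dedicated pool with autoCommit=true while the rest of the application keeps autoCommit=false. All values below are placeholders mirroring the ${...} properties above:
import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class BrokerPool {
    static DataSource createBrokerDataSource() {
        HikariConfig cfg = new HikariConfig();
        cfg.setDriverClassName("oracle.jdbc.OracleDriver");
        cfg.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE");
        cfg.setUsername("dbuser");
        cfg.setPassword("secret");
        cfg.setMaximumPoolSize(5);  // the broker only needs a few connections
        cfg.setAutoCommit(true);    // leave auto-commit on for the broker pool only
        return new HikariDataSource(cfg);
    }
}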
The snippet that creates the broker is:
public void createBroker() {
    BrokerService broker = new BrokerService();
    broker.setBrokerId("myBroker");
    broker.setBrokerName("myBroker");
    broker.setPersistent(true);
    broker.setUseJmx(true);
    broker.setUseShutdownHook(true);
    broker.setStartAsync(false);
    broker.requestRestart();
    broker.getManagementContext().setConnectorPort(9999);
    broker.addShutdownHook(() -> {
        brokerStarted = false;
        logger.info("Active MQ broker was shutdown");
    });
    ManagementContext managementContext = new ManagementContext();
    managementContext.setCreateConnector(false);
    broker.setManagementContext(managementContext);
    JDBCPersistenceAdapter jdbcPersistenceAdapter = new JDBCPersistenceAdapter();
    jdbcPersistenceAdapter.setDataSource(dataSource);
    LeaseDatabaseLocker leaseDatabaseLocker = new LeaseDatabaseLocker();
    try {
        broker.setPersistenceAdapter(jdbcPersistenceAdapter);
        jdbcPersistenceAdapter.setLockKeepAlivePeriod(5000);
        leaseDatabaseLocker.setLockAcquireSleepInterval(10000);
        jdbcPersistenceAdapter.setLocker(leaseDatabaseLocker);
        broker.addConnector("vm://hexgenBroker");
        broker.start();
        brokerStarted = true;
    } catch (Exception e) {
        throw new RuntimeException("Broker failed -", e);
    }
}
Any pointers on why this is happening? I am a bit lost here.
Thanks in advance,
-anish
When I start a remote compute job with call() or affinityCall(), the remote server creates 6 threads, and these threads never exit. A VisualVM snapshot (not reproduced here) shows threads named from "utility-#153%null%" to "marshaller-cache-#14i%null%" that never end.
If the client runs over and over again, the number of threads on the server node increases rapidly. As a result, the server node runs out of memory.
How can I close these threads when the client has closed?
Maybe I am not running the client the right way.
Client Code
String cacheKey = "jobIds";
String cname = "myCacheName";
ClusterGroup rmts = getIgnite().cluster().forRemotes();
IgniteCache<String, List<String>> cache = getIgnite().getOrCreateCache(cname);
List<String> jobList = cache.get(cacheKey);
Collection<String> res = ignite.compute(rmts).apply(
    new IgniteClosure<String, String>() {
        @Override
        public String apply(String word) {
            return word;
        }
    },
    jobList
);
getIgnite().close();
System.out.println("ignite Closed");
if (res == null) {
    System.out.println("Error: Result is null");
    return;
}
res.forEach(s -> {
    System.out.println(s);
});
System.out.println("Finished!");
getIgnite() returns the Ignite instance:
public static Ignite getIgnite() {
    if (ignite == null) {
        System.out.println("RETURN INSTANCE ..........");
        Ignition.setClientMode(true);
        ignite = Ignition.start(confCache);
        // Note: the deployment mode should be set on the IgniteConfiguration
        // before Ignition.start(); changing it afterwards has no effect.
        ignite.configuration().setDeploymentMode(DeploymentMode.CONTINUOUS);
    }
    return ignite;
}
Server config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Alter configuration below as needed. -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="peerClassLoadingMissedResourcesCacheSize" value="0"/>
        <property name="publicThreadPoolSize" value="64"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>172.22.1.72:47500..47509</value>
                                <value>172.22.1.100:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="memoryMode" value="ONHEAP_TIERED"/>
                <property name="backups" value="0"/>
                <property name="offHeapMaxMemory" value="0"/>
                <property name="swapEnabled" value="false"/>
            </bean>
        </property>
    </bean>
</beans>
These thread pools are static, and the number of threads in them never depends on load (number of executed operations, jobs, etc.). Having said that, I don't think they are the reason for the OOME, unless you somehow start a new node within the same JVM for each executed job.
I would also recommend always reusing an existing node that is already started in the JVM. Starting a new one and closing it for each job is bad practice.
Threads are created in thread pools, so you can set their sizes in IgniteConfiguration: setUtilityCachePoolSize(int) and setMarshallerCachePoolSize(int) for Ignite 1.5, setMarshallerCacheThreadPoolSize(int) for Ignite 1.7, and others.
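A minimal sketch of that advice, assuming a client-mode Spring XML config like the one above (the config path and job payloads are placeholders): start one client node, push every job through it, and close it only when the client process shuts down.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterGroup;

public class ReusedClient {
    public static void main(String[] args) {
        Ignition.setClientMode(true);
        // One client node for the life of the process.
        try (Ignite ignite = Ignition.start("config/client.xml")) {
            ClusterGroup rmts = ignite.cluster().forRemotes();
            for (int i = 0; i < 100; i++) {
                // Reuse the same node for every job instead of start/close per job.
                String echoed = ignite.compute(rmts).apply(w -> w, "job-" + i);
                System.out.println(echoed);
            }
        } // close() happens exactly once, here
    }
}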
I am using Spring Integration AMQP 4.1.2, Spring Rabbit 1.4.3, Spring AMQP 1.4.3, and amqp-client-3.6.5.jar to publish messages to RabbitMQ server 3.5.3.
As part of negative testing, I am sending messages to a non-existent exchange.
I have a negative acknowledgement handler configured using Spring Integration AMQP. This handler gets invoked with the failed message, and the message even contains the reason for the negative acknowledgement.
Everything is perfect up to here.
I need to retry the failed message as part of a requirement, so the negative acknowledgement handler retries publishing the same message.
At this point, the Java RabbitMQ client (amqp-client-3.6.5.jar) issues the command "Channel.Open" to the RabbitMQ server, but the call blocks indefinitely: the AMQP connection thread waits forever on the BlockingValueOrException object that is responsible for notifying it, and the client never receives a response to "Channel.Open". Yet I can see in the admin console that a new channel was created on the RabbitMQ server.
Why is my "Channel.Open" call getting blocked? Did the RabbitMQ server fail to send a response to the command "Channel.Open"?
How can I inspect the command requests and responses passed between the Java RabbitMQ client and the RabbitMQ server? Is there a plugin that needs to be installed on the RabbitMQ server?
Please help me in this regard. Configuration information is below.
Spring Integration AMQP configuration that publishes messages and registers the ack/nack and return handlers:
<!-- AMQP/RMQ Publishing -->
<int-amqp:outbound-channel-adapter
    default-delivery-mode="PERSISTENT"
    exchange-name="${prism.rabbitmq.exchange}"
    routing-key-expression="headers['${prism.rabbitmq.message.header.routingKey}']"
    amqp-template="amqpTemplate"
    mapped-request-headers="*"
    channel="outgoingRabbit"
    confirm-ack-channel="successfullyPublishedChannel"
    confirm-nack-channel="mailPublishingExceptionChannel"
    confirm-correlation-expression="#this"
    lazy-connect="false"
    return-channel="mailPublishingExceptionChannel"/>

<!-- AMQP client connection factory -->
<bean id="amqpClientConnectionFactory" class="com.rabbitmq.client.ConnectionFactory">
    <property name="uri" value="amqp://guest:guest@127.0.0.1:5672" />
    <property name="automaticRecoveryEnabled" value="true" />
</bean>

<rabbit:connection-factory id="amqpConnectionFactory"
    host="127.0.0.1" connection-factory="amqpClientConnectionFactory"
    publisher-confirms="true" publisher-returns="true" channel-cache-size="5"/>

<rabbit:template id="amqpTemplate" connection-factory="amqpConnectionFactory"
    exchange="${prism.rabbitmq.exchange}" retry-template="retryTemplate" mandatory="true"/>

<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <property name="retryPolicy">
        <bean class="org.springframework.retry.policy.SimpleRetryPolicy">
            <property name="maxAttempts" value="4" />
        </bean>
    </property>
    <property name="backOffPolicy">
        <bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
            <property name="initialInterval" value="1000" />
            <property name="multiplier" value="5.0" />
            <property name="maxInterval" value="60000" />
        </bean>
    </property>
</bean>
Negative Acknowledgement Handler configuration
<int:service-activator input-channel="mailPublishingExceptionChannel" ref="mailPublishingExceptionHandler" method="handleError" />
The negative acknowledgement handler class's handle method:
@Autowired
@Qualifier("outgoingRabbit")
private MessageChannel outgoingRabbit;

@Override
public void handleError(Message<?> genMessage) {
    try {
        // Retry: get the failed RMQ message, whose payload is JSON and
        // which carries the message headers as well.
        Message failedRMQMessage = (Message) genMessage.getPayload();
        MessageBuilder rmqMessageWithRetry = MessageBuilder.withPayload(failedRMQMessage.getPayload());
        rmqMessageWithRetry.copyHeaders(failedRMQMessage.getHeaders());
        // This call publishes the payload again.
        new MessagingTemplate().send(outgoingRabbit, rmqMessageWithRetry.build());
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
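A hedged observation rather than a confirmed diagnosis: confirm/nack callbacks are delivered on the AMQP connection's thread, and issuing a synchronous operation such as opening a new channel from that same thread can block forever, because the thread that would read the Channel.Open-Ok reply is the one doing the waiting. A sketch of handing the retry off to another thread (the executor field is an addition, not part of the original code):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

private final ExecutorService retryExecutor = Executors.newSingleThreadExecutor();

@Override
public void handleError(Message<?> genMessage) {
    Message<?> failed = (Message<?>) genMessage.getPayload();
    // Publish from a different thread so the connection thread stays free
    // to read the broker's reply to Channel.Open.
    retryExecutor.execute(() -> new MessagingTemplate().send(outgoingRabbit,
            MessageBuilder.withPayload(failed.getPayload())
                    .copyHeaders(failed.getHeaders())
                    .build()));
}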
We're trying to set up ActiveMQ 5.9.0 as a message broker using JMS topics, but we're having some issues with the consumption of the messages.
For testing purposes, we have a simple configuration of 1 topic, 1 event producer, and 1 consumer. We send 10 messages one after the other, but every time we run the application, 1-3 of these messages are not consumed! The other messages are consumed and processed fine.
We can see in the ActiveMQ management console that all the messages were published to the topic, but they never reach the consumer, even if we restart the application (we can see that the numbers in the "Enqueue" and "Dequeue" columns differ).
EDIT: I should also mention that when using queues instead of topics, this problem does not occur.
Why is this happening? Could it have something to do with Atomikos (the transaction manager)? Or maybe something else in the configuration? Any ideas/suggestions are welcome. :)
This is the ActiveMQ/JMS Spring configuration:
<bean id="connectionFactory" class="com.atomikos.jms.AtomikosConnectionFactoryBean"
init-method="init" destroy-method="close">
<property name="uniqueResourceName" value="amq" />
<property name="xaConnectionFactory">
<bean class="org.apache.activemq.spring.ActiveMQXAConnectionFactory"
p:brokerURL="${activemq_url}" />
</property>
<property name="maxPoolSize" value="10" />
<property name="localTransactionMode" value="false" />
</bean>
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
</bean>
<!-- A JmsTemplate instance that uses the cached connection and destination -->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="cachedConnectionFactory" />
<property name="sessionTransacted" value="true" />
<property name="pubSubDomain" value="true"/>
</bean>
<bean id="testTopic" class="org.apache.activemq.command.ActiveMQTopic">
<constructor-arg value="test.topic" />
</bean>
<!-- The Spring message listener container configuration -->
<jms:listener-container destination-type="topic"
connection-factory="connectionFactory" transaction-manager="transactionManager"
acknowledge="transacted" concurrency="1">
<jms:listener destination="test.topic" ref="testReceiver"
method="receive" />
</jms:listener-container>
The producer:
@Component("producer")
public class EventProducer {

    @Autowired
    private JmsTemplate jmsTemplate;

    @Transactional
    public void produceEvent(String message) {
        this.jmsTemplate.convertAndSend("test.topic", message);
    }
}
The consumer:
@Component("testReceiver")
public class EventListener {

    @Transactional
    public void receive(String message) {
        System.out.println(message);
    }
}
The test:
@Autowired
private EventProducer eventProducer;

public void testMessages() {
    for (int i = 1; i <= 10; i++) {
        this.eventProducer.produceEvent("message" + i);
    }
}
That's the nature of JMS topics: by default, only current subscribers receive messages. You have a race condition and are sending messages before the consumer has established its subscription after the container is started. This is a common mistake in unit/integration tests with topics, where you are sending and receiving in the same application.
With newer versions of Spring, there is a method you can poll to wait until the subscriber is established (since 3.1, I think). Alternatively, you can simply wait a little while before starting to send, or you can make your subscription durable.
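For the durable-subscription option, here is a sketch using Spring's DefaultMessageListenerContainer; the client ID and subscription name are made-up values, and connectionFactory/testReceiver are assumed to be the beans from the question:
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.listener.adapter.MessageListenerAdapter;

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);        // bean from the question
container.setDestinationName("test.topic");
container.setPubSubDomain(true);
container.setSubscriptionDurable(true);                   // broker keeps messages while the subscriber is away
container.setClientId("testClient");                      // hypothetical; required for durable subscriptions
container.setDurableSubscriptionName("testSubscription"); // hypothetical
MessageListenerAdapter adapter = new MessageListenerAdapter(testReceiver);
adapter.setDefaultListenerMethod("receive");              // matches EventListener.receive(String)
container.setMessageListener(adapter);
container.afterPropertiesSet();
container.start();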
I have a Java EE web application which uses LDAP authentication. I use Spring Security to connect to my LDAP with the following code:
<bean id="ldapContextSource" class="com.myapp.security.authentication.MySecurityContextSource">
<constructor-arg index="0" value="${ldap.url}" />
<constructor-arg index="1" ref="userConnexion" />
</bean>
<security:authentication-manager alias="authenticationManager">
<security:authentication-provider ref="ldapAuthProvider" />
</security:authentication-manager>
<bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
<constructor-arg value="${ldap.authJndiAlias}" />
</bean>
<bean id="ldapAuthProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider">
<constructor-arg>
<bean class="org.springframework.security.ldap.authentication.BindAuthenticator">
<constructor-arg ref="ldapContextSource" />
<property name="userSearch" ref="userSearch" />
</bean>
</constructor-arg>
<constructor-arg>
<bean class="com.myapp.security.authentication.MyAuthoritiesPopulator" >
<property name="userService" ref="userService" />
</bean>
</constructor-arg>
<property name="userDetailsContextMapper" ref="myUserDetailsContextMapper"/>
<property name="hideUserNotFoundExceptions" value="false" />
</bean>
Actually, my WebsphereCredentials bean uses a WebSphere private class, WSMappingCallbackHandlerFactory, as in this answer: How to access authentication alias from EJB deployed to Websphere 6.1
It appears in the official WebSphere documentation: http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Frsec_pluginj2c.html
But I don't want this, because:
I think my application could access all JAAS logins in my WebSphere instance (not sure).
The class is defined in the huge IBM client library com.ibm.ws.admin.client-7.0.0.jar (42 MB) => slower compilation, and the jar is not present in my enterprise Nexus.
It's not portable and not standard.
For information, I define the WebsphereCredentials constructor like this:
Map<String, String> map = new HashMap<String, String>();
map.put(Constants.MAPPING_ALIAS, this.jndiAlias);
Subject subject;
try {
    CallbackHandler callbackHandler = WSMappingCallbackHandlerFactory.getInstance().getCallbackHandler(map, null);
    LoginContext lc = new LoginContext("DefaultPrincipalMapping", callbackHandler);
    lc.login();
    subject = lc.getSubject();
} catch (NotImplementedException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
} catch (LoginException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
}
PasswordCredential cred = (PasswordCredential) subject.getPrivateCredentials().toArray()[0];
this.user = cred.getUserName();
this.password = String.valueOf(cred.getPassword());
Is there a way to use just Spring security and remove this dependency?
I have no idea how to combine http://static.springsource.org/spring-security/site/docs/3.1.x/reference/jaas.html and http://static.springsource.org/spring-security/site/docs/3.1.x/reference/ldap.html.
Maybe I need to change my approach entirely and use another way?
I assume your goal is simply to use the username/password that you configure in WebSphere to connect to the LDAP directory? If so, you are not really trying to combine LDAP and JAAS based authentication. The JAAS support is really intended as a way of using JAAS LoginModules to authenticate a user instead of using the LDAP based authentication.
If you want to obtain the username and password without a compile-time dependency on WebSphere, you have a few options.
Eliminating Compile Time and Runtime Dependencies on WAS
One option is to configure the password in a different way. This could be as simple as using the password directly in the configuration file, as shown in the Spring Security LDAP documentation:
<bean id="ldapContextSource"
class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
<constructor-arg value="ldap://monkeymachine:389/dc=springframework,dc=org"/>
<property name="userDn" value="cn=manager,dc=springframework,dc=org"/>
<property name="password" value="password"/>
</bean>
You could also configure the username and password in JNDI. Another alternative is to use a .properties file with a property placeholder. If you want to ensure the password is secured, you will probably want to encrypt it using something like Jasypt.
Eliminating Compile Time Dependencies While Still Configuring with WAS
If you need or want to use WebSphere's J2C support for storing the credentials, you can do so by injecting the CallbackHandler instance. For example, your WebsphereCredentials bean could look something like this:
try {
    LoginContext lc = new LoginContext("DefaultPrincipalMapping", this.callbackHandler);
    lc.login();
    subject = lc.getSubject();
} catch (NotImplementedException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
} catch (LoginException e) {
    throw new EfritTechnicalException(EfritTechnicalExceptionEnum.LOGIN_CREDENTIAL_PROBLEM, e);
}
PasswordCredential cred = (PasswordCredential) subject.getPrivateCredentials().toArray()[0];
this.user = cred.getUserName();
this.password = String.valueOf(cred.getPassword());
Your configuration would then look something like this:
<bean id="userConnexion" class="com.myapp.util.security.WebsphereCredentials">
<constructor-arg ref="wasCallbackHandler"/>
</bean>
<bean id="wasCallbackHandler"
factory-bean="wasCallbackFactory"
factory-method="getCallbackHandler">
<constructor-arg>
<map>
<entry
value="${ldap.authJndiAlias}">
<key>
<util:constant static-field="com.ibm.wsspi.security.auth.callback.Constants.MAPPING_ALIAS"/>
</key>
</entry>
</map>
</constructor-arg>
<constructor-arg>
<null />
</constructor-arg>
</bean>
<bean id="wasCallbackFactory"
class="com.ibm.wsspi.security.auth.callback.WSMappingCallbackHandlerFactory"
factory-method="getInstance" />
Disclaimer
CallbackHandler instances are not thread-safe and generally should not be used more than once, so it can be a bit risky to inject a CallbackHandler instance as a member variable. You may want to program in a check to ensure that the CallbackHandler is only used one time.
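A sketch of such a check, wrapping the injected handler in a single-use guard (the wrapper class is hypothetical):
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

public class SingleUseCallbackHandler implements CallbackHandler {

    private final CallbackHandler delegate;
    private final AtomicBoolean used = new AtomicBoolean(false);

    public SingleUseCallbackHandler(CallbackHandler delegate) {
        this.delegate = delegate;
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        // Fail fast on any second use instead of silently returning stale credentials.
        if (!used.compareAndSet(false, true)) {
            throw new IllegalStateException("CallbackHandler may only be used once");
        }
        delegate.handle(callbacks);
    }
}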
Hybrid Approach
You could take a hybrid approach that always removes the compile-time dependency and also lets you remove the runtime dependency in instances where you might not be running on WebSphere. This could be done by combining the two suggestions above and using Spring bean definition profiles to differentiate between running on WebSphere and on a non-WebSphere machine.
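A sketch of that hybrid wiring using bean definition profiles (available since Spring 3.1); the profile names and the two-argument constructor are assumptions for illustration:
import javax.security.auth.callback.CallbackHandler;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class CredentialsConfig {

    // On WebSphere: resolve credentials through the injected J2C callback handler.
    @Bean
    @Profile("websphere")
    public WebsphereCredentials wasCredentials(CallbackHandler wasCallbackHandler) {
        return new WebsphereCredentials(wasCallbackHandler);
    }

    // Elsewhere: plain properties (hypothetical constructor overload).
    @Bean
    @Profile("standalone")
    public WebsphereCredentials plainCredentials(@Value("${ldap.user}") String user,
                                                 @Value("${ldap.password}") String password) {
        return new WebsphereCredentials(user, password);
    }
}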
We use AMQ broker 5.5 and Spring 3.0 for configuring the connection factory and other components.
The connection factory we are using is PooledConnectionFactory, and part of my config looks like this:
<bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
    <property name="connectionFactory">
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL" value="some_url"/>
        </bean>
    </property>
</bean>

<!-- Spring JMS Template -->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory">
        <ref local="jmsFactory" />
    </property>
    <property name="explicitQosEnabled" value="true"/>
    <property name="timeToLive" value="86400000"/>
</bean>
A few days back our broker crashed and kept restarting with this error:
java.lang.OutOfMemoryError: requested 369384 bytes for Chunk::new. Out of swap space?
At that point, from JConsole, I could not find anything unusual with the broker, except that one of our client applications, which talks with the server (via the broker) by sending and listening to messages every minute, had created ~3000 connections (as seen in JConsole).
Once we had shut it down, everything was back to normal.
So, to avoid this, I tried closing the connection in a finally block, doing something like this:
try {
    connection = myJmsTemplate.getConnectionFactory().createConnection();
    // 1 == Session.AUTO_ACKNOWLEDGE
    session = connection.createSession(false, 1);
    String messageSelector = "JMSCorrelationID='" + correlationId + "'";
    responseConsumer = session.createConsumer(receiveDestination, messageSelector);
    LOG.info("Starting connection");
    connection.start();
    myJmsTemplate.send(sendDestination, new SimpleTextMessageCreator(
            message, receiveDestination, correlationId));
    LOG.info("Waiting for message with " + messageSelector + " for " + DEFAULT_TIMEOUT + " ms");
    TextMessage responseMessage = (TextMessage) responseConsumer.receive(DEFAULT_TIMEOUT);
} catch (SomeException e) {
    // do something
} finally {
    // With a PooledConnectionFactory, close() returns the connection to the
    // pool rather than closing the underlying TCP connection.
    responseConsumer.close();
    session.close();
    connection.close();
}
But even then I can see the connections floating around in JConsole, and they only disappear when the client app which publishes the messages is brought down.
Can someone please help me understand what's happening here and how I can close the connections after each pub/sub cycle?
Thank you in advance,
Hari
False alarm. There was another piece of code that was leaving the connection open. Closing it solved the issue.