ConcurrentConsumers not determined by DefaultMessageListenerContainer but by maximumActiveSessionsPerConnection on PooledConnectionFactory - ActiveMQ

I have the following Java class. When used with CachingConnectionFactory, it creates the number of concurrent consumers configured on the DefaultMessageListenerContainer. However, if PooledConnectionFactory is used instead of CachingConnectionFactory, it creates a number of concurrent consumers equal to maximumActiveSessionPerConnection set on the PooledConnectionFactory, rather than the number of concurrentConsumers set on the DefaultMessageListenerContainer.
How can I make sure the DefaultMessageListenerContainer uses the multiple connections/sessions provided by the PooledConnectionFactory and creates the configured number of concurrent consumers? Below is a simple example that reproduces the behaviour.
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;
import org.apache.activemq.jms.pool.PooledConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ActiveMQMainTest {

    public static void main(String[] args) {
        String brokerUrl = "tcp://localhost:61616";
        ActiveMQQueue queue = new ActiveMQQueue("request.queue");

        final ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);

        PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory();
        pooledConnectionFactory.setConnectionFactory(connectionFactory);
        pooledConnectionFactory.setCreateConnectionOnStartup(false);
        pooledConnectionFactory.setMaxConnections(5);
        pooledConnectionFactory.setMaximumActiveSessionPerConnection(100);
        pooledConnectionFactory.start();

        // CachingConnectionFactory pooledConnectionFactory = new CachingConnectionFactory(connectionFactory);

        DefaultMessageListenerContainer defaultMessageListenerContainer = new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(pooledConnectionFactory);
        defaultMessageListenerContainer.setDestination(queue);
        defaultMessageListenerContainer.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
        defaultMessageListenerContainer.setConcurrentConsumers(5);
        defaultMessageListenerContainer.setMaxConcurrentConsumers(5 * 2);
        defaultMessageListenerContainer.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
        defaultMessageListenerContainer.setSessionTransacted(true);

        JmsMessageListener messageListener = new JmsMessageListener();
        defaultMessageListenerContainer.setMessageListener(messageListener);
        defaultMessageListenerContainer.afterPropertiesSet();
        defaultMessageListenerContainer.start();

        try {
            Thread.sleep(1000 * 60 * 10); // keep the JVM alive for 10 minutes while consumers run
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The DMLC uses a shared connection by default (when there's no transaction manager). It can be disabled using:
dmlc.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
You should also normally have setSessionTransacted(true) with the DMLC to avoid the possibility of losing messages: by default the DMLC acknowledges messages before the listener is invoked, whereas with local transactions the ack doesn't go to the broker until the listener exits normally.
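Putting the two settings together, a minimal sketch (the pooled factory and listener are assumed to be the ones from the question):
import javax.jms.ConnectionFactory;
import org.apache.activemq.command.ActiveMQQueue;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class DmlcSketch {
    public static DefaultMessageListenerContainer build(ConnectionFactory pooledConnectionFactory, Object listener) {
        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(pooledConnectionFactory);
        dmlc.setDestination(new ActiveMQQueue("request.queue"));
        dmlc.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE); // no shared connection; consumers draw from the pool
        dmlc.setSessionTransacted(true); // ack is sent only after the listener returns normally
        dmlc.setConcurrentConsumers(5);
        dmlc.setMaxConcurrentConsumers(10);
        dmlc.setMessageListener(listener);
        dmlc.afterPropertiesSet();
        return dmlc;
    }
}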

Related

Register Hibernate 5 Event Listeners

I am working on a legacy non-Spring application, and it is being migrated from Hibernate 3 to Hibernate 5.6.0.Final (latest at this time). I have generally never used Hibernate Event Listeners in my work, so this is quite new to me, and I am studying these in Hibernate 5.
Currently in some test class we have defined the code this way for Hibernate 3:
protected static Configuration createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    config.setListener("pre-insert", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-update", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-delete", "com.app.server.services.db.eventlisteners.MySecurityHibernateEventListener");
    config.setListener("pre-load", "com.app.server.services.db.eventlisteners.EkoSecurityHibernateEventListener");
    return config;
}
This is obviously no longer valid, and I believe I need to create a Hibernate Integrator, which I have done.
public class MyEventListenerIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        EventListenerRegistry eventListenerRegistry = serviceRegistry.getService(EventListenerRegistry.class);
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_INSERT).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_UPDATE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_DELETE).appendListener(new MySecurityHibernateEventListener());
        eventListenerRegistry.getEventListenerGroup(EventType.PRE_LOAD).appendListener(new MySecurityHibernateEventListener());
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory, SessionFactoryServiceRegistry serviceRegistry) {
        // nothing to clean up
    }
}
So, now I believe the next step is to add this to the session via the registry builder. I am using this website to help me:
https://www.boraji.com/hibernate-5-event-listener-example
Because we were using older Hibernate 3, we had code to create our session factory as follows:
protected static SessionFactory buildSessionFactory(Database db) {
    if (db == null) {
        throw new NullPointerException("Database specifier cannot be null");
    }
    try {
        Configuration config = createSessionFactoryConfiguration(db);
        String url = config.getProperty("connection.url");
        String user = config.getProperty("connection.username");
        String password = config.getProperty("connection.password");
        try {
            String dbDriver = config.getProperty("hibernate.connection.driver_class");
            Class.forName(dbDriver);
            Connection conn = DriverManager.getConnection(url, user, password);
        } catch (SQLException error) {
            logger.info("Didn't find driver, on QA or production, so it's okay to assume we have DB connection");
            error.printStackTrace();
        }
        SessionFactory sessionFactory = config.buildSessionFactory();
        sessionFactoryConfigs.put(sessionFactory, config); // Cannot recover config from factory instance, must be stored.
        return sessionFactory;
    } catch (Throwable ex) {
        // Make sure you log the exception, as it might be swallowed
        logger.error("Initial SessionFactory creation failed.", ex);
        throw new ExceptionInInitializerError(ex);
    }
}
The link that I referred to above creates the session factory in a much different way, so I'll be testing that out to see if it works in our app.
Without Spring handling our sessions and transactions, this app codes everything by hand the way it was done before Spring, and I haven't seen that kind of code in years.
I solved this issue with help from the link I provided above. I didn't copy exactly what they did, but some of it helped. My solution is as follows:
protected static SessionFactory createSecuredDatabaseConfig() {
    Configuration config = createUnrestrictedDatabaseConfig();
    BootstrapServiceRegistry bootstrapRegistry =
            new BootstrapServiceRegistryBuilder()
                    .applyIntegrator(new EkoEventListenerIntegrator())
                    .build();
    ServiceRegistry serviceRegistry = new StandardServiceRegistryBuilder(bootstrapRegistry)
            .applySettings(config.getProperties())
            .build();
    SessionFactory sessionFactory = config.buildSessionFactory(serviceRegistry);
    return sessionFactory;
}
This was it. I tried multiple different ways to register the events without the BootstrapServiceRegistry, but none of those worked. I did have to create the integrator. What I did NOT include was the following:
MetadataSources sources = new MetadataSources(serviceRegistry)
        .addPackage("com.myproject.server.model");
Metadata metadata = sources.getMetadataBuilder().build();
// did not create the sessionFactory this way
sessionFactory = metadata.getSessionFactoryBuilder().build();
If I had gone further and used this method to create the sessionFactory, then all of my queries would have been complaining about not being able to find the parameter name, which is a separate issue.
The Hibernate Integrator and this way of creating the sessionFactory are all for the unit tests. Without registering these events, one unit test would fail, and now it doesn't. So this solves my problem for now.

How to share the Camel context between 2 different applications or WARs

I have created 2 different applications and started the Camel context in one of them. How do I use this already-started context in the second application?
I tried binding the CamelContext into JNDI and looking it up by name, but could not load the existing context.
I also tried setting a NameStrategy on the context in application 1 and reading the same name in application 2, but it looks like Camel auto-generates the name and prefix in DefaultCamelContextNameStrategy.
code snippet:
Application 1:
public static void main(String[] args) throws Exception {
    CamelContext ctx = new DefaultCamelContext();
    String camelContextId = "sample";
    ctx.setNameStrategy(new DefaultCamelContextNameStrategy(camelContextId));
    ctx.start();
}
Application 2:
public static void main(String[] args) {
    sampleRouter testobj = new sampleRouter();
    testobj.test();
}

public class sampleRouter extends RouteBuilder {

    public static CamelContext camelContext;

    public void test() {
        camelContext = getContext();
        try {
            camelContext.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void configure() {
        // routes omitted in this snippet
    }
}
Please guide me on how to get the already-started context in a different application, as I want to avoid creating a new context every time.
Why do you want to avoid having multiple CamelContexts? What goal are you trying to accomplish?
Without a clear requirement it's not easy to help you; however, I'll try to suggest a couple of ideas.
Looking at your code, you are using two different JVMs, since you have two main methods.
If your applications run in different JVMs, use a JMS message broker like ActiveMQ as the communication layer.
If you deploy the 2 WARs / applications in the same JVM, you can use two CamelContexts and have them communicate through in-JVM endpoints such as vm (the cross-context variant of seda) and direct-vm, as sketched below.
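For the same-JVM option, a minimal sketch (endpoint and class names are made up for illustration) of two CamelContexts bridged through direct-vm:
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class VmBridgeExample {
    public static void main(String[] args) throws Exception {
        // context 1: produces a message every second and sends it across the VM
        CamelContext producerCtx = new DefaultCamelContext();
        producerCtx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer:tick?period=1000")
                        .setBody(constant("ping"))
                        .to("direct-vm:bridge"); // direct-vm crosses CamelContext boundaries inside one JVM
            }
        });

        // context 2: consumes from the same in-JVM endpoint
        CamelContext consumerCtx = new DefaultCamelContext();
        consumerCtx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct-vm:bridge").log("received: ${body}");
            }
        });

        consumerCtx.start();
        producerCtx.start();
        Thread.sleep(10000);
        producerCtx.stop();
        consumerCtx.stop();
    }
}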

Send MessageProperties [priority=anyInteger] while publishing message in RabbitMQ

We are using RabbitMQ and Spring Integration in our project. Every message has a delivery mode, headers, properties, and a payload part.
We want to add a property, e.g. priority with value 2 (any integer), use "test message 3" as the payload, and publish the message to the queue named OES.
How do we add message properties, e.g. priority = 2 (or any value), in the outbound-channel-adapter (Spring Integration) below? I know we can add "headers" by listing them in "mapped-request-headers", but I would like to set the properties. There are no attributes defined for the MessageProperties on the outbound-channel-adapter. Is there a way to overcome this?
We have no issues with the payload; it is already going through. We only want to add the MessageProperties with priority = 2 (any value). How can we do that in the outbound-channel-adapter (no hardcoding, it should be generic)?
<!-- the mapped-request-headers should be symmetric with
the list on the consumer side defined in consumerbeans.consumerHeaderMapper() -->
<int-amqp:outbound-channel-adapter id="publishingAmqpAdapter"
channel="producer-processed-event-channel"
amqp-template="amqpPublishingTemplate"
exchange-name="events_forwarding_exchange"
routing-key-expression="headers['routing-path']"
mapped-request-headers="X-CallerIdentity,routing-path,content-type,route_to*,event-type,compression-state,STANDARD_REQUEST_HEADERS"
/>
Other configuration:
<!-- chain routes and transforms the ApplicationEvent into a json string -->
<int:chain id="routingAndTransforming"
input-channel="producer-inbound-event-channel"
output-channel="producer-routed-event-channel">
<int:transformer ref="outboundMessageTracker"/>
<int:transformer ref="messagePropertiesTransformer"/>
<int:transformer ref="eventRouter"/>
<int:transformer ref="eventToJsonTransformer"/>
</int:chain>
<int:transformer id="messagePayloadCompressor"
input-channel="compress-message-payload"
output-channel="producer-processed-event-channel"
ref="payloadCompressor"/>
@Configuration("amqpProducerBeans")
@ImportResource(value = "classpath:com/apple/store/platform/events/si/event-producer-flow.xml")
public class AmqpProducerBeans {

    @Bean(name = { "amqpPublishingTemplate" })
    public AmqpTemplate amqpTemplate() {
        logger.debug("creating amqp publishing template");
        RabbitTemplate rabbitTemplate = new RabbitTemplate(producerConnectionFactory());
        SimpleMessageConverter converter = new SimpleMessageConverter();
        // following needed for retry logic
        converter.setCreateMessageIds(true);
        rabbitTemplate.setMessageConverter(converter);
        return rabbitTemplate;
    }

    /* Other code commented */
}
Other Code:
import java.util.HashMap;
import java.util.Map;

import org.springframework.integration.Message;
import org.springframework.integration.annotation.Transformer;
import org.springframework.integration.message.GenericMessage;

public class PayloadCompressor {

    @Transformer
    public Message<byte[]> compress(Message<String> message) {
        /* some code commented */
        Map<String, Object> headers = new HashMap<String, Object>();
        headers.putAll(message.getHeaders());
        headers.remove("compression-state");
        headers.put("compression-state", CompressionState.COMPRESSED);
        Message<byte[]> compressedMessage = new GenericMessage<byte[]>(compressedPayload, headers);
        return compressedMessage;
    }
}
If we were not using Spring Integration, we could use channel.basicPublish as follows and set the MessageProperties:
ConnectionFactory factory = new ConnectionFactory();
factory.setVirtualHost("/");
factory.setHost("10.102.175.30");
factory.setUsername("rahul");
factory.setPassword("rahul");
factory.setPort(5672);
Connection connection = factory.newConnection();
System.out.println("got connection " + connection);
Channel channel = connection.createChannel();

// build immutable properties with the desired priority, instead of
// mutating the shared MessageProperties.BASIC constant
AMQP.BasicProperties properties = new AMQP.BasicProperties.Builder()
        .priority(3)
        .build();

String exchangeName = "HeaderExchange";
String routingKey = "testkey";
byte[] messageBodyBytes = "Message having priority value 3".getBytes();
channel.basicPublish(exchangeName, routingKey, true, properties, messageBodyBytes);
Please let me know if you need more details.
Properties are already mapped automatically - see the header mapper.
Simply use a <header-enricher/> to set the appropriate header and it will be mapped to the correct property; in the case of priority, use the AMQP-specific header constant AmqpHeaders.PRIORITY.
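For instance (a sketch, not your exact flow; the transformer class below is hypothetical), any step in your chain can set the header through a MessageBuilder, and the adapter's header mapper copies it onto the outgoing MessageProperties:
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.integration.Message;
import org.springframework.integration.annotation.Transformer;
import org.springframework.integration.support.MessageBuilder;

public class PrioritySetter {

    // Sets AmqpHeaders.PRIORITY; the outbound adapter's header mapper
    // maps it to the priority field of the published MessageProperties.
    @Transformer
    public Message<String> addPriority(Message<String> message) {
        return MessageBuilder.fromMessage(message)
                .setHeader(AmqpHeaders.PRIORITY, 2)
                .build();
    }
}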

TCP connection over a secure SSH connection

I am trying to use JSch to connect to a remote server and then, from that server, open a telnet-like session over a TCP/IP port. Say I connect to server A, and once connected, open a TCP connection to server B on another port. In my web server logs I see a GET / logged, but not GET /foo as I would expect. Anything I'm missing here? (I do not need to use port forwarding, since the remote port is accessible from the system I am connected to.)
package com.tekmor;

import com.jcraft.jsch.*;
import java.io.BufferedReader;
.
.

public class Siranga {

    public static void main(String[] args) {
        Siranga t = new Siranga();
        try {
            t.go();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public void go() throws Exception {
        String host = "hostXXX.com";
        String user = "USER";
        String password = "PASS";
        int port = 22;

        Properties config = new Properties();
        config.put("StrictHostKeyChecking", "no");

        String remoteHost = "hostYYY.com";
        int remotePort = 80;

        try {
            JSch jsch = new JSch();
            Session session = jsch.getSession(user, host, port);
            session.setPassword(password);
            session.setConfig(config);
            session.connect();

            Channel channel = session.openChannel("direct-tcpip");
            ((ChannelDirectTCPIP) channel).setHost(remoteHost);
            ((ChannelDirectTCPIP) channel).setPort(remotePort);

            String cmd = "GET /foo";
            InputStream in = channel.getInputStream();
            OutputStream out = channel.getOutputStream();
            channel.connect(10000);

            byte[] bytes = cmd.getBytes();
            InputStream is = new ByteArrayInputStream(cmd.getBytes("UTF-8"));
            int numRead;
            while ((numRead = is.read(bytes)) >= 0) {
                out.write(bytes, 0, numRead);
                System.out.println(numRead);
            }
            out.flush();

            channel.disconnect();
            session.disconnect();
            System.out.println("foo");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Read your HTTP specification again. The request header should end with an empty line. So, assuming you have no more header lines, you should have at least two line breaks at the end. (A line break here means a CRLF combination.)
Also, the request line should contain the HTTP version identifier after the URL.
So try this change to your program:
String command = "GET /foo HTTP/1.0\r\n\r\n";
As a hint: instead of manually piping data from your ByteArrayInputStream to the channel's output stream, you could use the setInputStream method. Also, don't forget to read the result from the channel's input stream.
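A minimal sketch putting those hints together (host names and credentials are the placeholders from the question):
import com.jcraft.jsch.ChannelDirectTCPIP;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class DirectTcpGet {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("USER", "hostXXX.com", 22);
        session.setPassword("PASS");
        session.setConfig("StrictHostKeyChecking", "no");
        session.connect();

        ChannelDirectTCPIP channel = (ChannelDirectTCPIP) session.openChannel("direct-tcpip");
        channel.setHost("hostYYY.com");
        channel.setPort(80);

        // request line with HTTP version, terminated by a blank line (CRLF CRLF)
        String request = "GET /foo HTTP/1.0\r\n\r\n";
        channel.setInputStream(new ByteArrayInputStream(request.getBytes("UTF-8")));

        InputStream in = channel.getInputStream();
        channel.connect(10000);

        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) >= 0) {
            System.out.write(buf, 0, n); // echo the HTTP response to stdout
        }

        channel.disconnect();
        session.disconnect();
    }
}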

Using JNDI to access a DataSource (Tomcat 6)

I have been trying to set up a database connection pool for my test webapp, just to learn how it's really done. I have managed to get a DataSource object connected to my database which supplies me with Connection objects now, so that's good.
I must admit I don't really know exactly how it's working. I wrote some test code to see if I could figure out how the InitialContext object works:
package twittersearch.web;

import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;
import java.util.*;
import java.sql.*;
import javax.sql.*;
import javax.naming.*;
import twittersearch.model.*;

public class ContextTest extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        Context ctx = null;
        Context env = null;
        try {
            ctx = new InitialContext();
            Hashtable<?, ?> h = ctx.getEnvironment();
            Enumeration<?> keyEn = h.keys();
            while (keyEn.hasMoreElements()) {
                Object o = keyEn.nextElement();
                System.out.println(o);
            }
            Enumeration<?> valEn = h.elements();
            while (valEn.hasMoreElements()) {
                Object o = valEn.nextElement();
                System.out.println(o);
            }

            env = (Context) ctx.lookup("java:comp/env");
            h = env.getEnvironment();
            Enumeration<?> keys = h.keys();
            Enumeration<?> values = h.elements();
            System.out.println("Keys:");
            while (keys.hasMoreElements()) {
                System.out.println(keys.nextElement());
            }
            System.out.println("Values:");
            while (values.hasMoreElements()) {
                System.out.println(values.nextElement());
            }
            Collection<?> col = h.values();
            for (Object o : col) {
                System.out.println(o);
            }

            DataSource dataSource = (DataSource) env.lookup("jdbc/twittersearchdb");
            Connection conn = dataSource.getConnection();
            if (conn instanceof Connection) {
                System.out.println("Have a connection from the pool");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
This gives me output of:
java.naming.factory.initial
java.naming.factory.url.pkgs
org.apache.naming.java.javaURLContextFactory
org.apache.naming
Have a connection from the pool
Keys:
Values:
Have a connection from the pool
What I don't understand
I have got the InitialContext object which, as I understand it, should let me get a Hashtable with the keys and values of all the bindings for that context. As the first four lines of the output show, there were only two bindings. Yet I am able to use ctx.lookup("java:comp/env") to get another context that has bindings for my webapp's resources. There was no "java:comp/env" among the keys in the test output from the InitialContext object. Where did that come from?
Also, as you can see, I tried to print the keys and values from the java:comp/env context and got no output, and yet I am able to use env.lookup("jdbc/twittersearchdb"), which gets me the DataSource that I specified in my context.xml. Why do I get no output for the bindings of the "java:comp/env" context?
Can I also confirm the following: since I have specified a Resource element in my context.xml, the container creates a DataSource object on deployment of the webapp, and the whole Context/InitialContext business is just a way of using JNDI to access that DataSource object? If that's the case, why is JNDI used, when it seems easier to create a DataSource in an implementation of ServletContextListener and keep it as a ServletContext attribute (a sketch of that alternative follows below)?
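For comparison, the listener-based alternative just described might look roughly like this (the class name, attribute key, commons-dbcp pool, and connection details are my assumptions, not anything from the webapp):
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.commons.dbcp.BasicDataSource;

// Hypothetical listener: builds the pool itself and stows it in the
// ServletContext, bypassing JNDI entirely.
public class DataSourceInitializer implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");           // placeholder driver
        ds.setUrl("jdbc:mysql://localhost:3306/twittersearchdb"); // placeholder URL
        ds.setUsername("user");
        ds.setPassword("pass");
        sce.getServletContext().setAttribute("appDataSource", ds);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        BasicDataSource ds = (BasicDataSource) sce.getServletContext().getAttribute("appDataSource");
        try {
            if (ds != null) {
                ds.close(); // release the pooled connections
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}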
Does the DataSource object actually manage the connection pool, or is that the container, so that the DataSource is just a way of describing the connection?
How do we access the container directly? What is the object that actually represents the container? Is it ServletContext? I'm just trying to find out what the container can do for me.
Apologies for the length of this post. I really want to clear up these issues because I'm sure all this stuff is used in every webapp so I need to have it sorted.
Many thanks in advance
Joe