I've just started with Java EE 7, and here is something I couldn't understand: how it magically works.
I followed the example from the book Beginning Java EE 7 by Antonio Goncalves. I managed to compile and deploy the code of chapter 13 (about JMS) without any problem. Messages are sent and received as expected, but that confuses me.
The source code consists of a consumer class, a producer class, a POJO and an MDB class.
Here is the consumer:
public class OrderConsumer {
public static void main(String[] args) throws NamingException {
// Gets the JNDI context
Context jndiContext = new InitialContext();
// Looks up the administered objects
ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/javaee7/ConnectionFactory");
Destination topic = (Destination) jndiContext.lookup("jms/javaee7/Topic");
// Loops to receive the messages
System.out.println("\nInfinite loop. Waiting for a message...");
try (JMSContext jmsContext = connectionFactory.createContext()) {
while (true) {
OrderDTO order = jmsContext.createConsumer(topic).receiveBody(OrderDTO.class);
System.out.println("Order received: " + order);
}
}
}
}
The producer:
public class OrderProducer {
public static void main(String[] args) throws NamingException {
if (args.length != 1) {
System.out.println("usage : enter an amount");
System.exit(0);
}
System.out.println("Sending message with amount = " + args[0]);
// Creates an orderDto with a total amount parameter
Float totalAmount = Float.valueOf(args[0]);
OrderDTO order = new OrderDTO(1234l, new Date(), "Serge Gainsbourg", totalAmount);
// Gets the JNDI context
Context jndiContext = new InitialContext();
// Looks up the administered objects
ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/javaee7/ConnectionFactory");
Destination topic = (Destination) jndiContext.lookup("jms/javaee7/Topic");
try (JMSContext jmsContext = connectionFactory.createContext()) {
// Sends an object message to the topic
jmsContext.createProducer().setProperty("orderAmount", totalAmount).send(topic, order);
System.out.println("\nOrder sent : " + order.toString());
}
}
}
The MDB:
@MessageDriven(mappedName = "jms/javaee7/Topic", activationConfig = {
@ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
@ActivationConfigProperty(propertyName = "messageSelector", propertyValue = "orderAmount > 1000")
})
public class ExpensiveOrderMDB implements MessageListener {
public void onMessage(Message message) {
try {
OrderDTO order = message.getBody(OrderDTO.class);
System.out.println("Expensive order received: " + order.toString());
} catch (JMSException e) {
e.printStackTrace();
}
}
}
The content of the message is encapsulated in a POJO that implements the Serializable interface.
ExpensiveOrderMDB and the POJO are packaged in a .jar file and deployed to a GlassFish server running locally. The connection and destination resources are created with asadmin.
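For reference, the administered objects used by the lookups above are typically created with asadmin commands along these lines (resource names taken from the code; exact options may differ in your setup):

asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/javaee7/ConnectionFactory
asadmin create-jms-resource --restype javax.jms.Topic jms/javaee7/Topic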
Question: how do the consumer and producer know that the connection factory and destination are available on the local GlassFish server, so that they can make a connection and send/receive messages? (The lines that create the connection and destination say nothing about the local GlassFish server.)
Probably there is a jndi.properties file in which the connection to GlassFish is defined.
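To illustrate that idea, a standalone GlassFish client usually picks up a jndi.properties like the following from its classpath (a sketch assuming a default local GlassFish installation; adjust host and port as needed):

java.naming.factory.initial = com.sun.enterprise.naming.SerialInitContextFactory
java.naming.factory.url.pkgs = com.sun.enterprise.naming
org.omg.CORBA.ORBInitialHost = localhost
org.omg.CORBA.ORBInitialPort = 3700

With these defaults (and the GlassFish client jar on the classpath), new InitialContext() resolves the lookups against the local GlassFish naming service, which is why the client code itself never mentions the server.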
1. I connect RabbitMQ with Spring Cloud Config:
@Bean
public ConnectionFactory rabbitConnectionFactory() {
Map<String, Object> properties = new HashMap<String, Object>();
properties.put("publisherConfirms", true);
RabbitConnectionFactoryConfig rabbitConfig = new RabbitConnectionFactoryConfig(properties);
return connectionFactory().rabbitConnectionFactory(rabbitConfig);
}
2. Set rabbitTemplate.setMandatory(true) and setConfirmCallback():
@Bean
public RabbitTemplate rabbitTemplate() {
RabbitTemplate template = new RabbitTemplate(connectionFactory);
template.setMandatory(true);
template.setMessageConverter(new Jackson2JsonMessageConverter());
template.setConfirmCallback((correlationData, ack, cause) -> {
if (!ack) {
System.out.println("send message failed: " + cause + correlationData.toString());
} else {
System.out.println("Publisher Confirm" + correlationData.toString());
}
});
return template;
}
3. Send a message to the queue to trigger the publisher confirm and print the log.
@Component
public class TestSender {
@Autowired
private RabbitTemplate rabbitTemplate;
@Scheduled(cron = "0/5 * * * * ? ")
public void send() {
this.rabbitTemplate.convertAndSend(EXCHANGE, "routingkey", "hello world",
(Message m) -> {
m.getMessageProperties().setHeader("tenant", "aaaaa");
return m;
}, new CorrelationData(UUID.randomUUID().toString()));
Date date = new Date();
System.out.println("Sender Msg Successfully - " + date);
}
}
But the publisher confirm has not worked: the log has not been printed. Whether the ack is true or false, the log should not be absent.
Mandatory is not needed for confirms, only returns.
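(For completeness: mandatory together with a return callback is what detects unroutable messages. A minimal sketch, separate from confirms, that would go inside the rabbitTemplate() bean method from step 2:)

// sketch: returns fire only when a mandatory message cannot be routed to any queue
template.setMandatory(true);
template.setReturnCallback((message, replyCode, replyText, exchange, routingKey) ->
        System.out.println("returned: " + replyText + " exchange=" + exchange + " routingKey=" + routingKey));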
Some things to try:
Turn on DEBUG logging to see if it helps; there are some logs generated regarding confirms.
Add some code:
template.execute(channel -> {
System.out.println(channel.getClass());
return null;
});
If you don't see PublisherCallbackChannelImpl then it means the configuration didn't work for some reason. Again DEBUG logging should help with the configuration debugging.
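In that case, a minimal sketch of enabling confirms directly on a plain Spring AMQP CachingConnectionFactory (bypassing the spring-cloud connector wrapper; the class and bean names here are assumptions, not your exact setup) would be:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConfirmingRabbitConfig {

    @Bean
    public CachingConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        cf.setPublisherConfirms(true); // confirms are a property of the factory, not the template
        return cf;
    }

    @Bean
    public RabbitTemplate rabbitTemplate(CachingConnectionFactory cf) {
        RabbitTemplate template = new RabbitTemplate(cf);
        template.setConfirmCallback((correlationData, ack, cause) ->
                System.out.println("confirm: ack=" + ack + ", cause=" + cause + ", data=" + correlationData));
        return template;
    }
}

The key point is that the confirm callback only fires on channels created from a confirm-enabled factory; setting the callback on the template alone is not enough.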
If you still can't figure it out, strip your application to the bare minimum that exhibits the behavior and post the complete application.
I am following this tutorial: https://cwiki.apache.org/confluence/display/CXF20DOC/JAXRS+Testing
But I get this error:
javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial
This is my local server class:
public class CXFLocalTransportTestSuite {
public static final Logger LOGGER = LogManager.getLogger();
public static final String ENDPOINT_ADDRESS = "local://service0";
private static Server server;
@BeforeClass
public static void initialize() throws Exception {
startServer();
}
private static void startServer() throws Exception {
JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
factory.setAddress(ENDPOINT_ADDRESS);
List<Class<?>> resourceClasses = new ArrayList<Class<?>>();
resourceClasses.add(CommunicationWSRESTImpl.class);
factory.setResourceClasses(resourceClasses);
List<ResourceProvider> resourceProviders = new ArrayList<>();
resourceProviders.add(new SingletonResourceProvider(new CommunicationWSRESTImpl()));
factory.setResourceProviders(resourceProviders);
List<Object> providers = new ArrayList<Object>();
providers.add(new JacksonJaxbJsonProvider());
providers.add(new ApiOriginFilter());
providers.add(new AuthenticationFilter());
providers.add(new AuthorizationFilter());
factory.setProviders(providers);
server = factory.create();
server.start();
LOGGER.info("LOCAL TRANSPORT STARTED");
}
@AfterClass
public static void destroy() throws Exception {
server.stop();
server.destroy();
LOGGER.info("LOCAL TRANSPORT STOPPED");
}
}
And a client example:
public class CommunicationApiTest {
// [PUBLIC PROFILE]
// --------------------------------------------------------------------------------------------------------
@Test
public void getLinkedComponentsTest() {
// PATH. PARAM.
// ********************************************************************************************************
String userId = "1";
String componentInstance = "a3449197-cc72-49eb-bc14-5d43a80dfa80";
String portId = "00";
// ********************************************************************************************************
WebClient client = WebClient.create(CXFLocalTransportTestSuite.ENDPOINT_ADDRESS);
client.path("/communication/getLinkedComponents/{userId}-{componentInstance}-{portId}", userId, componentInstance, portId);
client.header("Authorization", "Bearer " + CXFLocalTransportTestSuite.authenticationTokenPublicProfile);
Response res = client.get();
if (null != res) {
assertEquals(StatusCode.SUCCESSFUL_OPERATION.getStatusCode(), res.getStatus());
assertNotNull(res.getEntity());
// VALID RESPONSE
// ********************************************************************************************************
assertEquals("> Modules has not been initialized for userID = 1", res.readEntity(GetLinksResult.class).getMessage());
// ********************************************************************************************************
}
}
}
Finally, this is the JAX-RS implementation on the server side:
@Path("/communication")
public class CommunicationWSRESTImpl implements CommunicationWS {
@Path("/getLinkedComponents/{userId}-{componentInstance}-{portId}")
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getLinkedComponents(
@HeaderParam("Authorization") String accessToken,
@PathParam("userId") String userId,
@PathParam("componentInstance") String componentInstance,
@PathParam("portId") String portId) {
LOGGER.info("[CommunicationWSREST - getLinksComponents] userId: " + userId + " -- componentInstance: "
+ componentInstance + " -- portId: " + portId);
GetLinksResult result = new GetLinksResult();
result.setGotten(false);
result.setPortList(null);
if (userId != null && userId.compareTo("") != 0) {
if (componentInstance != null && componentInstance.compareTo("") != 0) {
if (portId != null && portId.compareTo("") != 0) {
TMM tmm = null;
javax.naming.Context initialContext;
try {
initialContext = new InitialContext();
tmm = (TMM) initialContext.lookup("java:app/cos/TMM");
result = tmm.calculateConnectedPorts(userId, componentInstance, portId);
} catch (Exception e) {
LOGGER.error(e);
result.setMessage("> Internal Server Error");
return Response.status(Status.INTERNAL_SERVER_ERROR).entity(result).build();
}
} else {
LOGGER.error("Not found or Empty Port Error");
result.setMessage("> Not found or Empty Port Error");
return Response.status(Status.NOT_FOUND).entity(result).build();
}
} else {
LOGGER.error("Not found or Empty Component Instance Error");
result.setMessage("> Not found or Empty Component Instance Error");
return Response.status(Status.NOT_FOUND).entity(result).build();
}
} else {
LOGGER.error("Not found or Empty userid Error");
result.setMessage("> Not found or Empty username Error");
return Response.status(Status.NOT_FOUND).entity(result).build();
}
return Response.ok(result).build();
}
}
Maybe the problem is that the local transport is not correctly configured, which triggers the exception because of the lookup (see the server side):
TMM tmm = null;
javax.naming.Context initialContext;
try {
initialContext = new InitialContext();
tmm = (TMM) initialContext.lookup("java:app/cos/TMM");
result = tmm.calculateConnectedPorts(userId, componentInstance, portId);
} catch (Exception e) {
..
The problem is most likely because you are running your test in a Java SE environment that is not configured with a JNDI server. If you run your test as part of a WAR inside a Java EE app server, this would probably work just fine.
So you might need to either run your unit test inside an app server or you could try mocking a JNDI server like what is described here: http://en.newinstance.it/2009/03/27/mocking-jndi/#
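For the second option, a minimal sketch using Spring's in-memory JNDI mock (assuming the spring-test jar is on the test classpath; TMMStub here is a hypothetical hand-written test double for TMM) would be to bind the looked-up name before the CXF server starts:

import org.springframework.mock.jndi.SimpleNamingContextBuilder;

@BeforeClass
public static void initialize() throws Exception {
    // activate an in-memory JNDI environment so new InitialContext() works in plain Java SE
    SimpleNamingContextBuilder builder = SimpleNamingContextBuilder.emptyActivatedContextBuilder();
    builder.bind("java:app/cos/TMM", new TMMStub()); // hypothetical test double for the TMM EJB
    startServer();
}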
Hope this helps,
Andy
We have two ActiveMQ (version 5.10.0) instances running, and I am using shared storage to achieve HA.
However, I am unable to see failover happening for the producer and consumer(s).
ActiveMQ broker-1 runs on IP1 and broker-2 on IP2.
In the activemq.xml configuration I have modified the persistence adapter to use a shared directory which is present on IP1.
<persistenceAdapter>
<kahaDB directory="\\IP1\shared-directory\for activemq\data"/>
</persistenceAdapter>
On both the producer and consumer sides I am using the following JNDI configuration to get the connections, build sessions, etc.
jndi.properties
java.naming.factory.initial = ..........ActiveMQInitialContextFactory
java.naming.provider.url = failover:(tcp://IP1:61616,tcp://IP2:61616)?randomize=false
connectionFactoryNames = myConnectionFactory
queue.requestQ = my.RequestQ
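For context, here is a minimal sketch (class name made up) of how that jndi.properties is consumed; with the failover: URL in java.naming.provider.url, the ConnectionFactory returned by the lookup is already failover-aware:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.InitialContext;

public class JndiLookupSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // reads jndi.properties from the classpath
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("myConnectionFactory");
        Queue requestQ = (Queue) ctx.lookup("requestQ");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers from here ...
        connection.close();
    }
}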
The interesting part is:
When I start this broker pair, I see that one of the brokers becomes master.
I start the producer, which puts messages on the queue (say the producer has put 100 messages on the queue). While my producer is still running, I shut down the master broker, so the slave broker acquires the file lock and becomes master. When I open the web console I see that the 100 messages are still on the queue, but even though the producer is still running it no longer puts any messages on this queue.
The same happens for the consumers.
The consumer was picking messages from the queue; say this queue had 100 unconsumed messages when the master failed. The master goes down, the slave becomes master, I see the 100 messages are still unconsumed, but the consumer does not pick any message from the queue.
I waited for them to fail over for a long time (>10 minutes).
Can anyone please suggest what configuration I am missing?
I am copy-pasting the producer and consumer as is (copied from the ActiveMQ in Action book with minor modifications).
Producer
public class Producer {
private static String brokerURL = "failover:(tcp://IP1:3389,tcp://IP2:3389)";
private static transient ConnectionFactory factory;
private transient Connection connection;
private transient Session session;
private transient MessageProducer producer;
private static int count = 10;
private static int total;
private static int id = 1000000;
private String jobs[] = new String[] { "suspend", "delete" };
public Producer() throws JMSException {
factory = new ActiveMQConnectionFactory(brokerURL);
connection = factory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
producer = session.createProducer(null);
}
public void close() throws JMSException {
if (connection != null) {
connection.close();
}
}
public static void main(String[] args) throws JMSException {
Producer producer = new Producer();
while (total < 1000) {
for (int i = 0; i < count; i++) {
producer.sendMessage();
}
total += count;
System.out.println("Sent '" + count + "' of '" + total
+ "' job messages");
try {
Thread.sleep(1000);
} catch (InterruptedException x) {
}
}
producer.close();
}
public void sendMessage() throws JMSException {
int idx = 0;
while (true) {
idx = (int) Math.round(jobs.length * Math.random());
if (idx < jobs.length) {
break;
}
}
String job = jobs[idx];
Destination destination = session.createQueue("JOBS." + job);
Message message = session.createObjectMessage(id++);
System.out.println("Sending: id: "
+ ((ObjectMessage) message).getObject() + " on queue: "
+ destination);
producer.send(destination, message);
}
}
Consumer
public class Consumer {
private static String brokerURL = "failover:(tcp://IP1:3389,tcp://IP2:3389)";
private static transient ConnectionFactory factory;
private transient Connection connection;
private transient Session session;
private String jobs[] = new String[] { "suspend", "delete" };
public Consumer() throws JMSException {
factory = new ActiveMQConnectionFactory(brokerURL);
connection = factory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
public void close() throws JMSException {
if (connection != null) {
connection.close();
}
}
public static void main(String[] args) throws JMSException {
Consumer consumer = new Consumer();
for (String job : consumer.jobs) {
Destination destination = consumer.getSession().createQueue(
"JOBS." + job);
MessageConsumer messageConsumer = consumer.getSession()
.createConsumer(destination);
messageConsumer.setMessageListener(new Listener(job));
}
}
public Session getSession() {
return session;
}
}
Just one more thing:
I am more interested in consumer failover than producer failover.
One more observation: the consumer stops and returns to the command prompt abruptly.
Thank you.
-JE
I was trying to create a simple parent/child process pair with IPC between them using Hadoop IPC. It turns out that the program executes and prints the results, but it doesn't exit. Here is the code for it.
interface Protocol extends VersionedProtocol{
public static final long versionID = 1L;
IntWritable getInput();
}
public final class JavaProcess implements Protocol{
Server server;
public JavaProcess() {
String rpcAddr = "localhost";
int rpcPort = 8989;
Configuration conf = new Configuration();
try {
server = RPC.getServer(this, rpcAddr, rpcPort, conf);
server.start();
} catch (IOException e) {
e.printStackTrace();
}
}
public int exec(Class klass) throws IOException,InterruptedException {
String javaHome = System.getProperty("java.home");
String javaBin = javaHome +
File.separator + "bin" +
File.separator + "java";
String classpath = System.getProperty("java.class.path");
String className = klass.getCanonicalName();
ProcessBuilder builder = new ProcessBuilder(
javaBin, "-cp", classpath, className);
Process process = builder.start();
int exit_code = process.waitFor();
server.stop();
System.out.println("completed process");
return exit_code;
}
public static void main(String...args) throws IOException, InterruptedException{
int status = new JavaProcess().exec(JavaProcessChild.class);
System.out.println(status);
}
@Override
public IntWritable getInput() {
return new IntWritable(10);
}
@Override
public long getProtocolVersion(String paramString, long paramLong)
throws IOException {
return Protocol.versionID;
}
}
Here is the child process class. However, I have realized that RPC.getServer() on the server side is the culprit. Is this a known Hadoop bug, or am I missing something?
public class JavaProcessChild{
public static void main(String...args){
Protocol umbilical = null;
try {
Configuration defaultConf = new Configuration();
InetSocketAddress addr = new InetSocketAddress("localhost", 8989);
umbilical = (Protocol) RPC.waitForProxy(Protocol.class, Protocol.versionID,
addr, defaultConf);
IntWritable input = umbilical.getInput();
JavaProcessChild my = new JavaProcessChild();
if(input!=null && input.equals(new IntWritable(10))){
Thread.sleep(10000);
}
else{
Thread.sleep(1000);
}
} catch (Throwable e) {
e.printStackTrace();
} finally{
if(umbilical != null){
RPC.stopProxy(umbilical);
}
}
}
}
We sorted that out via mail. But I just want to give my two cents here for the public:
So the thread that is not dying there (thus not letting the main thread finish) is the org.apache.hadoop.ipc.Server$Reader.
The reason is that the implementation of readSelector.select() is not interruptible. If you look closely in a debugger or thread dump, it is waiting on that call forever, even if the main thread has already been cleaned up.
Two possible fixes:
make the reader thread a daemon (not so nice, because the selector won't be cleaned up properly, but the process will end)
explicitly close the "readSelector" from outside when interrupting the thread pool (a generic sketch of this follows below)
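A generic sketch of that second idea (not the actual Hadoop code; the class and method names here are made up) is to break the blocked select() by closing the selector from the shutdown path:

import java.io.IOException;
import java.nio.channels.ClosedSelectorException;
import java.nio.channels.Selector;

public class ReaderLoop implements Runnable {
    private final Selector readSelector;

    public ReaderLoop(Selector readSelector) {
        this.readSelector = readSelector;
    }

    @Override
    public void run() {
        try {
            while (readSelector.isOpen()) {
                readSelector.select(); // blocks; Thread.interrupt() alone will not unblock it
                // ... handle ready keys ...
            }
        } catch (ClosedSelectorException | IOException expectedOnShutdown) {
            // the selector was closed from outside; fall through and let the thread die
        }
    }

    // called from the shutdown path instead of (or in addition to) interrupting the thread
    public void shutdown() throws IOException {
        readSelector.close(); // wakes up the blocked select() and ends the loop
    }
}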
However, this is a bug in Hadoop and I have no time to look through the JIRAs. Maybe it is already fixed; in YARN the old IPC is replaced by protobuf and thrift anyway.
By the way, this is also platform dependent through the implementation of the selectors; I observed these zombies on Debian/Windows systems, but not on Red Hat/Solaris.
If anyone is interested in a patch for Hadoop 1.0, email me. I will sort out the JIRA bug in the near future and edit this answer with more information. (Maybe it has been fixed in the meantime anyway.)
Hi all, please give some basics about ActiveMQ with JMS for a novice, including configuration steps.
We are going to create a console-based application using multithreading, so create a Java project for a console application.
Now follow these steps:
Add javax.jms.jar, activemq-all-5.3.0.jar, log4j-1.2.15.jar to your project library.
(You can download all of the above jar files from http://www.jarfinder.com/.)
Create a file named jndi.properties and paste in the following text. (For details on jndi.properties, just Google it.)
# START SNIPPET: jndi
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
# use the following property to configure the default connector
java.naming.provider.url = tcp://localhost:61616
# use the following property to specify the JNDI name the connection factory
# should appear as.
#connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry
connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry
# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue
# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
# END SNIPPET: jndi
Add JMSConsumer.java
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
public class JMSConsumer implements Runnable{
private static final Log LOG = LogFactory.getLog(JMSConsumer.class);
public void run() {
Context jndiContext = null;
ConnectionFactory connectionFactory = null;
Connection connection = null;
Session session = null;
MessageConsumer consumer = null;
Destination destination = null;
String sourceName = null;
final int numMsgs;
sourceName= "MyQueue";
numMsgs = 1;
LOG.info("Source name is " + sourceName);
/*
* Create a JNDI API InitialContext object
*/
try {
jndiContext = new InitialContext();
} catch (NamingException e) {
LOG.info("Could not create JNDI API context: " + e.toString());
System.exit(1);
}
/*
* Look up connection factory and destination.
*/
try {
connectionFactory = (ConnectionFactory)jndiContext.lookup("queueConnectionFactory");
destination = (Destination)jndiContext.lookup(sourceName);
} catch (NamingException e) {
LOG.info("JNDI API lookup failed: " + e);
System.exit(1);
}
try {
connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
consumer = session.createConsumer(destination);
connection.start();
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
MessageListener listener = new MyQueueMessageListener();
consumer.setMessageListener(listener );
//Let the thread run for some time so that the Consumer has sufficient time to consume the message
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
} catch (JMSException e) {
LOG.info("Exception occurred: " + e);
} finally {
if (connection != null) {
try {
connection.close();
} catch (JMSException e) {
}
}
}
}
}
Add JMSProducer.java
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
public class JMSProducer implements Runnable{
private static final Log LOG = LogFactory.getLog(JMSProducer.class);
public JMSProducer() {
}
//Run method implemented to run this as a thread.
public void run(){
Context jndiContext = null;
ConnectionFactory connectionFactory = null;
Connection connection = null;
Session session = null;
Destination destination = null;
MessageProducer producer = null;
String destinationName = null;
final int numMsgs;
destinationName = "MyQueue";
numMsgs = 5;
LOG.info("Destination name is " + destinationName);
/*
* Create a JNDI API InitialContext object
*/
try {
jndiContext = new InitialContext();
} catch (NamingException e) {
LOG.info("Could not create JNDI API context: " + e.toString());
System.exit(1);
}
/*
* Look up connection factory and destination.
*/
try {
connectionFactory = (ConnectionFactory)jndiContext.lookup("queueConnectionFactory");
destination = (Destination)jndiContext.lookup(destinationName);
} catch (NamingException e) {
LOG.info("JNDI API lookup failed: " + e);
System.exit(1);
}
/*
* Create connection. Create session from connection; false means
* session is not transacted.create producer, set the text message, set the co-relation id and send the message.
*/
try {
connection = connectionFactory.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
producer = session.createProducer(destination);
TextMessage message = session.createTextMessage();
// The original snippet was cut off here; the loop below reconstructs it from the comment
// above: set the text, set the correlation id, and send numMsgs messages.
for (int i = 0; i < numMsgs; i++) {
message.setText("This is message " + (i + 1));
message.setJMSCorrelationID("MyQueue" + i);
LOG.info("Sending message: " + message.getText());
producer.send(message);
}
} catch (JMSException e) {
LOG.info("Exception occurred: " + e);
} finally {
if (connection != null) {
try {
connection.close();
} catch (JMSException e) {
}
}
}
}
}
Add MyQueueMessageListener.java
import java.io.*;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import javax.jms.*;
public class MyQueueMessageListener implements MessageListener {
private static final Log LOG = LogFactory.getLog(MyQueueMessageListener.class);
/**
*
*/
public MyQueueMessageListener() {
// TODO Auto-generated constructor stub
}
/** (non-Javadoc)
* @see javax.jms.MessageListener#onMessage(javax.jms.Message)
* This is called on receiving a text message.
*/
public void onMessage(Message arg0) {
LOG.info("onMessage() called!");
if(arg0 instanceof TextMessage){
try {
//Print it out
System.out.println("Received message in listener: " + ((TextMessage)arg0).getText());
System.out.println("Co-Rel Id: " + ((TextMessage)arg0).getJMSCorrelationID());
try {
//Log it to a file
BufferedWriter outFile = new BufferedWriter(new FileWriter("MyQueueConsumer.txt"));
outFile.write("Received message in listener: " + ((TextMessage)arg0).getText());
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
} catch (JMSException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}else{
System.out.println("~~~~Listener : Error in message format~~~~");
}
}
}
Add SimpleApp.java
public class SimpleApp {
//Run the producer first, then the consumer
public static void main(String[] args) throws Exception {
runInNewthread(new JMSProducer());
runInNewthread(new JMSConsumer());
}
public static void runInNewthread(Runnable runnable) {
Thread brokerThread = new Thread(runnable);
brokerThread.setDaemon(false);
brokerThread.start();
}
}
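One note before running: the jndi.properties above points at tcp://localhost:61616, so a broker must already be listening there. If you do not have a standalone ActiveMQ installation running, a minimal sketch of starting an embedded broker first (using the BrokerService class that ships in activemq-all) would be:

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false); // keep everything in memory for this demo
        broker.addConnector("tcp://localhost:61616"); // must match java.naming.provider.url
        broker.start();
        System.out.println("Embedded broker started on tcp://localhost:61616");
    }
}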
Now run the SimpleApp class.
All the best. Happy coding.
Here is a simple JUnit test for ActiveMQ and Apache Camel. These two technologies work very well together.
If you want more details about the code, you can find a post in my blog:
http://ignaciosuay.com/unit-testing-active-mq/
public class ActiveMQTest extends CamelTestSupport {
@Override
protected CamelContext createCamelContext() throws Exception {
CamelContext camelContext = super.createCamelContext();
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
camelContext.addComponent("activemq", jmsComponentClientAcknowledge(connectionFactory));
return camelContext;
}
@Override
protected RouteBuilder createRouteBuilder() throws Exception {
return new RouteBuilder() {
@Override
public void configure() throws Exception {
from("mina:tcp://localhost:6666?textline=true&sync=false")
.to("activemq:processHL7");
from("activemq:processHL7")
.to("mock:end");
}
};
}
@Test
public void testSendHL7Message() throws Exception {
MockEndpoint mock = getMockEndpoint("mock:end");
String m = "MSH|^~\\&|hl7Integration|hl7Integration|||||ADT^A01|||2.5|\r" +
"EVN|A01|20130617154644\r" +
"PID|1|465 306 5961||407623|Wood^Patrick^^^MR||19700101|1|\r" +
"PV1|1||Location||||||||||||||||261938_6_201306171546|||||||||||||||||||||||||20130617134644|";
mock.expectedBodiesReceived(m);
template.sendBody("mina:tcp://localhost:6666?textline=true&sync=false", m);
mock.assertIsSatisfied();
}
}