What is the use case of BrokerService in ActiveMQ and how to use it correctly

I am new to ActiveMQ. I'm trying to learn how it works by studying the example code provided by Apache at this link:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
public class Server implements MessageListener {
private static int ackMode;
private static String messageQueueName;
private static String messageBrokerUrl;
private Session session;
private boolean transacted = false;
private MessageProducer replyProducer;
private MessageProtocol messageProtocol;
static {
messageBrokerUrl = "tcp://localhost:61616";
messageQueueName = "client.messages";
ackMode = Session.AUTO_ACKNOWLEDGE;
}
public Server() {
try {
//This message broker is embedded
BrokerService broker = new BrokerService();
broker.setPersistent(false);
broker.setUseJmx(false);
broker.addConnector(messageBrokerUrl);
broker.start();
} catch (Exception e) {
System.out.println("Exception: "+e.getMessage());
//Handle the exception appropriately
}
//Delegating the handling of messages to another class, instantiate it before setting up JMS so it
//is ready to handle messages
this.messageProtocol = new MessageProtocol();
this.setupMessageQueueConsumer();
}
private void setupMessageQueueConsumer() {
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(messageBrokerUrl);
Connection connection;
try {
connection = connectionFactory.createConnection();
connection.start();
this.session = connection.createSession(this.transacted, ackMode);
Destination adminQueue = this.session.createQueue(messageQueueName);
//Setup a message producer to respond to messages from clients, we will get the destination
//to send to from the JMSReplyTo header field from a Message
this.replyProducer = this.session.createProducer(null);
this.replyProducer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
//Set up a consumer to consume messages off of the admin queue
MessageConsumer consumer = this.session.createConsumer(adminQueue);
consumer.setMessageListener(this);
} catch (JMSException e) {
System.out.println("Exception: "+e.getMessage());
}
}
public void onMessage(Message message) {
try {
TextMessage response = this.session.createTextMessage();
if (message instanceof TextMessage) {
TextMessage txtMsg = (TextMessage) message;
String messageText = txtMsg.getText();
response.setText(this.messageProtocol.handleProtocolMessage(messageText));
}
//Set the correlation ID from the received message to be the correlation id of the response message
//this lets the client identify which message this is a response to if it has more than
//one outstanding message to the server
response.setJMSCorrelationID(message.getJMSCorrelationID());
//Send the response to the Destination specified by the JMSReplyTo field of the received message,
//this is presumably a temporary queue created by the client
this.replyProducer.send(message.getJMSReplyTo(), response);
} catch (JMSException e) {
System.out.println("Exception: "+e.getMessage());
}
}
public static void main(String[] args) {
new Server();
}
}
My confusion is about messageBrokerUrl = "tcp://localhost:61616". The ActiveMQ service already runs on port 61616 by default, so why does this example choose the same port? If I try to run the code, it throws an exception:
Exception: Failed to bind to server socket: tcp://localhost:61616 due to: java.net.BindException: Address already in use: JVM_Bind
Perhaps if I change the port number, I can execute the code.
Please let me know why it is like this in the example and how to work with BrokerService.

The BrokerService in this example creates an in-memory (embedded) ActiveMQ broker for use by the example. Given the error you are seeing, I'd guess you already have an ActiveMQ broker running on the machine and bound to port 61616, as that's the default port, so the two are conflicting. You could either stop the external broker and run the example, or modify the example so that it does not start the embedded broker and just relies on your external broker instance.
Embedded brokers are great for unit testing or for creating examples that don't require the user to have a broker installed and running.
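If you want to keep the embedded broker while an external ActiveMQ instance is already bound to 61616, the fix you suspected works: have the embedded broker's connector bind to a different port and point the connection factory at that same URL. A minimal sketch, assuming 61617 happens to be free on your machine:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        // Embedded, non-persistent broker on a port that does not clash with
        // an externally installed ActiveMQ instance listening on 61616.
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.setUseJmx(false);
        broker.addConnector("tcp://localhost:61617"); // any free port works
        broker.start();

        // Clients (and the Server class above) must then use the same URL.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61617");
        // ... create connections, sessions and consumers as in the example ...

        broker.stop();
    }
}
Alternatively, creating the connection factory with a vm://localhost URL starts an in-VM broker on demand without opening any TCP port, which sidesteps the port clash entirely.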

Related

RabbitMQ Camel Consumer - Consume a single message

I have a scenario where I want to "pull" messages off a RabbitMQ queue/topic and process them one at a time.
Specifically if there are already messages sitting on the queue when the consumer starts up.
I have tried the following with no success (meaning, each of these options reads the queue until it is either empty or until another thread closes the context).
1. Stopping the route immediately after the first message is processed
final CamelContext context = new DefaultCamelContext();
final Boolean[] routeComplete = new Boolean[]{Boolean.FALSE}; // flag set in the processor below and polled after context.start()
try {
context.addRoutes(new RouteBuilder() {
@Override
public void configure() throws Exception {
RouteDefinition route = from("rabbitmq:harley?queue=IN&declare=false&autoDelete=false&hostname=localhost&portNumber=5672");
route.process(new Processor() {
Thread stopThread;
@Override
public void process(final Exchange exchange) throws Exception {
String name = exchange.getIn().getHeader(Exchange.FILE_NAME_ONLY, String.class);
String body = exchange.getIn().getBody(String.class);
// Doo some stuff
routeComplete[0] = true;
if (stopThread == null) {
stopThread = new Thread() {
@Override
public void run() {
try {
((DefaultCamelContext)exchange.getContext()).stopRoute("RabbitRoute");
} catch (Exception e) {}
}
};
}
stopThread.start();
}
});
}
});
context.start();
while(!routeComplete[0].booleanValue())
Thread.sleep(100);
context.stop();
}
2. Similar to 1, but using a latch rather than a while loop and sleep.
3. Using a PollingConsumer
final CamelContext context = new DefaultCamelContext();
context.start();
Endpoint re = context.getEndpoint(srcRoute);
re.start();
try {
PollingConsumer consumer = re.createPollingConsumer();
consumer.start();
Exchange exchange = consumer.receive();
String bb = exchange.getIn().getBody(String.class);
consumer.stop();
} catch(Exception e){
String mm = e.getMessage();
}
4. Using a ConsumerTemplate() - code similar to the above.
I have also tried enabling preFetch and setting the max number of exchanges to 1.
None of these appears to work; if there are 3 messages on the queue, all of them are read before I am able to stop the route.
If I were to use the standard RabbitMQ Java API I would use a basicGet() call which lets me read a single message, but for other reasons I would prefer to use a Camel consumer.
Has anyone successfully been able to process a single message on a queue that holds multiple messages using a Camel RabbitMQ Consumer?
Thanks.
This is not the primary intention of the component, as it is designed for continuous consumption. But I have created a ticket to look into supporting a basicGet (single receive). There is a new Spring-based rabbitmq component coming in 3.8 onwards, so it is going to be implemented there (first): https://issues.apache.org/jira/browse/CAMEL-16048
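Until that is available, if pulling exactly one message is a hard requirement, the basicGet() call you mention is the pragmatic workaround; a minimal sketch with the native Java client, assuming a local broker and the IN queue from your endpoint URI:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class SingleMessagePuller {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed host, as in the question
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        // Pull at most one message; basicGet returns null if the queue is empty.
        GetResponse response = channel.basicGet("IN", false); // false = no auto-ack
        if (response != null) {
            String body = new String(response.getBody(), "UTF-8");
            // ... process the single message ...
            channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
        }
        channel.close();
        connection.close();
    }
}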

RabbitMQ queue declare never ends

I'm just trying to make a simple test for RabbitMQ, and I have Erlang installed as well as RabbitMQ running.
My receiver:
private final static String QUEUE_NAME = "hello";
public static void main(String[] argv) throws Exception {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
Consumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope,
BasicProperties properties, byte[] body) throws IOException
{
// TODO Auto-generated method stub
String message = new String(body, "UTF-8");
System.out.println(" [x] Received '" + message + "'");
}
};
channel.basicConsume(QUEUE_NAME, true, consumer);
}
It never prints the first sysout, because it gets stuck declaring the queue on the channel.queueDeclare line.
The RabbitMQ log says it is accepting the AMQP connection, and the guest user gets authenticated and granted access to the vhost.
Any help would be appreciated.
I just copied/pasted your code with no problems...
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'foo'
I suggest you enable the management plugin and explore the admin UI.
Why did you add the spring-amqp spring-rabbitmq tags since this question has nothing to do with Spring and you are using the native client directly?

TCP Server configuration in Mule - writing into client socket

I am trying to create a Mule flow with a TCP inbound endpoint which acts as a TCP server listening on a port. When a successful client connection is identified, before receiving any request from the client, I need to write a message into the socket (which lets the client know that I am listening); only after that does the client send me further requests. This is how I do it with a sample Java program:
import java.net.*;
import java.io.*;
public class TCPServer
{
public static void main(String[] args) throws IOException
{
ServerSocket serverSocket = null;
try {
serverSocket = new ServerSocket(4445);
}
catch (IOException e)
{
System.err.println("Could not listen on port: 4445.");
System.exit(1);
}
Socket clientSocket = null;
System.out.println ("Waiting for connection.....");
try {
clientSocket = serverSocket.accept();
}
catch (IOException e)
{
System.err.println("Accept failed.");
System.exit(1);
}
System.out.println ("Connection successful");
System.out.println ("Sending output message - .....");
//Sending a message to the client to indicate that the server is active
PrintStream pingStream = new PrintStream(clientSocket.getOutputStream());
pingStream.print("Server listening");
pingStream.flush();
//Now start listening for messages
System.out.println ("Waiting for incoming message - .....");
PrintWriter out = new PrintWriter(clientSocket.getOutputStream(),true);
BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
String inputLine;
while ((inputLine = in.readLine()) != null)
{
System.out.println ("Server: " + inputLine);
out.println(inputLine);
if (inputLine.equals("Bye."))
break;
}
out.close();
in.close();
clientSocket.close();
serverSocket.close();
}
}
I have tried to use Mule's TCP inbound endpoint as a server, but I am not able to see how I can identify a successful connection from the client in order to trigger the outbound message. The flow gets triggered only when a message is sent from the client. Is there a way I can extend the functionality of the Mule TCP connector and add a listener that could meet the above requirement?
Based on the answer provided, this is how I implemented it:
public class TCPMuleOut extends TcpMessageReceiver {
boolean InitConnection = false;
Socket clientSocket = null;
public TCPMuleOut(Connector connector, FlowConstruct flowConstruct,
InboundEndpoint endpoint) throws CreateException {
super(connector, flowConstruct, endpoint);
}
protected Work createWork(Socket socket) throws IOException {
return new MyTcpWorker(socket, this);
}
protected class MyTcpWorker extends TcpMessageReceiver.TcpWorker {
public MyTcpWorker(Socket socket, AbstractMessageReceiver receiver)
throws IOException {
super(socket, receiver);
// TODO Auto-generated constructor stub
}
@Override
protected Object getNextMessage(Object resource) throws Exception {
if (InitConnection == false) {
clientSocket = this.socket;
logger.debug("Sending logon message");
PrintStream pingStream = new PrintStream(
clientSocket.getOutputStream());
pingStream.print("Log on message");
pingStream.flush();
InitConnection = true;
}
long keepAliveTimeout = ((TcpConnector) connector)
.getKeepAliveTimeout();
Object readMsg = null;
try {
// Create a monitor if expiry was set
if (keepAliveTimeout > 0) {
((TcpConnector) connector).getKeepAliveMonitor()
.addExpirable(keepAliveTimeout,
TimeUnit.MILLISECONDS, this);
}
readMsg = protocol.read(dataIn);
// There was some action so we can clear the monitor
((TcpConnector) connector).getKeepAliveMonitor()
.removeExpirable(this);
if (dataIn.isStreaming()) {
}
return readMsg;
} catch (SocketTimeoutException e) {
((TcpConnector) connector).getKeepAliveMonitor()
.removeExpirable(this);
System.out.println("Socket timeout");
} finally {
if (readMsg == null) {
// Protocols can return a null object, which means we're
// done
// reading messages for now and can mark the stream for
// closing later.
// Also, exceptions can be thrown, in which case we're done
// reading.
dataIn.close();
InitConnection = false;
logger.debug("Client closed");
}
}
return null;
}
}
}
And the TCP connector is as below:
<tcp:connector name="TCP" doc:name="TCP connector"
clientSoTimeout="100000" receiveBacklog="0" receiveBufferSize="0"
sendBufferSize="0" serverSoTimeout="100000" socketSoLinger="0"
validateConnections="true" keepAlive="true">
<receiver-threading-profile
maxThreadsActive="5" maxThreadsIdle="5" />
<reconnect-forever />
<service-overrides messageReceiver="TCPMuleOut" />
<tcp:direct-protocol payloadOnly="true" />
</tcp:connector>
What you're trying to do is a little difficult to accomplish, but not impossible. The messages are received by the org.mule.transport.tcp.TcpMessageReceiver class, and this class always consumes the data in the input stream to create the message that it injects into the flow.
However, you could extend that receiver and instruct the TCP module to use yours by adding a service-overrides tag in your flow's tcp connector (documented here) and replacing the messageReceiver element.
In your extended receiver you should change the TcpWorker.getNextMessage method so that it sends the ack message before reading from the input stream.
HTH, Marcos.

Timeout of basicPublish when server is out of space

My case is that the RabbitMQ server ran out of disk space, as shown below:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ramonubuntu--vg-root 6299376 5956336 0 100% /
The producer publishes a message to the server (the message needs to be persisted) and then blocks forever, waiting for the response to the publish. Of course we should avoid letting the server run out of space, but is there any timeout mechanism that lets the producer stop waiting?
I have tried heartbeats and SO_TIMEOUT; neither helps, since the network itself works fine. Below is my producer.
protected void publish(byte[] message) throws Exception {
// ConnectionFactory can be reused between threads.
ConnectionFactory factory = new SoTimeoutConnectionFactory();
factory.setHost(this.getHost());
factory.setVirtualHost("te");
factory.setPort(5672);
factory.setUsername("amqp");
factory.setPassword("amqp");
factory.setConnectionTimeout(10 * 1000);
// doesn't help if server got out of space
factory.setRequestedHeartbeat(1);
final Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
// declare a 'topic' type of exchange
channel.exchangeDeclare(this.exchangeName, "topic", true);
channel.addReturnListener(new ReturnListener() {
@Override
public void handleReturn(int replyCode, String replyText, String exchange, String routingKey,
AMQP.BasicProperties properties, byte[] body) throws IOException {
logger.warn("[X]Returned message(replyCode:" + replyCode + ",replyText:" + replyText
+ ",exchange:" + exchange + ",routingKey:" + routingKey + ",body:" + new String(body));
}
});
channel.confirmSelect();
channel.addConfirmListener(new ConfirmListener() {
@Override
public void handleAck(long deliveryTag, boolean multiple) throws IOException {
logger.info("Ack: " + deliveryTag);
// RabbitMessagePublishMain.this.release(connection);
}
@Override
public void handleNack(long deliveryTag, boolean multiple) throws IOException {
logger.info("Nack: " + deliveryTag);
// RabbitMessagePublishMain.this.release(connection);
}
});
channel.basicPublish(this.exchangeName, RabbitMessageConsumerMain.EXCHANGE_NAME + ".-1", true,
MessageProperties.PERSISTENT_BASIC, message);
channel.waitForConfirmsOrDie(10*1000);
// now we can close connection
connection.close();
}
It blocks at the channel.waitForConfirmsOrDie(10*1000) call. Here is the SoTimeoutConnectionFactory:
public class SoTimeoutConnectionFactory extends ConnectionFactory {
@Override
protected void configureSocket(Socket socket) throws IOException {
super.configureSocket(socket);
socket.setSoTimeout(10 * 1000);
}
}
I also captured the network traffic between the producer and RabbitMQ.
Please help.
You need to implement handling for the connection blocked/unblocked notifications.
This is basically a way of notifying the publisher that the server is running out of resources. The advantage with this is that the publisher will also be notified once it is safe to publish again.
I would recommend that you take a look at this article. A simple way of implementing this is to have a flag that indicates whether it is safe to publish; if it is not, wait until it is.
As an example, you can take a look at how I implemented this in one of my Python examples.
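For the Java client, the corresponding hook is a BlockedListener registered on the connection; the sketch below only outlines the flag idea described above (the flag and the placeholder publish call are illustrative, not taken from the original answer):
import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.concurrent.atomic.AtomicBoolean;

public class BlockAwarePublisher {
    // Flag indicating whether the broker currently allows publishing.
    private static final AtomicBoolean blocked = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed host
        Connection connection = factory.newConnection();
        connection.addBlockedListener(new BlockedListener() {
            @Override
            public void handleBlocked(String reason) {
                // Broker is low on a resource (e.g. disk space); stop publishing.
                blocked.set(true);
            }
            @Override
            public void handleUnblocked() {
                // Broker has recovered; publishing is safe again.
                blocked.set(false);
            }
        });
        // Before each publish, consult the flag (or wait on it) instead of
        // relying only on waitForConfirmsOrDie.
        if (!blocked.get()) {
            // channel.basicPublish(...);
        }
        connection.close();
    }
}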

what host to set for RabbitMQ send test

So I am having trouble understanding how to use RabbitMQ. I have the following Send.java class that uses RabbitMQ to send a message:
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.Channel;
public class Send {
private final static String QUEUE_NAME = "hello";
public static void main(String[] argv) throws java.io.IOException {
// create a connection to the server
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("WHAT DO I PUT HERE"); // <==========================What do I put here??
Connection connection = factory.newConnection();
// next we create a channel, which is where most of the API
// for getting things done resides
Channel channel = connection.createChannel();
// to send we must declare a queue for us to send to; then
// we can publish a message to the queue:
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
String message = "Hello World!";
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
System.out.println(" [x] Sent '" + message + "'");
channel.close();
connection.close();
}
}
I don't really understand what I should set the host to in order to make this run as a Java application. Can someone explain? Thanks!
It's really self-explanatory: this method sets the default host to use for the connection.
If you're using RabbitMQ on your local machine you can set it like this:
factory.setHost("localhost");
Or alternatively:
factory.setHost("127.0.0.1");