How to build an ActiveMQ client application?

I am very new to ActiveMQ, to be exact a first-timer. I have built a Java application which runs an ActiveMQ broker and subscribes to a couple of topics. Now I want to build a client application which will publish messages to some of those topics. But I am not sure how to get the URL of the already running ActiveMQ broker and publish messages from my client application.

Look at http://activemq.apache.org/run-broker.html and at the Java samples at http://activemq.apache.org/version-5-examples.html

Java client for ActiveMQ:
TopicSubscriber.java to subscribe for messages
TopicPublisher.java to publish messages
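For reference, connecting a publisher to an already running broker only requires the broker URL. Below is a minimal sketch using the ActiveMQ JMS client, assuming the broker listens on the default tcp://localhost:61616 transport connector; MY.TOPIC is a placeholder topic name:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SimpleTopicPublisher {
    public static void main(String[] args) throws JMSException {
        // Default ActiveMQ transport connector; change to your broker's URL
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // MY.TOPIC is a placeholder; use the topic your broker application subscribes to
        MessageProducer producer =
                session.createProducer(session.createTopic("MY.TOPIC"));
        producer.send(session.createTextMessage("hello from the client"));
        connection.close();
    }
}
```

The broker URL is whatever transport connector the broker was started with; tcp://host:61616 is the usual default.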

Related

ActiveMQ Bridge Configuration for Solace

I have a requirement to create a bridge between a queue in ActiveMQ and a queue in Solace. Whenever there is a message in the ActiveMQ queue, it should automatically get transferred to the Solace queue.
I'm struggling to find the steps for this configuration. I have ActiveMQ installed on my local machine. Could you please shed some light on this?
FYI: I'm very new to ActiveMQ/Solace.
I think you'll need to do this via a JMS bridge, since both ActiveMQ and Solace support JMS. Check out ActiveMQ's docs here: https://activemq.apache.org/jms-to-jms-bridge
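The bridge is configured in activemq.xml. A rough sketch of the shape it takes is below; the queue names are placeholders, and the referenced solaceConnectionFactory bean would have to be defined separately using the Solace JMS client (see the Solace docs for the factory class and its connection properties):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localBroker">
  <jmsBridgeConnectors>
    <!-- forward messages arriving on the local queue to the Solace queue -->
    <jmsQueueConnector outboundQueueConnectionFactory="#solaceConnectionFactory">
      <outboundQueueBridges>
        <outboundQueueBridge localQueueName="LOCAL.QUEUE"
                             outboundQueueName="SOLACE.QUEUE"/>
      </outboundQueueBridges>
    </jmsQueueConnector>
  </jmsBridgeConnectors>
</broker>
```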

Is there some way to run a Solace queue locally, as with ActiveMQ?

Below is an explanation of what I mean:
I have a microservice built on Spring Integration that consumes MQ messages, so to test it locally I run activemq-all.jar, start my microservice, and push messages into a localhost queue with Hermes JMS.
So the question: is there some way I can do the same thing with Solace queues and topics?
You could run a Solace broker in Docker locally on your machine to run your tests against.
Have a look at the docs here to help you get started:
https://docs.solace.com/Solace-SW-Broker-Set-Up/Docker-Containers/Set-Up-Docker-Container-Image.htm
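Once Docker is installed, starting a local broker is a one-liner. This sketch follows the pattern in the page linked above; check that page for the current image name, ports, and recommended settings:

```
docker run -d -p 8080:8080 -p 55555:55555 --shm-size=2g \
  --env username_admin_globalaccesslevel=admin \
  --env username_admin_password=admin \
  --name=solace solace/solace-pubsub-standard
```

Port 55555 is the SMF messaging port your microservice would connect to, and port 8080 serves the management UI.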

RabbitMQ Manual Retry

How can manual retry work in RabbitMQ after a message has been put onto a dead-letter queue?
Does RabbitMQ provide a user interface through which you can do this? I assume here that the RabbitMQ console does not provide this capability.
The RabbitMQ management interface lets you do this crudely: go into the dead-letter queue, 'get' the message, and copy its content. Then go to the queue you want to retry the message on and 'publish' it directly to that queue.
Alternatively, you can enable the shovel plugin, which allows you to move messages from one queue to another. The RabbitMQ management plugin contains instructions on how to do this.
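For example, a dynamic shovel can drain the dead-letter queue back into the original queue. The queue names here are placeholders, and the shovel plugins must be enabled first:

```
rabbitmq-plugins enable rabbitmq_shovel rabbitmq_shovel_management
rabbitmqctl set_parameter shovel retry-shovel \
  '{"src-uri": "amqp://", "src-queue": "my-queue.dlq", "dest-uri": "amqp://", "dest-queue": "my-queue"}'
```

Note that a shovel moves everything in the source queue, so it suits bulk retries rather than retrying a single message.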
You can also write a consumer/producer using any of a number of client libraries. For Python, a popular library is pika (https://pypi.python.org/pypi/pika).
The script can consume all the messages in a queue, then publish them to another queue.
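The same idea in Java, using the RabbitMQ Java client: drain the dead-letter queue and republish each message to the original queue. This is a minimal sketch; the queue names are placeholders and error handling is omitted:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class DeadLetterRequeue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local broker with default credentials
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            GetResponse response;
            // basicGet returns null once the queue is empty
            while ((response = channel.basicGet("my-queue.dlq", false)) != null) {
                // republish to the default exchange, routed by queue name,
                // keeping the original properties and body
                channel.basicPublish("", "my-queue",
                        response.getProps(), response.getBody());
                // ack the dead-lettered copy only after the republish succeeds
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
        }
    }
}
```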

JMS queue not synchronized in instances of GlassFish cluster

I have a problem using message-driven beans in clustered GlassFish 3.1.1. The problem is with the queue in GlassFish: the queue is not synchronized between the instances. I will try my best to explain the scenario below.
I created 2 instances in a GlassFish cluster, created a JMS QueueConnectionFactory, and created a JMS queue, both targeted to the cluster. Then I deployed the web application and the MessageDrivenBean module to the cluster. The web application sends a TextMessage to the JMS queue.
Everything works well here: the message is sent to the queue and served by the message-driven beans in both instances.
Then I disable the MessageDrivenBean module and request the web application, which sends messages to the JMS queue in both instances. Then I shut down myInstance2 and re-deploy the MDB to the cluster. Now here is the problem: the MessageDrivenBean only receives the messages of myInstance1, and not the messages sent to the myInstance2 queue. The messages in the queue of myInstance2 are only served when myInstance2 is started again. Can anyone help me with the settings GlassFish uses to synchronize the queue across instances, so that when one instance is down and there are messages in that instance's queue, the other instance will take over and serve them?
I am using OpenMQ with GlassFish 3.1.1, and I have turned on the HA (high availability) option in GlassFish, but it still does not work.
Thanks
The high-availability options for GlassFish and the high-availability options for Message Queue are configured separately. You need to configure your message queue cluster to be an "enhanced cluster" rather than a "conventional cluster". This is described in the GlassFish 3.1 High Availability Administration Guide.
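As a sketch of what that looks like, the cluster type is set with the asadmin configure-jms-cluster subcommand before the cluster is first started. The flags below follow the pattern in that guide, with placeholder database values; enhanced clusters need a shared JDBC store, so check the guide for the exact options for your setup:

```
asadmin configure-jms-cluster --clustertype=enhanced \
    --configstoretype=jdbc --messagestoretype=jdbc \
    --dbvendor=mysql --dbuser=mquser \
    --dburl=jdbc:mysql://dbhost:3306/mqdb \
    --passwordfile=/path/to/passwordfile myCluster
```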

Deploying java client, RabbitMQ, and Celery to server

I have a Java API on my server, and I want it to create tasks and add them to Celery via RabbitMQ. I followed this tutorial, http://www.rabbitmq.com/tutorials/tutorial-two-python.html, using Java for the client (send.java) and Python to receive (receive.py). In receive.py, where the callback method is invoked, I call a method decorated with @celery.task so that the task is added to Celery.
I'm wondering how all of this is deployed on a server, though; specifically, why there is a receive.py file. Is receive.py a process that must run continually on the server? Is there a way to configure RabbitMQ so that it automatically routes Java client tasks to Celery?
Thanks!
RabbitMQ is just a message queue: producers put messages in, and consumers get them on demand. You can only restrict access to specific queues via RabbitMQ's auth options.
As for deployment: yes, receive.py needs to run continuously. It is Celery's job to do that. See the Workers Guide for info on running a worker.
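For reference, a Celery worker is started from the CLI; assuming your tasks live in a module named tasks (a placeholder):

```
celery -A tasks worker --loglevel=info
```

The worker stays running and consumes task messages from the broker; in production it is typically kept alive with a process supervisor such as systemd or supervisord.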