How can I delete/remove an ActiveMQ subscriber using NMS API - activemq

I need to remove/delete my topic subscriber. I found this http://activemq.apache.org/manage-durable-subscribers.html
However, it's not good enough for us. We want to control when a subscriber is removed, regardless of whether there are any pending messages. Also, our program is written in C#, so the NMS API is the best option for us.
Thanks.
Here is the code:
Apache.NMS.ActiveMQ.ConnectionFactory factory = new Apache.NMS.ActiveMQ.ConnectionFactory(m_brokerURI);
m_connection = factory.CreateConnection(username, password);
Apache.NMS.ActiveMQ.Connection con = (Apache.NMS.ActiveMQ.Connection)m_connection;
ISession session = m_connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
try
{
    session.DeleteDurableConsumer(strQueueName);
}
catch (Exception ex)
{
    // log the error message
}
Update
Our scenario is quite simple.
A client builds a queue and subscribes a consumer to a topic.
The client side then closes the connection.
We delete the consumer on the server side (as in the example code in the last update).
Here is the snapshot of activemq broker via jconsole:
jconsole snapshot
We would like to remove the subscriber "7B0FD84D-6A2A-4921-967F-92B215E22751" with the following call,
but we always get this error: "javax.jms.InvalidDestinationException: No durable subscription exists for: 7B0FD84D-6A2A-4921-967F-92B215E22751"
strSubscriberName = "7B0FD84D-6A2A-4921-967F-92B215E22751";
session.DeleteDurableConsumer(strSubscriberName);

To delete a durable subscription via the NMS API, you use the DeleteDurableConsumer method defined on ISession. You must call this method from a connection that uses the same client Id that was used when the subscription was created, and you pass the name of the subscription that is to be removed. The method will fail if there is still an active subscriber, so be prepared for that exception.
In the sample code you don't set a client Id on the connection. When working with durable subscriptions you must, must, MUST always use the same client Id and subscription name. So in your case you will get this error until you set the client Id to the same value as the connection that created the subscription in the first place.
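For example, here is a minimal sketch (strClientId is a placeholder for whatever client Id the original subscriber used; strSubscriberName is the subscription name shown in jconsole):
// strClientId must match the client Id used when the durable subscription was created (placeholder here).
string strClientId = "original-subscriber-client-id";
string strSubscriberName = "7B0FD84D-6A2A-4921-967F-92B215E22751";

IConnectionFactory factory = new Apache.NMS.ActiveMQ.ConnectionFactory(m_brokerURI);
IConnection connection = factory.CreateConnection(username, password);
connection.ClientId = strClientId;   // must be set before Start() and must match the original subscriber
connection.Start();

using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
{
    // Pass the subscription name, not the topic or queue name.
    session.DeleteDurableConsumer(strSubscriberName);
}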

Related

How to send a message to an endpoint in Mule 4 to trigger a flow

With Mule 3 it was possible to send messages asynchronously to an endpoint using MuleClient:
MuleClient client = new MuleClient(muleContext);
client.dispatch("vm://vm.queue", "Message Payload", null);
Is there a way to migrate this functionality in Mule 4 since MuleClient has been removed?
I came across a post that suggested getting the flow by name and publishing the message to the flow as follows:
Flow flow = registry.lookupByName("MyFlow").get();
InputEvent event = new DefaultInputEvent();
event.message(Message.of(payload));
flow.execute(event);
but I get a ClassNotFoundException for the class org.mule.runtime.internal.event.DefaultInputEvent
Using Harshank's recommendation, I was able to push messages to a flow simply by getting a reference to the flow and triggering it by sending messages to its source.
Flow flow = registry.lookupByName(flowName).get();
ComponentLocation location = DefaultComponentLocation.from(flowName + "/source");
...
Message message = Message.of(payload);
CoreEvent coreEvent = CoreEvent.builder(EventContextFactory.create(flow, location)).message(message).build();
flow.process(coreEvent);
This is a much cleaner solution than what is implemented in the blog and works from beans initialized in the Spring module. As aled mentioned, this is bad practice, but in the interest of time it is a solution.

Selectively consume messages based on message body attributes in RabbitMQ

Let's say I have a situation where I need to wait for up to 1 minute for some action to be performed.
If it expires, then try a different action.
My current solution proposal is based on RabbitMQ features.
I would create the following resources:
@Bean
DirectExchange exchangeDirect() {
    return new DirectExchange("exchange.direct");
}

@Bean
Queue bufferQueue() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-message-ttl", amqpProperties.getTimeToLive().toMillis());
    args.put("x-dead-letter-exchange", "exchange.direct");
    // route expired messages to the timed.out.queue binding below
    args.put("x-dead-letter-routing-key", "timed.out.queue");
    return new Queue("buffer.queue", true, false, false, args);
}

@Bean
Queue timedOutQueue() {
    return new Queue("timed.out.queue", true);
}

@Bean
Binding bufferQueueToExchangeDirect() {
    return bind(bufferQueue())
            .to(exchangeDirect())
            .with("buffer.queue");
}

@Bean
Binding timedOutQueueToExchangeDirect() {
    return bind(timedOutQueue())
            .to(exchangeDirect())
            .with("timed.out.queue");
}
When I add an action to bufferQueue and don't receive any delivery update within 1 minute, the request is moved to timedOutQueue thanks to bufferQueue's TTL.
I can attach an application Rabbit listener to timedOutQueue and take a different action there.
When I add an action to bufferQueue and receive confirmation that the action was performed successfully, I'd like to remove that action event from bufferQueue.
I couldn't find such a feature in RabbitMQ, i.e. the ability to receive selectively.
I also found some articles saying that selective consumption is an antipattern.
Is it possible to selectively consume messages from a RabbitMQ queue?
What is the proper way to implement this pattern in RabbitMQ?
There is no concept of message selection in RabbitMQ.
The "proper" way for an application that wants to selectively receive messages is to use multiple queues/routing keys with a consumer on each specific queue he expresses interest in.
However, there is no way to "remove" a message from the middle of a queue; only the head.
When I add action to bufferQueue and I receive confirmation that action was successfully performed, I'd like to remove given action event from bufferQueue.
That makes no sense to me; when the message timed out in bufferQueue due to TTL, and was moved to timedOutQueue, it no longer exists in bufferQueue so there is nothing to remove.
There is also no mechanism to ...
"and I don't receive any delivery update within 1 minute,"
... because each message in a queue is independent.
It doesn't sound like your application is suitable for a message broker at all.

User destinations in a multi-server environment? (Spring WebSocket and RabbitMQ)

The documentation for Spring WebSockets states:
4.4.13. User Destinations
An application can send messages targeting a specific user, and Spring’s STOMP support recognizes destinations prefixed with "/user/" for this purpose. For example, a client might subscribe to the destination "/user/queue/position-updates". This destination will be handled by the UserDestinationMessageHandler and transformed into a destination unique to the user session, e.g. "/queue/position-updates-user123". This provides the convenience of subscribing to a generically named destination while at the same time ensuring no collisions with other users subscribing to the same destination so that each user can receive unique stock position updates.
Is this supposed to work in a multi-server environment with RabbitMQ as broker?
As far as I can tell, the queue name for a user is generated by appending the simpSessionId. When using the recommended client library stomp.js this results in the first user getting the queue name "/queue/position-updates-user0", the next gets "/queue/position-updates-user1" and so on.
This in turn means the first users to connect to different servers will subscribe to the same queue ("/queue/position-updates-user0").
The only reference to this I can find in the documentation is this:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
But this only makes it possible to communicate with a user from a different server than the one where the WebSocket is established.
I feel I'm missing something. Is there any way to configure Spring so that MessagingTemplate.convertAndSendToUser(principal.getName(), destination, payload) can be used safely in a multi-server environment?
If users need to be authenticated (I assume their credentials are stored in a database), you can always use their unique database user id as the thing they subscribe to.
What I do is, when a user logs in, they are automatically subscribed to two topics: an account|system topic for system-wide broadcasts and an account|<userId> topic for user-specific broadcasts.
You could try something like notification|<userid> for each person to subscribe to, then send messages to that topic and they will receive them.
Since user ids are unique to each user, you shouldn't have an issue in a clustered environment as long as each environment hits the same database information.
Here is my send method:
public static boolean send(Object msg, String topic) {
    try {
        String destination = topic;
        String payload = toJson(msg); // jsonify the message
        Message<byte[]> message = MessageBuilder.withPayload(payload.getBytes("UTF-8")).build();
        template.send(destination, message);
        return true;
    } catch (Exception ex) {
        logger.error(CommService.class.getName(), ex);
        return false;
    }
}
My destinations are preformatted, so if I want to send a message to the user with an id of 1, the destination looks something like /topic/account|1.
I've created a ping-pong controller that tests WebSockets for users who connect, to see whether their environment allows WebSockets. I don't know if this will help you, but it does work in my clustered environment.
/**
 * Play ping pong between the client and server to see if web sockets work
 * @param input the ping pong input
 * @return the return data to check for connectivity
 * @throws Exception exception
 */
@MessageMapping("/ping")
@SendToUser(value="/queue/pong", broadcast=false) // send only to the session that sent the request
public PingPong ping(PingPong input) throws Exception {
    int receivedBytes = input.getData().length;
    int pullBytes = input.getPull();
    PingPong response = input;
    if (pullBytes == 0) {
        response.setData(new byte[0]);
    } else if (pullBytes != receivedBytes) {
        // create random byte array
        byte[] data = randomService.nextBytes(pullBytes);
        response.setData(data);
    }
    return response;
}

Azure Queue, AddMessage then UpdateMessage

Is it possible to Add a message to an Azure queue then, in the same flow, update or delete that message?
The idea would be to use the queue to ensure that some work gets done - there's a worker role monitoring that queue. But, the Web role which added the message may be able to make some progress toward (and sometimes even to complete) the transaction.
The worker would already be designed to handle double-delivery and reprocessing partially handled messages (from previous, failed worker attempts) - so there isn't a technical problem here, just time inefficiency and some superfluous storage transactions.
So far it seems like adding the message allows for a delivery delay, giving the web role some time, but doesn't give back a pop receipt, which it seems we'd need in order to update/delete the message. Am I missing something?
It seems this feature was added as part of the "2016-05-31" REST API version:
"we now make pop receipt value available in the Put Message (aka Add Message) response which allows users to update/delete a message without the need to retrieve the message first."
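For illustration, a rough sketch with the classic WindowsAzure.Storage client, assuming an SDK version new enough to target that service version (the queue setup is the same as in the answer below; the message content is just a placeholder):
// Create and enqueue the message; once AddMessage returns, the SDK fills in
// message.Id and message.PopReceipt from the Put Message response.
CloudQueueMessage message = new CloudQueueMessage("work item");
queue.AddMessage(message);

// ... the web role makes some progress on the work itself ...

// Update the message in place using the pop receipt returned by AddMessage,
// without ever dequeuing it.
message.SetMessageContent("work item - partially done");
queue.UpdateMessage(message, TimeSpan.Zero, MessageUpdateFields.Content | MessageUpdateFields.Visibility);

// Or, if the web role completed the work, delete the message outright.
queue.DeleteMessage(message);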
I suggest you follow these steps, as they worked for me.
How to: Create a queue
A CloudQueueClient object lets you get reference objects for queues. The following code creates a CloudQueueClient object. All code in this guide uses a storage connection string stored in the Azure application's service configuration. There are also other ways to create a CloudStorageAccount object. See CloudStorageAccount documentation for details.
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
Use the queueClient object to get a reference to the queue you want to use. You can create the queue if it doesn't exist.
// Retrieve a reference to a queue
CloudQueue queue = queueClient.GetQueueReference("myqueue");
// Create the queue if it doesn't already exist
queue.CreateIfNotExists();
How to: Insert a message into a queue
To insert a message into an existing queue, first create a new CloudQueueMessage. Next, call the AddMessage method. A CloudQueueMessage can be created from either a string (in UTF-8 format) or a byte array. Here is code which creates a queue (if it doesn't exist) and inserts the message 'Hello, World':
// Retrieve storage account from connection string.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the queue client.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
// Retrieve a reference to a queue.
CloudQueue queue = queueClient.GetQueueReference("myqueue");
// Create the queue if it doesn't already exist.
queue.CreateIfNotExists();
// Create a message and add it to the queue.
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);
For more details, refer to this link:
http://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-queues/
Girish Prajwal

ActiveMQ: multiple consumers connected to one queue but only one consumer receives all the messages

I am currently using NMS to develop an application based on ActiveMQ (5.6).
We have several consumers (separate exes) trying to receive messages from the same queue (not a topic), but all the messages go to just one consumer, even though I make that consumer sleep for a couple of seconds after receiving each message. By the way, we don't want consumers to receive messages that other consumers have already received.
The official website mentions that we should set the prefetch limit to decide how many messages can be streamed to a consumer at any point in time, and that it can be set either through configuration or in code.
One way I tried is to set it in code, using the PrefetchPolicy class on the ConnectionFactory, as below.
PrefetchPolicy poli = new PrefetchPolicy();
poli.QueuePrefetch = 0;
ConnectionFactory fac = new ConnectionFactory("activemq:tcp://Localhost:61616?jms.prefetchPolicy.queuePrefetch=1");
fac.PrefetchPolicy = poli;
using (IConnection con = fac.CreateConnection())
{
    using (ISession se = con.CreateSession())
    {
        IDestination destination = SessionUtil.GetDestination(se, queue, DestinationType.Queue);
        using (IMessageConsumer consumer = se.CreateConsumer(destination))
        {
            con.Start();
            while (true)
            {
                ITextMessage message = consumer.Receive() as ITextMessage;
                Thread.Sleep(2000);
                if (message != null)
                {
                    Task.Factory.StartNew(() => extractAndSend(message.Text)); // do something
                }
                else
                {
                    Console.WriteLine("No message received~");
                }
            }
        }
    }
}
But no matter what prefetch value I set, the behavior of the consumers stays the same as before.
I also tried the second way, namely configuring the server's conf file. I changed the server's activemq.xml as below.
" producerFlowControl="true" memoryLimit="5mb" />
" producerFlowControl="true" memoryLimit="5mb">
But even though I've set the dispatch policy, the messages still go to one consumer.
I want to know:
Can this behavior be achieved just by configuring the server XML file so that all the consumers receive messages from one queue? If so, how should it be configured, and what is wrong with my configuration? If not, how can I achieve this in code?
Thanks.
Take a look at the "Message Groups" feature.
I had the same problem: only one consumer processed all the messages. I found that my code set the group header during send:
request.Properties["NMSXGroupID"] = "cheese";
According to official docs:
"Standard JMS header JMSXGroupID is used to define which message group the message belongs to. The Message Group feature then ensures that all messages for the same message group will be sent to the same JMS consumer - while that consumer stays alive. As soon as the consumer dies another will be chosen."
See full details at http://activemq.apache.org/message-groups.html
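As a quick illustration of the fix (a hedged sketch; session, producer, and orderId are placeholders rather than code from the original post): distributing messages across competing consumers simply means not stamping every message with the same group id.
ITextMessage request = session.CreateTextMessage("some payload");

// This is what pinned everything to a single consumer - the same group on every message:
// request.Properties["NMSXGroupID"] = "cheese";

// Either omit the header entirely, or vary it per logical group of related messages:
request.Properties["NMSXGroupID"] = orderId;   // orderId is illustrative only
producer.Send(request);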