Message deleted from queue - ActiveMQ

I have used a BlockingQueue implementation so that my services process events from a queue. However, if the server goes down, all the events in that queue are lost and I miss events to process. (I am looking for some internal DB where the server can store the events/messages from the queue, so that when the server goes down and comes back up it can reload all events/messages to process again, without manual intervention.)
Any help on this? I am not sure whether I should use Apache ActiveMQ. I am using Apache ServiceMix.
Thanks in advance.

I cannot answer how to do this with a BlockingQueue.
But ActiveMQ has two features that you will benefit from:
persistent messages on queues, and possibly you might also want to look at durable subscriptions (for topics).
ActiveMQ has a built-in message store (KahaDB by default) that does exactly this under the hood and keeps messages persisted in the queue even if the broker or a consumer has to restart.
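For illustration, here is a minimal sketch (not from the question; the broker URL and queue name are placeholders) of a JMS producer sending a persistent message to ActiveMQ, so the broker stores it until a consumer acknowledges it:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducer {
    public static void main(String[] args) throws JMSException {
        // Placeholder broker URL and queue name.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("events");
            MessageProducer producer = session.createProducer(queue);
            // PERSISTENT is already the JMS default, but stating it makes the intent clear:
            // the broker writes the message to its store so it survives a broker restart
            // until a consumer acknowledges it.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("my event payload"));
        } finally {
            connection.close();
        }
    }
}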

Related

Multiple consumers are created after site recycle. RabbitMQ/MassTransit

I use MassTransit in my .NET web application to connect to RabbitMQ.
Sometimes after a site recycle I see lots of consumers on a single queue. I didn't set up competing consumers, and in a normal situation I should have only one consumer per queue.
When this problem happens, my messages get processed very slowly (I assume the time depends on my retry policy) and I have to shut down the site and start it again.
I use MassTransit 5.2.3 / RabbitMQ 3.7.6.
Could anyone give any clue what the problem could be?

RabbitMQ high availability queues without message replication

I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node. If I try to publish a message from the other node, it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish a message from the other node, the behaviour is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would be additional load on the inter-node network.
Is there a way available in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it is exactly what is supposed to happen. The node where you created the queue owns that queue, and if this node goes down then any connections, queues or exchanges associated with it will not work at all. There are two options to resolve this issue.
One option is to have a separate queue for every node, and any node that wants to receive messages from a particular node subscribes to that queue's exchange. This does not seem to be a very good idea, since you need to manage a lot of things for it.
The second option is to always declare the queue before you publish: if the queue is not available, a new queue takes its place, all the nodes subscribed to it can listen, and any producer node can post to it. This resolves the problem of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
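As a small illustration of the "declare before publish" pattern, here is a sketch based on the tutorial quoted above (host and queue name are the tutorial's placeholders), using the RabbitMQ Java client:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DeclareThenPublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // queueDeclare is idempotent: it creates the queue if it does not exist
            // and is a no-op when called again with the same arguments, so the
            // publish below never targets a missing queue (whose message RabbitMQ
            // would otherwise silently drop).
            channel.queueDeclare("hello", false, false, false, null);
            channel.basicPublish("", "hello", null, "Hello World!".getBytes());
        }
    }
}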
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages in the queues.
You can export the definitions of the node that owns the queue and import them into the other node of the cluster periodically. You have to enable the management plugin for this task.
More information here: https://www.rabbitmq.com/management.html#configuration
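For example, here is a hedged sketch of exporting the definitions through the management plugin's HTTP API (GET /api/definitions); the host, credentials and output file below are placeholders, and the default management port is 15672:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class ExportDefinitions {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials for the management API.
        String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/definitions"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON contains queues, exchanges, bindings, users, vhosts, permissions
        // and parameters, but no messages. Import it on the other node via
        // POST /api/definitions or through the management UI.
        Files.writeString(Path.of("definitions.json"), response.body());
    }
}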

In RabbitMQ, do we need to manage Connections and Channels in a separate thread?

I am new to the world of message queues and I am currently evaluating RabbitMQ, ActiveMQ and Kafka. I see that in RabbitMQ the producer creates a Connection to the RabbitMQ server, and the thread holding the Connection remains active until the connection is closed. This leads me to believe that there MUST be a separate thread which delivers information to the RMQ producer thread, which simply publishes the message to the queue and keeps looping until the connection to the RMQ server is closed? Is this assumption correct? Any thoughts/inputs would be appreciated.
Thanks!
P.S: This isn't the behaviour with Kafka. [ Apache Kafka: Java Producer reusability ]
In general, you should have a single RMQ connection per application instance. That connection can be opened as soon as your application starts.
Having a connection does not yet give you the ability to publish or consume messages, though.
To do that, you need to create a channel.
The general best practice is one channel per thread in your application. Need to publish messages from this thread? Create a channel for the thread. Done with publishing and not doing any other RMQ work on this channel? Close the channel.
Unlike connections, channels are cheap and easy to create. They work over the existing RMQ connection, and they take very few resources to create.
You can create thousands of channels in a single connection (though you might want to limit that number for performance reasons).
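A minimal sketch of that pattern with the RabbitMQ Java client (the host, queue name and payloads are made up): one long-lived connection for the application, and a short-lived channel per publishing thread:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ChannelPerThread {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        // One connection, opened once at application start and shared by all threads.
        Connection connection = factory.newConnection();

        Runnable publishTask = () -> {
            // Channels are cheap but not thread-safe: each thread opens its own
            // channel on the shared connection and closes it when it is done.
            try (Channel channel = connection.createChannel()) {
                channel.queueDeclare("work", false, false, false, null);
                channel.basicPublish("", "work", null,
                        ("hello from " + Thread.currentThread().getName()).getBytes());
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        Thread t1 = new Thread(publishTask);
        Thread t2 = new Thread(publishTask);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        connection.close();
    }
}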

Using Camel to transparently log messages from queue

I have a legacy application running on GlassFish which I have just recently configured to use ActiveMQ rather than OpenMQ. My ActiveMQ broker is running in a separate process outside of GlassFish. I was thinking it would be nice to configure a Camel route that logs messages as they are sent to the queue. I want to do something like this:
from("activemq:myqueue")
.to("activemq:myqueue")
.wireTap("direct:tap")
.to("log:myqueue");
I don't think that makes sense, though. What I want is for Camel to log the message transparently to the consumer. I don't want to have to change code so that the producer sends to an "inbound" queue and the consumer receives from an "outbound" queue with Camel hooking them up, since that would require changes to the legacy app. I don't think this is possible, but just wondering.
Yeah, I was about to suggest looking for a broker-side solution, as it would be the most optimized and performant. Obviously, monitoring the message flow in the broker is a common requirement, and thus ActiveMQ has features for that:
http://activemq.apache.org/mirrored-queues.html
I think I just found out how I can do what I want, with mirrored queues:
http://activemq.apache.org/mirrored-queues.html
This is a change to the broker, and not something done purely in Camel.
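If it helps, here is a hedged sketch of the Camel side once mirrored queues are enabled on the broker. It assumes the activemq component is already configured with the external broker's URL, and that the broker uses what I believe is the default mirror-topic prefix (VirtualTopic.Mirror., configurable on the mirroredQueue interceptor), so check your broker configuration before relying on that name:

import org.apache.camel.builder.RouteBuilder;

public class MirrorLoggingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume copies from the mirror topic only; the legacy producer and consumer
        // keep using "myqueue" unchanged, so the logging is transparent to them.
        from("activemq:topic:VirtualTopic.Mirror.myqueue")
            .to("log:myqueue?showBody=true");
    }
}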

Advice on disconnected messages with WCF through firewalls

All,
I'm looking for advice over the following scenario:
I have a component running in one part of the corporate network that sends messages to an application logic component for processing. These components might reside on the same server, on different servers in the same network (LAN or WAN), or live outside in the cloud. The application server should be scalable and resilient.
The messages are related, in that the sequence in which they arrive is important. They are time-stamped with the client timestamp.
My thinking is that I'll get the clients to use the WCF basicHttpBinding (some are based on .NET CF, which only has the basic binding) to send messages to the application server (this is because we can guarantee that ports 80/443 will be open for outgoing connections). The server accepts these and writes them into a queue. This queue can be scaled out over multiple machines if needed.
I'm hesitant to use MSMQ for the queue, though, because to scale out properly we would have to install separate private queues on each application server and monitor the queues round-robin. I'm concerned that we could lose a message on a server that has gone down until that server is restored, and that we could end up processing a later message from a different server and disrupt the sequence.
What I'd prefer is a central queue (e.g. a database table) that all application servers monitor.
With this in mind, what I'd like to do is create a custom WCF binding, similar to netMsmqBinding, but one that uses the DB table instead. I'm confused as to whether I can simply create a custom transport or whether I need a full binding, and whether the binding will allow the client to send over HTTP. I've looked around the internet but I'm a little confused as to where to start.
I could skip the custom WCF binding, but it seems a good way to introduce scalability if I do need to separate the servers.
Any suggestions please would be helpful, including alternatives.
Many thanks
I would start with MSMQ, because it is exactly for this purpose. Use a single transactional queue on a clustered machine and let the application servers take messages for processing from this queue. Each message's processing has to be part of a distributed transaction (MSDTC).
This scenario will ensure:
The clustered queue host will ensure that if one cluster node fails, the other will still be able to handle requests.
Sending each message as recoverable means that the message is persisted on the hard drive (not only in memory), so even in a critical failure of the whole cluster you will still have all messages.
The transactional queue will ensure that all message transport operations are atomic - moving a message from the outgoing queue to the destination queue is processed as a transaction. This means the original message is kept in the outgoing queue until an ack arrives from the destination queue. Transactional processing can also ensure in-order delivery.
The distributed transaction will allow application servers to consume messages within a transaction. A message will not be deleted from the queue until the application server commits the transaction or the transaction times out.
MSMQ is also available on .NET CF, so you can send messages directly to the queue without an intermediate, non-reliable web service layer.
It should be possible to configure MSMQ over HTTP (but I have never used it, so I'm not sure how it cooperates with the previously mentioned features).
Your proposed solution will be pretty hard. You will end up rebuilding BizTalk's MessageBox. But if you really want to do it, check Omar's post about building a database queue table.