MassTransit with RabbitMQ: recovering the error queue - rabbitmq

This is probably a very simple answer, but I'm not seeing an obvious solution in the MassTransit docs or forums.
When you have some messages that have been moved over to the error queue in RabbitMQ, what's the best mechanism for getting them back into the processing queue? Also, is there any built-in logging of why they got moved over there in the first place?

Enable logging with the right plugin (NLog, log4net, etc.) and failures should be in the log, assuming the right log level is enabled.
There is no great way to move messages back. Dru has worked on a BusDriver tool: https://github.com/MassTransit/MassTransit/tree/master/src/Tools/BusDriver. This, I believe, will allow you to move items from one queue to another, but it's not a tool I've used. Historically, I have written business-process-specific tools, managed by ops, to move items back to the proper queue for processing.
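The "move items from one queue to another" idea above is essentially a shovel: consume each message from the error queue and republish it to the processing queue. A minimal sketch of that pattern, using plain Python lists as stand-ins for the two RabbitMQ queues (a real tool would use a client library and acknowledge each message only after the republish succeeds):

```python
# Hypothetical sketch of the "shovel" pattern: drain messages from an error
# queue back onto the processing queue. Lists stand in for real queues here.

def shovel(error_queue, processing_queue, max_messages=None):
    """Move messages from error_queue to processing_queue; return the count."""
    moved = 0
    while error_queue and (max_messages is None or moved < max_messages):
        message = error_queue.pop(0)      # remove/ack from the error queue
        processing_queue.append(message)  # republish to the original queue
        moved += 1
    return moved

error_q = [{"body": "order-42", "reason": "timeout"},
           {"body": "order-43", "reason": "handler threw"}]
work_q = []

moved = shovel(error_q, work_q)
print(moved, len(error_q), len(work_q))  # 2 0 2
```

In a real implementation the order of operations matters: republish first, then acknowledge the original, so a crash mid-shovel duplicates a message rather than losing it.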

Related

NServiceBus ServiceInsight - Monitor Multiple Error and Audit

I have a couple questions regarding ServiceInsight that I was hoping someone could shed some light on.
Can I monitor multiple error queues and audit queues? If so, how do I configure it to monitor those queues?
I understand that messages processed in the error queue are moved to the error.log queue. What happens to the messages processed in the audit queue, i.e. where do they go after the management service processes them?
Where are the messages ultimately stored by the management process, i.e. are they stored in RavenDB, and if so, under what database?
In addition, how do I remove or delete message conversations in the endpoint explorer. For example, let’s say I just want to clear everything out.
Any additional insight (no pun intended) you can provide regarding the management and use of insight would be greatly appreciated.
Question: Can I monitor multiple error queues and audit queues? If so, how do I configure it to monitor those queues?
Answer: ServiceInsight receives its data from a management service (AKA "ServiceControl") that collects its data from audit (and error) queues. A single instance of ServiceControl can connect to a single audit queue and a single error queue (in a single transport type). If you install multiple ServiceControl instances that collect auditing and error data from multiple queues, you can use ServiceInsight to connect to each of the ServiceControl instances. Currently (in beta) ServiceInsight supports one connection at a time, but you can easily switch between connections or open multiple instances of ServiceInsight, each connecting to a different ServiceControl instance.
Question: I understand that messages processed in the error queue are moved to the error.log queue. What happens to the messages processed in the audit queue, i.e. where do they go after the management service processes them?
Answer: Audit messages are consumed, processed, and stored in the ServiceControl instance's auditing database (RavenDB).
Question: Where are the messages ultimately stored by the management process, i.e. are they stored in RavenDB and if so under what database.
Answer: Yes, they are stored (by default) in the embedded RavenDB database that is used by the management service (AKA "ServiceControl"). You can locate it under "C:\ProgramData\Particular\ServiceBus.Management"
Question: In addition, how do I remove or delete message conversations in the endpoint explorer. For example, let’s say I just want to clear everything out.
Answer: We will be adding full purge/delete support for this purpose in an upcoming beta update. For immediate purging of old messages, you can use the RavenDB studio at the path specified above.
Please let me know whether these answers address your questions, and do not hesitate to raise any other questions you may have!
Best regards,
Danny Cohen
Particular Software (NServiceBus Ltd.)

Need advice on suitable message queue for Storm spout

I'm developing a prototype Lambda system and my data is streaming in via Flume to HDFS. I also need to get the data into Storm. Flume is a push system and Storm is more pull so I don't believe it's wise to try to connect a spout to Flume, but rather I think there should be a message queue between the two. Again this is a prototype, so I'm looking for best practices, not perfection. I'm thinking of putting an AMQP compliant queue as a Flume sink and then pulling the messages from a spout.
Is this a good approach? If so, I want to use a message queue that has relatively robust support in both the Flume world (as a sink) and the Storm world (as a spout). If I go AMQP then I assume that gives me the option to use whatever AMQP-compliant queue I want to use, correct? Thanks.
If you're going to use AMQP, I'd recommend sticking to the finalized 1.0 version of the AMQP spec. Otherwise, you're going to feel some pain when you try to upgrade to it from previous versions.
Your approach makes a lot of sense, but for us the AMQP-compliance issue looked a little less important. I will try to explain why.
We are using Kafka to get data into Storm, mainly for reasons of performance and usability. AMQP-compliant queues do not seem to be designed for holding information for a considerable time, while with Kafka this is just a matter of configuration. That lets us keep messages for a long time and "play them back" easily: because the position to consume from is always controlled by the consumer, we can consume the same messages again and again without setting up an entire system for that purpose. Also, Kafka's performance is incomparable to anything else I have seen.
Storm has a very useful KafkaSpout, in which the main things to pay attention to are:
Error reporting - there is some improvement to be done there. Messages are not as clear as one would have hoped.
It depends on ZooKeeper (which is already there if you have Storm), and a path must be created for it manually.
Depending on the Storm version, pay attention to the Kafka version in use. It is documented, but it can easily be missed and cause unclear problems.
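The "playback" property described above comes from Kafka's design: the broker keeps an append-only log, and each consumer merely tracks its own offset into it, so rewinding the offset replays messages without any server-side requeueing. A toy in-memory model of that idea (illustrative names, not the real Kafka client API):

```python
# Toy model of consumer-controlled offsets, the mechanism behind Kafka replay.

class Log:
    """Append-only message log, standing in for a Kafka topic partition."""
    def __init__(self):
        self.entries = []

    def append(self, msg):
        self.entries.append(msg)

class Consumer:
    """Tracks its own read position; the log itself never changes on read."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        batch = self.log.entries[self.offset:]
        self.offset = len(self.log.entries)
        return batch

    def seek(self, offset):
        self.offset = offset  # rewinding here is all "playback" takes

log = Log()
for m in ("a", "b", "c"):
    log.append(m)

c = Consumer(log)
first = c.poll()      # reads a, b, c
c.seek(0)             # rewind
replayed = c.poll()   # same messages again, no requeueing needed
print(first == replayed)  # True
```

Contrast this with AMQP-style brokers, where a delivered-and-acknowledged message is gone from the queue and replay requires republishing it.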
You can have the data streamed to a broker topic first. Then flume and storm spout can both consume from that topic. Flume has a jms source that makes it easy to consume from the message broker. And a storm jms spout to get the messages into storm.

NServiceBus and DTC's

I'm using the Distributor in NServiceBus and I've hit a wall of ignorance regarding DTCs.
I've only used DTC once, maybe twice, before when doing stuff across processes, and not a lot, so I'm very much a newbie with the whole DTC concept.
Question:
To ensure durable messaging with NSB, is it absolutely necessary to use DTCs?
The reason I ask is that I would expect NSB to be able to detect any exception in, say, a handler, and react to the error by not removing the message from the queue; hence no need for DTC. That would of course mean that any database or external service access in the handler would require the programmer to perform his or her own rollbacks, and for that reason DTCs do seem like the best way to go. So I'm all for DTCs (if I understand them right), as from my perspective they ensure that messages are never lost from the queues and that message handling is never corrupted, as long as the handlers are implemented correctly and have other external services participate in the DTC.
But I'm not sure, especially since a well-respected guy on the server maintenance team said "DTCs will cause you a world of pain!" when I ran the idea of activating DTC on the database server by him. He has yet to come up with an argument for why I'm in for so much pain with DTCs, though... :/
Could someone with a good understanding of DTCs and NSB please help me clarify whether I'm completely off in my understanding of DTCs, and whether there's some big pitfall I have completely missed?
Kind regards
The NServiceBus distributor and the use of DTC in NServiceBus don't have anything to do with one another. DTC will be used by NServiceBus whether you're using the distributor or not.
NSB distributor workers (and even the individual worker threads on a single box when the NSB distributor isn't used) don't enlist one another in distributed transactions. Let me reiterate: you will never see two NSB worker threads in a single DTC transaction. Each worker thread starts a transaction against a local queue and then adds a (likely remote) database to the transaction, which makes it distributed.
There's a nice illustration of the concept here
I don't think you're missing any big pitfalls. I'd just decouple the two concepts: the NSB distributor, and how distributed transactions are used by NSB.
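The core pattern described in the question and answer above, a message removed from the queue only when the handler commits, can be sketched without any real DTC. In this toy version (in-memory list, illustrative names), a handler exception leaves the message on the queue for retry, which is exactly the behavior a transactional receive gives you:

```python
# Sketch of transactional receive: commit removes the message, an exception
# "rolls back" by leaving it in place. No real queue or DTC involved.

def process_one(queue, handler):
    """Handle the head of the queue; remove it only if the handler succeeds."""
    if not queue:
        return False
    message = queue[0]           # peek; do not remove yet
    try:
        handler(message)
    except Exception:
        return False             # rollback: message stays on the queue
    queue.pop(0)                 # commit: remove only after success
    return True

queue = ["good", "bad", "good"]

def handler(msg):
    if msg == "bad":
        raise RuntimeError("handler failed")

process_one(queue, handler)       # consumes "good"
ok = process_one(queue, handler)  # "bad" fails and is NOT removed
print(ok, queue[0])               # False bad
```

What DTC adds on top of this sketch is atomicity across resources: the queue receive and the database write either both commit or both roll back, so the handler's own work can't be half-applied when the message is retried.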

Is there a way to see what subscriptions exist currently for NServiceBus

I am concerned with my NServiceBus solution.
I have a "MessageHub" that publishes some very important messages. But sometimes it loses track of its subscriptions and just discards the message because it thinks no one is listening.
I have tried turning on "NServiceBus.Integration" to store the subscriptions. But despite that, I still have issues with bad start up order where it thinks nothing is listening.
Is there a way to debug this process? Try to figure out why it is getting confused?
I don't even know a way to look at what subscriptions it "thinks" it has...
I went with NServiceBus because it is not supposed to lose data, ever. Now I am losing large chunks. I know it is a config issue, but it is causing much grief.
What is probably happening in your case is that you are using MSMQ for subscription storage. Even though it's possible for subscriptions to endure for a while, using MSMQ to store things long term is always going to be volatile.
For durable subscriptions storage (which survive "forever") you should be using SQL server as your subscription storage.
Note: You can always view your current subscriptions whether you are using SQL or MSMQ to store them. In SQL, just look in the subscriptions table; for MSMQ, look in the publisher's subscription queue.
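For the SQL case, "look in the subscriptions table" is just an ordinary query. The exact table name and columns depend on your NServiceBus version and storage configuration, so this sketch uses sqlite with an illustrative schema rather than the real database:

```python
# Hedged sketch of inspecting SQL subscription storage. Table and column
# names here are illustrative; check your actual schema before querying.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Subscription (
        SubscriberEndpoint TEXT,
        MessageType        TEXT
    )""")
conn.executemany(
    "INSERT INTO Subscription VALUES (?, ?)",
    [("Billing", "OrderPlaced"), ("Shipping", "OrderPlaced")])

rows = conn.execute(
    "SELECT SubscriberEndpoint, MessageType FROM Subscription "
    "ORDER BY SubscriberEndpoint").fetchall()
for endpoint, message_type in rows:
    print(endpoint, "->", message_type)
```

If a message type shows no rows at publish time, the publisher will behave exactly as the question describes: it thinks no one is listening and discards the event.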
UPDATE
Since version 3 I have been using RavenDB, which is the default.
In my experience, to get the subscriptions assigned correctly, one should first start the EventHandler projects and then, when they are all idle, start the CommandHandlers (publishers).
You can see what messages are being subscribed to using Service Bus MQ Manager; it has a dialog listing all messages and their subscribers/publishers. It's a side project of mine, free and open source.
http://blog.halan.se/page/Service-Bus-MQ-Manager.aspx

Should I use MSMQ or SQL Service Broker for transactions?

I've been asked by my team leader to investigate MSMQ as an option for the new version of our product. We use SQL Service Broker in our current version. I've done my fair share of experimentation and Googling to find which product is better for my needs, but I thought I'd ask the best site I know for programming answers.
Some details:
Our client is .NET 1.1 and 2.0 code; this is where the message will be sent from.
The target in a SQL Server 2005 instance. All messages end up being database updates or inserts.
We will send several updates that must be treated as a transaction.
We have to have perfect message recoverability; no messages can be lost.
We have to be asynchronous and able to accept messages even when the target SQL server is down.
Developing our own queuing solution isn't an option; we're a small team.
Things I've discovered so far:
Both MSMQ and SQL Service Broker can do the job.
It appears that service broker is faster for transactional messages.
Service Broker requires a SQL server running somewhere, whereas MSMQ needs any configured Windows machine running somewhere.
MSMQ appears to be better/faster/easier to set up/run in clusters.
Am I missing something? Is there a clear winner here? Any thoughts, experiences, or links would be valued. Thank you!
EDIT: We ended up sticking with service broker because we have a custom DB framework used in some of our client code (we handle transactions better). That code captured SQL for transactions, but not . The client code was also all version 1.1 of .NET, so we'd have to upgrade all the client code. Thanks for your help!
Having just migrated my application from Service Broker to MSMQ, I would have to vote for using MSMQ. There are several factors to take into account, but most of which have to do with how you are using your data and where the processing lives.
Is processing done in the database? Service Broker
Is it just moving data? Service Broker
Is processing done in .NET/COM code? MSMQ
Do you need remote distributed transactions (for example, processing on a box different than SQL)? MSMQ
Do you need to be able to send messages while the destination is down? MSMQ
Do you want to use nServiceBus, MassTransit, Rhino-ESB, etc.? MSMQ
Things to consider no matter what you choose
How do you know the health of your queue? Both options handle failover differently. For example, Service Broker will disable your queue in certain scenarios, which can take down your application.
How will you perform reporting? If you already use SQL tables in your reports, Service Broker can easily fit in, as it's just another dynamic table. If you are already using Performance Monitor, MSMQ may fit in more nicely. Service Broker does have a lot of performance counters, though, so don't let this be your only factor.
How do you measure uptime? Is it merely making sure you don't lose transactions, or do you need to respond synchronously? I find that the distributed nature of MSMQ allows for higher uptime because the main queue can go offline and not lose anything. Whereas with Service Broker your database must be online or else you lose out.
Do you already have experience with one of these technologies? Both have a lot of implementation details that can come back and bite you.
No matter what choice you make, how easy is it to switch out the underlying queueing technology? I recommend having a generic IQueue interface that you write a concrete implementation against. This way, the choice you make can easily be changed later if you find that you made the wrong one. After all, a queue is just a queue and should not lock you into a specific implementation.
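The generic-interface suggestion above can be sketched as an abstract queue that application code depends on, with concrete backends (MSMQ, Service Broker, in-memory for tests) swapped in behind it. This is an illustrative shape, not any real library's API:

```python
# Sketch of the suggested generic queue interface. Backends are swappable;
# the in-memory variant is enough to show the pattern and test against.
from abc import ABC, abstractmethod
from collections import deque

class Queue(ABC):
    @abstractmethod
    def send(self, message):
        """Enqueue a message."""

    @abstractmethod
    def receive(self):
        """Return the next message, or None if the queue is empty."""

class InMemoryQueue(Queue):
    def __init__(self):
        self._items = deque()

    def send(self, message):
        self._items.append(message)

    def receive(self):
        return self._items.popleft() if self._items else None

# Application code sees only the abstract interface:
def drain(queue: Queue):
    out = []
    while (msg := queue.receive()) is not None:
        out.append(msg)
    return out

q = InMemoryQueue()
q.send("update-1")
q.send("update-2")
result = drain(q)
print(result)  # ['update-1', 'update-2']
```

Swapping MSMQ for Service Broker then means writing one new `Queue` subclass, not touching every call site.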
I've used MSMQ before, and the only item I'd add to your list is a prerequisite check for versioning. I ran into an issue where one site had Win 2000 Server, and therefore MSMQ v2, versus Win 2003 Server and MSMQ v3. All my .NET code targeted v3, and they aren't compatible... or at least not easily so.
Just a consideration if you go the MSMQ route.
The message size limitation in MSMQ has halted my digging in that direction. I am learning Service Broker for the project.
Do you need to be able to send messages while the destination is down? MSMQ
I don't understand why. SSB can send messages to a disconnected destination without any problem. All these messages go to the transmission queue and will be delivered once the destination becomes reachable.