I have a database with a number of queues that will contain messages from a 3rd party product. I would like to import these messages onto my Bus for processing, and I believe I can achieve this with NServiceBus, but I would like to host all of the message importing within a single Windows Service that is configured with the database queues to monitor.
The processing steps are as follows:
1) Import onto the Bus
2) Transform into a Bus message
3) Send the Bus message
Each NServiceBus host would be configured to poll its database queue periodically. When a message arrives it would perform a Bus.SendLocal to carry out step 1.
The NSB host would then process it with a message handler. Within this message handler the transformation of the message would occur. Finally, the actual Bus message would be sent. The usual configuration would deal with the destination host.
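Roughly, I imagine the handler for steps 2 and 3 looking something like the sketch below (NServiceBus 3/4 style API; all type names are just illustrative):

    using NServiceBus;

    // Wrapper for a raw row read from the 3rd party database queue (what step 1 puts on the Bus)
    public class ImportedDbMessage : IMessage
    {
        public string Payload { get; set; }
    }

    // The actual Bus message produced by the transformation
    public class ThirdPartyEventOccurred : IMessage
    {
        public string Body { get; set; }
    }

    // Runs inside the same endpoint, picking up the Bus.SendLocal from step 1
    public class ImportedDbMessageHandler : IHandleMessages<ImportedDbMessage>
    {
        public IBus Bus { get; set; } // injected by NServiceBus

        public void Handle(ImportedDbMessage message)
        {
            // Step 2: transform the raw payload into the real Bus message
            var transformed = new ThirdPartyEventOccurred { Body = message.Payload };

            // Step 3: send it on; the usual endpoint mapping config decides the destination
            Bus.Send(transformed);
        }
    }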
I would like to understand whether multiple NSB hosts can be placed within a single Windows Service, and whether there are any issues associated with this. I believe that all hosts would need to share the same configuration (I am happy with this restriction) - is that correct?
If multiple hosts are a 'no-no', my alternative is to have a Windows Service with a Bus reference (singleton). A TPL task would monitor the database queue and then use the Bus to import the database message. A separate NServiceBus handler would handle the imported database messages and perform the transformation and sending to other hosts.
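A minimal sketch of that alternative, reusing the ImportedDbMessage type from the sketch above (the polling interval and database access are just placeholders):

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using NServiceBus;

    // Owned by the Windows Service; 'bus' is the single shared Bus instance
    public class DbQueueImporter
    {
        private readonly IBus bus;

        public DbQueueImporter(IBus bus)
        {
            this.bus = bus;
        }

        public Task Start(CancellationToken token)
        {
            return Task.Run(() =>
            {
                while (!token.IsCancellationRequested)
                {
                    // Placeholder for the query against the 3rd party database queue
                    var payload = ReadNextDbMessage();
                    if (payload != null)
                        bus.SendLocal(new ImportedDbMessage { Payload = payload });
                    else
                        Thread.Sleep(TimeSpan.FromSeconds(5)); // poll interval
                }
            }, token);
        }

        private string ReadNextDbMessage()
        {
            return null; // database access omitted
        }
    }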
Sorry about the length of the question.
You should be able to use a Satellite to perform those kinds of DB queries and then forward onto the bus.
What do you mean by "hosts"? Do you mean can one endpoint handle many different message types?
You can handle as many different message types as you want in a single host. The only restriction is that they will share the same queue, which means that all message types will be given the same priority (which is only a problem in very specific cases).
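For example, a single endpoint can simply register one handler per message type, and all of them feed off the same input queue (a sketch with made-up message types):

    using NServiceBus;

    public class MessageA : IMessage { }
    public class MessageB : IMessage { }

    // Both handlers are hosted in the same endpoint and share its single input queue,
    // so MessageA and MessageB are processed with the same priority.
    public class MessageAHandler : IHandleMessages<MessageA>
    {
        public void Handle(MessageA message) { /* handle A */ }
    }

    public class MessageBHandler : IHandleMessages<MessageB>
    {
        public void Handle(MessageB message) { /* handle B */ }
    }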
I'm working with a product suite which uses RabbitMQ as a back end for service bus messaging. Many of the clients use software (NeuronESB) which is supposed to automatically configure exchanges, queues and channels as needed. Somewhere in the system exchanges in Rabbit are being deleted and not re-created, resulting in unexpected issues. Because of the size of the system and closed source nature of at least one of the service bus clients, an audit of code has been unsuccessful in determining the source of the deletion of these exchanges.
I have tried using the firehose functionality of Rabbit, but that only provides the messages being sent through Rabbit, not the internal activities I need.
What methods are available for logging the creation and deletion of exchanges in RabbitMQ? Ideally I would like to know the date, time and client IP of the deleter, but even just getting the date and time would allow me to narrow my search of logs to help find the offender.
Try the Event Exchange plugin (rabbitmq_event_exchange); that should do the trick.
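For example, once the plugin is enabled, a small consumer along these lines (RabbitMQ .NET client; only a sketch) can log every exchange.deleted event - the details arrive as message headers:

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class EventExchangeAudit
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                // The plugin publishes internal broker events to the topic exchange amq.rabbitmq.event
                var queueName = channel.QueueDeclare().QueueName; // server-named, exclusive, auto-delete
                channel.QueueBind(queueName, "amq.rabbitmq.event", "exchange.deleted");

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (sender, ea) =>
                {
                    Console.WriteLine($"{DateTime.UtcNow:o} exchange deleted");
                    // Event details (exchange name, vhost, user, ...) are carried in the headers
                    foreach (var header in ea.BasicProperties.Headers)
                    {
                        var value = header.Value is byte[] bytes ? Encoding.UTF8.GetString(bytes) : header.Value;
                        Console.WriteLine($"  {header.Key} = {value}");
                    }
                };
                channel.BasicConsume(queueName, true, consumer); // auto-ack
                Console.ReadLine();
            }
        }
    }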
If that doesn't work for some reason, the last resort I can think of is this:
Set up a test environment with fewer clients/messages if your app is busy, then analyse the traffic with Wireshark (it understands AMQP) and filter for the requests that delete the exchange.
I have an application with RabbitMQ at the backend. I want to develop custom 3rd-party analysis code which connects to the application's queues on RabbitMQ and collects data. My issue is that I want to be sure that neither the application nor my code loses any data from RabbitMQ.
If this is possible, how can I configure the RabbitMQ queues? I have administrative access on RabbitMQ.
I hope it's not an issue with the producer's code, because I don't have access to the application code.
Thanks for your help
Change the current exchange/queue mapping to allow for message replication
At the moment we can simplify things by saying that the existing producer sends a message to the existing exchange, which routes the message to some queue, from which the messages are currently consumed:
[producer-app] ---> existing-exchange ---> existing-queue ---> [existing-consumer]
Now, what you want is the following design, with the new consumer consuming the same messages:
[producer-app] ---> existing-exchange ---> existing-queue ---> [existing-consumer]
                                    \--> new-queue --------> [your-consumer]
You might need to change the configuration of existing-exchange to allow replication of your messages - for example, both direct and fanout exchanges will deliver a copy of the same message to each of the bound queues (for a direct exchange, each queue bound with the matching routing key).
Depending on your application this might be quite easy to do without changes to the producer, but you need to be aware of some possible pitfalls (a sketch of the new binding follows this list):
the producer might re-declare exchanges/queues/bindings from time to time, and throw exceptions if the current state cannot be changed to match its request (this might happen if you change the exchange's type)
you need to manage new-queue on your own (preferably from your consumer artifact), as it is going to receive all the messages; if your consumer shuts down, the queue will not disappear unless it is made exclusive or has a TTL set
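To wire up the new-queue branch from the diagram above, something like this should be enough (RabbitMQ .NET client; a sketch assuming existing-exchange is, or can be changed to, a fanout exchange):

    using RabbitMQ.Client;

    class NewQueueSetup
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                // Durable and non-exclusive, so messages survive while your consumer is down
                channel.QueueDeclare("new-queue", durable: true, exclusive: false,
                                     autoDelete: false, arguments: null);

                // With a fanout exchange the routing key is ignored and every bound queue
                // (existing-queue and new-queue) gets its own copy of each message
                channel.QueueBind("new-queue", "existing-exchange", routingKey: "");
            }
        }
    }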
I have NServiceBus running in a single process. I would like to send Message A but receive only Message B. However, I think that because Message Endpoint Mappings are used for both sending and receiving, the process is trying to handle both messages, A and B. Is there any way around this issue? Both messages go onto the same queue, because of the NSB limitation that a single process can only listen to a single queue.
You could use two different AppDomains.
The statement
a single process can only listen to a single queue
is not actually correct. It is more correct to say
a single AppDomain can only listen to a single queue
Since you can have multiple AppDomains per process, you can have a single process listening to multiple queues.
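A rough sketch of the idea (the actual NServiceBus bootstrap inside each AppDomain is omitted, and all names are just illustrative):

    using System;

    // Runs inside its own AppDomain; it would configure and start a bus
    // listening on the queue passed in (NServiceBus setup omitted here).
    public class EndpointBootstrapper : MarshalByRefObject
    {
        public void Start(string inputQueue)
        {
            // configure and start this AppDomain's bus against inputQueue
        }
    }

    public static class Program
    {
        public static void Main()
        {
            foreach (var queue in new[] { "endpointA", "endpointB" })
            {
                // One AppDomain per queue, all inside the same process
                var domain = AppDomain.CreateDomain(queue);
                var bootstrapper = (EndpointBootstrapper)domain.CreateInstanceAndUnwrap(
                    typeof(EndpointBootstrapper).Assembly.FullName,
                    typeof(EndpointBootstrapper).FullName);
                bootstrapper.Start(queue);
            }

            Console.ReadLine();
        }
    }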
I'm interacting with ActiveMQ via STOMP. I have one application which publishes messages and a second application which subscribes and processes the messages.
If I am writing messages to a queue I can be certain that, if I have two consumers, each message will only be processed once (because when a message is completed it is removed from the queue) - but is this functionality available from a topic?
For example: I have a third application which is a logger. I want the logger to receive every message the publisher emits, but I also want exactly one of the two (or three or four etc…) processors to receive each message too.
Is this possible?
EDIT
It occurs to me that a good way of doing this would be to have a topic which the publisher writes to, and a queue which the processors listen to, with something pushing every message from the topic onto the queue. Can ActiveMQ do this internally?
You can do this internally in ActiveMQ using Mirrored Queues and also use Virtual Topics for some other advanced routing semantics. If you want to have the option of other EIP type messaging patterns then I'd recommend you look into Apache Camel which provides a whole host of EIP pattern functionality.
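As an illustration of the Virtual Topics naming convention (shown here with the Apache.NMS.ActiveMQ client rather than STOMP; over STOMP the equivalent destinations would be /topic/VirtualTopic.Orders and /queue/Consumer.Logger.VirtualTopic.Orders, and all names are just examples):

    using Apache.NMS;
    using Apache.NMS.ActiveMQ;

    class VirtualTopicSketch
    {
        static void Main()
        {
            var factory = new ConnectionFactory("tcp://localhost:61616");
            using (var connection = factory.CreateConnection())
            using (var session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
            {
                connection.Start();

                // The publisher writes to the virtual topic; the broker copies each
                // message into one physical queue per consumer group
                var topic = session.GetTopic("VirtualTopic.Orders");
                using (var producer = session.CreateProducer(topic))
                    producer.Send(session.CreateTextMessage("order-123"));

                // The logger gets its own queue, so it sees every message
                var loggerConsumer = session.CreateConsumer(
                    session.GetQueue("Consumer.Logger.VirtualTopic.Orders"));

                // All processors share this queue, so each message goes to exactly one of them
                var processorConsumer = session.CreateConsumer(
                    session.GetQueue("Consumer.Processors.VirtualTopic.Orders"));
            }
        }
    }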
Is it possible to publish a message from one logical service that is deployed to two physical locations?
What would the config file look like?
You cannot add the same message mapping to your subscriber twice, but you would have to if you wanted to subscribe to two queues.
Yes, it's very possible. We're doing it right now. The trick is to have either a shared or replicated subscription store. Here's how it works:
1) The subscription request (as defined in your subscriber's application configuration file) is sent to an endpoint of the publisher.
2) The publisher adds the request to its subscription store, which is often a relational database.
3) If the database is shared/replicated, all publisher endpoints will know about the new subscriber.
4) All publisher endpoints will be able to publish, and the subscriber will be able to receive the desired message.
That is exactly what the DB subscription storage is meant to solve. Just configure both physical publishers to share the same subscription database and you should be fine. Then have your subscribers subscribe to one of them.
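On the subscriber side, the relevant part of the app.config might look roughly like this (a sketch using MSMQ-style queue@machine addresses; all names are just examples) - the message types are mapped to whichever of the two physical publisher instances you choose to subscribe through:

    <UnicastBusConfig>
      <MessageEndpointMappings>
        <!-- subscribe via one of the two physical publisher instances -->
        <add Messages="MyMessages" Endpoint="MyPublisherInputQueue@PublisherMachineA" />
      </MessageEndpointMappings>
    </UnicastBusConfig>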
I believe this is not possible. Anyway, you can use some kind of dispatcher in the middle.
The publisher sends the message directly to the dispatcher using IBus.Send(), which in turn publishes using IBus.Publish().