Archiving Windows Server Service Bus messages

The MSDN documentation for the BrokeredMessage.Complete method (http://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.brokeredmessage.complete.aspx) describes the method as follows: "Completes the receive operation of a message and indicates that the message should be marked as processed and deleted or archived."
In my use of this method I've only ever seen the message deleted once it is processed. This is the only mention I've found, in the MSDN documentation, blogs, or anywhere else, of Service Bus being capable of archiving old messages.
I could archive the message myself as part of my code that reads and processes a message and then marks it complete. But is it possible to make Windows Server Service Bus archive completed messages for me? If so, how do you turn on and configure this feature?
In case the difference matters, I am using the locally hosted Windows Server Service Bus, not the Azure version.

No, Service Bus doesn't archive your messages. I'm going to follow up with the documentation folks on what that sentence was supposed to express.
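Since there is no built-in archiving, the manual approach described in the question is the way to go. Below is a minimal sketch, assuming the Microsoft.ServiceBus.Messaging receive API; the archiveStore object is a hypothetical stand-in for whatever persistence you choose:

    // Receive, archive, then complete. If archiving fails, abandon the
    // message so Service Bus makes it available for redelivery.
    BrokeredMessage message = receiver.Receive(); // receiver: a MessageReceiver for the queue
    if (message != null)
    {
        try
        {
            using (Stream body = message.GetBody<Stream>())
            {
                archiveStore.Save(message.MessageId, body); // hypothetical: e.g. a database or blob store
            }
            message.Complete(); // deletes the message from the queue
        }
        catch
        {
            message.Abandon(); // unlock the message for another receive attempt
            throw;
        }
    }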

Related

How to use NServiceBus with MSMQ

I am experimenting with the new version of NServiceBus. I found the following step-by-step sample on the Particular site.
https://docs.particular.net/samples/step-by-step/
Can anyone tell me how to configure MSMQ as the transport? Here is my scenario.
Client creates a message
The client's message should be stored in MSMQ
A server application running on the same machine subscribes to the message
The server handler gets the message from MSMQ and processes it further, i.e. stores it in a DB or sends it to another web service
Retry processing the message if it did not work the first time
After 3 retries, send the message to the error queue
How do I configure this sample to use MSMQ for my scenario?
Product name: NServiceBus.Core
Version: 6.3.4
Did you know that we have released a LearningTransport and LearningPersistence just for purposes like these? Have a look at it here.
Having said that, transport swapping should be rather seamless, so even if you have set up a small PoC using this transport/persistence, you can change it to MSMQ or another production-ready transport/persistence when you go live.
Again, as stated in the documentation page and as the name suggests, this is not for use in production.
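For the scenario above, a minimal sketch of an MSMQ endpoint configuration, assuming the NServiceBus 6 API (the endpoint name, persistence choice, and retry counts are illustrative):

    var endpointConfiguration = new EndpointConfiguration("Samples.StepByStep.Server");

    // Swap the LearningTransport for MSMQ when moving beyond the PoC.
    endpointConfiguration.UseTransport<MsmqTransport>();
    endpointConfiguration.UsePersistence<InMemoryPersistence>(); // pick a durable persistence for production

    // Retry the handler a few times in-process, then move the message
    // to the error queue.
    var recoverability = endpointConfiguration.Recoverability();
    recoverability.Immediate(immediate => immediate.NumberOfRetries(3));
    recoverability.Delayed(delayed => delayed.NumberOfRetries(0));
    endpointConfiguration.SendFailedMessagesTo("error");

    var endpointInstance = await Endpoint.Start(endpointConfiguration);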
I would recommend you walk through this:
https://docs.particular.net/tutorials/intro-to-nservicebus/
It will answer your questions, and future ones you may have.

NServiceBus - What happens to a message if the server is offline

I went through the NServiceBus documentation, including the durable messaging page. What I understand is that when the server is offline, messages continue to go into the server's input queue and get picked up when the server comes back online.
But what if the server is completely down and the input queue is not accessible?
I'm using Bus.Send from the client.
It depends on what transport you're using.
In the case of a brokered message queue, like Azure Service Bus, as long as that service is available, the fact that the machine that will eventually retrieve the messages is offline is irrelevant, as that machine is simply asking the external queuing service for messages. The same goes for a transport like SQL Server.
In the case of a transport like MSMQ, which is a store-and-forward style queue, the messages will remain in a local outgoing queue until the remote machine becomes available.
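A small System.Messaging sketch of that store-and-forward behavior; the machine and queue names are illustrative:

    using System.Messaging;

    // Sending to a remote private queue succeeds even while the remote
    // machine is down: MSMQ parks the message in a local outgoing queue
    // and forwards it once the target becomes reachable.
    var queue = new MessageQueue(@"FormatName:DIRECT=OS:remote-server\private$\orders");
    using (var tx = new MessageQueueTransaction())
    {
        tx.Begin();
        queue.Send("hello", tx);
        tx.Commit(); // returns immediately; delivery happens asynchronously
    }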
Can you double-check that you are looking in the correct spot? If you aren't getting an error out of NServiceBus when you Send, then MSMQ is installed. If it can't be reached or the service is stopped, you should get errors.
The Outbound queues are in a different place as illustrated here:
http://blogs.msdn.com/cfs-filesystemfile.ashx/__key/communityserver-components-postattachments/00-09-06-31-16/outgoingempty.JPG
As RMD indicated, this is an advantage of the store-and-forward MSMQ transport: the local outbound queue should just stack these up until the remote server is available.
Thanks,
Joe

NServiceBus - subscriber input queue different from endpoint name

I'm hosting NServiceBus in my own application to act as a subscriber.
I have 4 projects in the solution:
1. Contracts - declares the event interfaces
2. Host - class library with an API to start the bus
3. Handlers - where the event handlers are implemented
4. Console application to run it all
I see that the endpoint name is set correctly according to the console application name, which is what I want, and the queues are created accordingly.
I successfully subscribe to the publisher events.
The problem:
When the publisher tries to send a message to the subscriber, it tries to send to a queue named after the event handlers' namespace rather than the endpoint name.
The exception I get is that the publisher could not find the subscriber's input queue.
Just as a sanity check, I manually created an input queue named after the handlers' namespace, and indeed I started to receive the events.
So, is this a bug in NServiceBus or have I missed something very crucial?
Thanks....
I found the problem and it was mine...
The publisher still had old subscribers in its RavenDB, so it tried to publish the events to queues that no longer existed...
To make my life easier, I configured the subscriptions to be stored using MSMQ.
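For reference, a sketch of that switch, assuming the NServiceBus 5-era configuration API (older versions used the Configure.MsmqSubscriptionStorage() extension instead):

    var busConfiguration = new BusConfiguration();
    // Store subscriptions in a local MSMQ queue instead of RavenDB, so
    // stale subscription documents can't linger in a database.
    busConfiguration.UsePersistence<MsmqPersistence>();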
I had a similar issue. I renamed my endpoints but when calling Publish() it was still trying to send to the old queues. I went to localhost:8080 (RavenDB) and deleted all documents and databases but still had the same issues. Restarting the RavenDB service solved the issue, so it must cache them in memory or something.

Can't read from a remote transactional private queue using WCF in workgroup mode (works using System.Messaging!)

I have spent days reading MSDN, forums, and articles about this, and cannot find a solution to my problem.
As a PoC, I need to consume a queue from more than one machine since I need fault tolerance on the consumer side. Performance is not an issue since fewer than 100 messages a day should be exchanged.
I have coded two trivial console applications, one as client, the other as server, using .NET Framework 4.0 (also tested on 3.5). Messages use transactions.
Everything runs fines on a single machine (Windows 7), even when running multiple consumers application instance.
Now I have 2012 and 2008 R2 virtual test servers running in the same domain (but I don't want to use AD integration anyway). I am using IP addresses or "." in the endpoint address attribute to prevent DNS / AD resolution side effects.
Everything works fine if the queue is hosted by the consumer and the producer submits messages to the remote private queue. This is also true if I exchange the consumer / producer roles of the 2012 and 2008 servers.
But I have NEVER been able to make this run, using WCF, when the consumer reads from the remote queue and the producer submits messages locally. Submission never fails; my problem is on the consumer side.
My wish is to make this run using netMsmqBinding, but I also tried using msmqIntegrationBinding. For each test, I adapted code and configuration, then confirmed this was running ok when the consumer was consuming from the local queue.
The last test I did used WCF (msmqIntegrationBinding) only on the producer (local queue) and System.Messaging.MessageQueue on the consumer (remote queue): it works fine! => My goal is to make the same work using WCF and netMsmqBinding on both sides.
From my point of view, I have proved this is a WCF issue, not an MSMQ one. This has nothing to do with security, authentication, firewall, transport, protocol, MSMQ version, etc.
Errors info using MS Service Trace Viewer :
Using msmqIntegrationBinding when receiving the message (opening the queue was OK): An error occurred while receiving a message from the queue: The transaction specified cannot be imported. (-1072824242, 0xc00e004e). Ensure that MSMQ is installed and running. Make sure the queue is available to receive from.
Using netMsmqBinding, on opening the queue : An error occurred when converting the '172.22.1.9\private$\Test' queue path name to the format name: The queue path name specified is invalid. (-1072824300, 0xc00e0014). All operations on the queued channel failed. Ensure that the queue address is valid. MSMQ must be installed with Active Directory integration enabled and access to it is available.
If someone can help me find why my configuration cannot be handled by WCF, a more elegant and configurable way than System.Messaging, I would greatly appreciate it!
Thank you.
You may need to post your consumer code and config to give more of an idea, but it could be the construction of the queue name, e.g.:
FormatName:DIRECT=TCP:192.168.0.2\SomeQueue
There are several different ways to connect to a queue and it changes when you are remote or local as well.
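Note that netMsmqBinding in particular does not accept MSMQ format names at all; its endpoint addresses are net.msmq URIs, which may explain the path-name error above. A sketch of a consumer-side host in workgroup mode (the queue name and service types are illustrative):

    using System;
    using System.ServiceModel;

    // Workgroup mode: disable AD-based security so WCF doesn't try to
    // resolve the queue through Active Directory.
    var binding = new NetMsmqBinding(NetMsmqSecurityMode.None)
    {
        ExactlyOnce = true // transactional queue
    };

    // net.msmq URI, not a format name; "private" maps to private$.
    var address = new Uri("net.msmq://172.22.1.9/private/Test");

    var host = new ServiceHost(typeof(OrderService)); // OrderService: illustrative implementation of IOrderService
    host.AddServiceEndpoint(typeof(IOrderService), binding, address);
    host.Open();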
I have found this article in the past to help:
http://blogs.msdn.com/b/johnbreakwell/archive/2009/02/26/difference-between-path-name-and-format-name-when-accessing-msmq-queues.aspx
Also, MessageQueue Constructor on MSDN...
http://msdn.microsoft.com/en-us/library/ch1d814t.aspx

Windows Azure Queues, WCF, MSMQ integration

I have a scenario where I need a desktop console app to communicate with a Windows Azure Queue... the most important thing is that the message is received by the server eventually. Also, the desktop app may be disconnected from the Internet sometimes. In the traditional WCF+MSMQ approach you'd be able to send a message which would be cached in MSMQ until MSMQ could reach the Server's MSMQ and send the message. What's the equivalent when Windows Azure is the server-side?
Is it possible for the same approach to be used, where MSMQ just communicates with a Windows Azure Queue rather than an MSMQ on a Windows Server?
Maybe Windows Azure Queue is the wrong approach? I have heard about something called message buffer, but don't know what this is (yet!).
Thanks for your help,
Kris
You could write an MSMQ listener service that finishes moving the message to the Azure queue when the connection to the internet has been reestablished. I don't think this would be too difficult.
Update
Perhaps my answer wasn't clear. Based on the question, the client is only occasionally connected to the internet, so you need a way to park the message until the intertubes get untangled. Using Windows, the easiest way to do this is to put the message in a local MSMQ queue. You then have a service monitoring that queue. If there is a message and the service can reach the service hosted in the cloud, it sends the message. Once the message has been sent, it can be deleted from the queue.
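A minimal sketch of such a forwarder, assuming the classic Microsoft.WindowsAzure.Storage SDK; the queue names and connection string are illustrative:

    using System;
    using System.Messaging;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;

    // Drain a local transactional MSMQ queue into an Azure queue whenever
    // connectivity allows. The MSMQ transaction only commits after the
    // cloud has accepted the message, so nothing is lost if the upload fails.
    var local = new MessageQueue(@".\private$\outbox");
    local.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

    CloudQueue remote = CloudStorageAccount.Parse(connectionString) // connectionString: illustrative
        .CreateCloudQueueClient()
        .GetQueueReference("inbox");
    remote.CreateIfNotExists();

    while (true)
    {
        try
        {
            using (var tx = new MessageQueueTransaction())
            {
                tx.Begin();
                Message msg = local.Receive(TimeSpan.FromSeconds(5), tx);
                remote.AddMessage(new CloudQueueMessage((string)msg.Body));
                tx.Commit();
            }
        }
        catch (MessageQueueException e) when (e.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
        {
            // Queue empty; poll again.
        }
    }

If AddMessage throws because the connection is still down, the transaction is disposed without committing and the message simply stays in the local outbox.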
In order to queue a message to Azure Queue Storage you have to be connected to the Internet. If you want to handle disconnected scenarios, that is totally up to you. I would keep the solution very simple and use a local storage such as SQL Server Compact and then send the messages as soon as there's connectivity, maybe with the aid of a Windows Service (so that you don't need to run the desktop app).
You can do this with the Azure AppFabric Service Bus Message Buffers - there is no need to use a Queue. Check out the related sample downloads on the following site: http://www.idesign.net/idesign/DesktopDefault.aspx?tabindex=5&tabid=11 - they should answer your questions much better than I can.
Regards