I would like to know how, if possible, a client app (WinForms) can send NServiceBus command A to be processed via an MSMQ queue and command B to be processed via an Azure Storage queue or Azure Service Bus. If that isn't possible, how can I work around it?
Since this question was asked, there is now a transport bridge which is specifically for this scenario: bridging messages between two different transports.
Will this help? https://docs.particular.net/samples/azure/azure-service-bus-msmq-bridge/
Common examples include:
A hybrid solution that spans endpoints deployed on-premises and in a cloud environment.
Departments within an organization integrating systems that use different messaging technologies for historical reasons.
Traditionally, such integrations would require native messaging or relaying. Bridging is an alternative that lets endpoints communicate over different transports without needing to drop down to low-level messaging technology code. Over time, when the endpoints can standardize on a single transport, the bridge can be removed with minimal impact on the overall system.
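On the client side, the WinForms app then stays on a single transport and simply routes each command to a logical endpoint; the bridge (configured separately, as in the linked sample) relays messages destined for the Azure side. A minimal sketch, assuming NServiceBus 6/7-style APIs and made-up command and endpoint names (CommandA, CommandB, "Sales", "Billing"):

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class CommandA : ICommand { }
public class CommandB : ICommand { }

public static class ClientEndpoint
{
    public static async Task SendCommands()
    {
        var endpointConfiguration = new EndpointConfiguration("WinFormsClient");

        // The client only ever talks MSMQ; the bridge moves "Billing" traffic onto Azure Service Bus.
        var transport = endpointConfiguration.UseTransport<MsmqTransport>();

        var routing = transport.Routing();
        routing.RouteToEndpoint(typeof(CommandA), "Sales");   // handled on the MSMQ side
        routing.RouteToEndpoint(typeof(CommandB), "Billing"); // bridged to the Azure Service Bus side

        var endpointInstance = await Endpoint.Start(endpointConfiguration);
        await endpointInstance.Send(new CommandA());
        await endpointInstance.Send(new CommandB());
        await endpointInstance.Stop();
    }
}
```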
I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the corollary for queuing/pub-sub? Service Fabric is allowed since it is able to run in a local container, and is "free". Other 3rd party messaging infrastructure, like RabbitMQ, are also off the table (at the moment).
I've built systems using a home-grown bus, built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I can have SF services use a custom ICommunicationListener that exposes MSMQ, but that would only be available inside the cluster (as I understand it). I could build an HTTP bridge (in SF) in front of those to make them available outside the cluster, but then I'd lose the lifetime decoupling (a client being able to call a service, via queues, even if that service isn't online at the time), since the bridge itself wouldn't benefit from any aspects of queuing.
I have a few possibilities, but all suffer from some malady that only exists because of running SF locally. Also, the same code needs to deploy easily to full Azure SF (where I can use ASB and this issue disappears), so I don't want to build two separate systems just because of where it is hosted in some instances.
Thanks for any tips.
You can build this yourself, for example like this. This uses a BrokerService that will distribute message-data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes.
By running the queue system inside the cluster you won't introduce an external dependency.
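If you go down the containerized RabbitMQ route, services inside the cluster talk to it with the ordinary RabbitMQ.Client NuGet package. A minimal publish sketch, assuming a broker reachable on localhost, a hypothetical work-items queue, and the pre-7.x client API:

```csharp
using System.Text;
using RabbitMQ.Client;

class Publisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue so messages survive a broker restart (pair with persistent messages).
            channel.QueueDeclare(queue: "work-items", durable: true,
                exclusive: false, autoDelete: false, arguments: null);

            var props = channel.CreateBasicProperties();
            props.Persistent = true;

            var body = Encoding.UTF8.GetBytes("{\"taskId\": 42}");
            channel.BasicPublish(exchange: "", routingKey: "work-items",
                basicProperties: props, body: body);
        }
    }
}
```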
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to implementations. SF runs on top of virtual machines; in the end, the only difference is that SF puts the services on those machines, whereas with another solution you would have an agent deploying the services there, or a manual deployment. The challenges are the same.
It is clear from the description that the requirement in your design is a message queue, and the concept of a queue is the same whether it is Service Bus, RabbitMQ, or MSMQ. Each of them has the basic foundations of queuing plus the specifics of its implementation: some add transactions, some implement multiple patterns, and so on.
If you design against a specific implementation, you will couple your solution to that implementation, make it hard to maintain, and face challenges like the ones you described.
Solutions like NServiceBus and MassTransit remove a lot of this coupling from your code, and if you find they are not enough, you can create your own abstraction. Then you use configuration to tie your business logic to implementations.
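If you do roll your own abstraction, it can be as small as one interface that the business code depends on, with the concrete transport chosen by configuration at composition time. A minimal, purely illustrative sketch (all names are made up):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Business code depends only on this abstraction.
public interface IMessagePublisher
{
    Task PublishAsync<T>(T message, CancellationToken cancellationToken = default);
}

public sealed class OrderPlaced
{
    public Guid OrderId { get; set; }
}

// A handler that neither knows nor cares which transport is underneath.
public sealed class OrderPlacedHandler
{
    private readonly IMessagePublisher _publisher;

    public OrderPlacedHandler(IMessagePublisher publisher) => _publisher = publisher;

    public Task HandleAsync(OrderPlaced order) => _publisher.PublishAsync(order);
}

// At composition time, configuration picks the implementation per environment, e.g.:
//   services.AddSingleton<IMessagePublisher, InClusterBrokerPublisher>();  // local SF cluster
//   services.AddSingleton<IMessagePublisher, AzureServiceBusPublisher>();  // full Azure
```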
Despite the above advice, I would not recommend using different solutions per environment because, as said previously, each solution has its own implementation details and they may not behave the same. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments and then use Service Bus in production; they have different limitations, such as message size, retention period, and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric
I have explored the web on Mule and understand that for apps to communicate among themselves - even if they are deployed in the same Mule instance - they have to use either the TCP, HTTP, or JMS transports.
VM isn't supported.
However, I find this a bit contradictory to ESB principles. We should ideally be able to define endpoints in an ESB and connect to them using any transport? I may be wrong.
Also, since all the apps share the same JVM, one would expect to be able to communicate via the in-memory VM queue rather than relying on a transactionless HTTP protocol, or on TCP, where the number of connections one can make depends on server resources. Even for JMS we need to define and manage another queue, and for heavy usage that may impact performance. Though I agree that if we have distributed and clustered systems, HTTP or JMS may be the only options.
Is there any plan to incorporate VM as an inter-app communication protocol, or is there any other way one flow can communicate with another flow's endpoint in a different app?
EDIT: Answer from MuleSoft
http://forum.mulesoft.org/mulesoft/topics/concept_of_endpoint_and_inter_app_communication
Yes, we are thinking about inter-app communication for a future release.
Still is not clear when we are going to do it but we have a couple of ideas on how we want this feature to behave. We may create a server level configuration in which you can define resources to use in all your apps. There you would be able to define a VM connector and use it to send messages between apps in the same server.
As I said, this is just an idea.
Regarding the usage of VM for inter-app communication, only MuleSoft can answer whether VM will become a feature in the future or not.
I don't think it's contradictory to ESB principles. The "container" feature is pretty well defined in chapter 6 of David A. Chappell's "Enterprise Service Bus" book. The container should try its best to keep the applications isolated.
This provides some benefits like "independently deployable integration services" (same chapter), easier clustering, and other goodies.
You should approach same-VM inter-app communications as if they were between apps placed on different servers.
It seems that Mule 3.5 added a feature to enable communication between apps deployed on the same server. However, sharing a VM connector is only available in the Enterprise edition.
Info:
http://www.mulesoft.org/documentation/display/current/Shared+Resources#SharedResources-DefiningDomains
Example:
http://blogs.mulesoft.org/optimize-resource-utilization-mule-shared-resources/
I have a system whereby a server pushes information from a central DB out to many client DBs (cross-domain via the internet), and periodically they call services on the server. This has to withstand intermittent connections, i.e., queue messages.
I've created a development version using duplex MSMQ, to which I'm now trying to apply transport security. From the reading I've done, it appears that:
MSMQ uses AD Windows Security, which is irrelevant cross-domain.
Due to the nature of duplex, each client is effectively a server as well. That means I need to pay $1200 every time I install the system with another client if I want to use SSL.
Are these facts correct? Am I really the only person who needs to secure services that are queued AND cross-domain AND duplex?
"MSMQ uses AD Windows Security, which is irrelevant cross-domain."
No, MSMQ uses Windows security which includes local accounts and, if available, domain accounts. MSMQ also uses certificates, if available.
"Due to the nature of duplex, each client is effectively a server as
well."
MSMQ doesn't use a client-server model. All MSMQ machines are effectively peers, sending messages between each other. For the $1,200 payment, are you referring to the certificate needed by the web service for sending MSMQ over HTTPS?
This is the first time I've seen anyone want to push secure messages over HTTPS to multiple destinations.
You may, in fact, be the only person in the world right now who wants to do this.
Let me embellish.
Not many companies are using MSMQ (in the grand scheme of things).
Of those that are, the vast majority are using only private queues; only a small minority use public queues.
Of those that are, only a handful are using it across the internet.
Of those that are, perhaps one is using it to exchange messages in both directions (that would be yours).
But that aside, it seems to me your main challenge will be using MSMQ as a secure transport layer over the internet. Although I have never had to do this, here are a couple of articles:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms701477(v=vs.85).aspx
http://msdn.microsoft.com/en-us/magazine/cc164041.aspx
Sorry I couldn't be of more help.
I understand to an extent that it helps applications communicate regardless of their location. Why is it important and what is an example of a real-world use of WCF?
WCF is a generic communication mechanism that allows you to set up generic client/host communication between two parties. The neat thing about WCF is that it allows you to configure service properties such as transport (HTTP/named pipes/TCP/Tibco EMS), security models (any of the W3C standards), compression, encoding, timeouts, etc., without changing ANY code. That is powerful. Best of all, you can configure it so that you can have a service in C# and a client in Java (or any other language, or the other way around), as long as they both talk using the same mechanisms.
You can create a standard HTTP SOAP web service using WCF and one day decide to switch it to use the faster named pipes for local communication. You can create web services that talk over Tibco EMS and have easy failover at the queue level. You can create a file-streaming web service that distributes all kinds of images/videos to your application.
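For instance, the same client code can target either transport just by swapping the binding and the address. A minimal sketch with a made-up IQuoteService contract (it assumes a service is actually listening at those addresses):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IQuoteService
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

class Client
{
    static void Main()
    {
        // Same contract, two transports: only the binding and the address differ.
        var httpFactory = new ChannelFactory<IQuoteService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/quotes"));

        var pipeFactory = new ChannelFactory<IQuoteService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/quotes"));

        IQuoteService proxy = httpFactory.CreateChannel(); // or pipeFactory.CreateChannel()
        Console.WriteLine(proxy.GetQuote("MSFT"));
    }
}
```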
Here is a brain dump I think might be useful for understanding the whole scenario.
Reason for Creating WCF: Background
In modern (distributed) application development, we use different architectures and technologies for communication, e.g.:
COM+
.NET Enterprise Services
MSMQ
.NET Remoting
Web Services
There are various technologies, and they all have different architectures, so learning all of them is tricky and tedious. One has to focus on each technology rather than on the application's business logic.
So Microsoft unified these capabilities into a single, common, service-oriented programming model for communication. WCF provides a common approach and a common API, so developers can focus on their application rather than on the communication protocol.
Nowadays we call it WCF.
N.B.: image from http://www.codeproject.com/Articles/255114/Windows-Communication-Foundation-Basics
What Exactly Is a WCF Service?
WCF lets you send messages asynchronously from one service endpoint to another.
The message can be as simple as a single character or word sent as XML, or as complex as a stream of binary data.
Windows Communication Foundation (WCF) supports multiple languages and platforms.
WCF provides a runtime environment for your services, enabling you to expose CLR types as services and to consume other services as CLR types.
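In practice that means a plain CLR interface plus a couple of attributes is enough to define a service. A minimal sketch (ITrafficReport is a made-up contract, echoing the traffic-report scenario below):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ITrafficReport
{
    [OperationContract]
    string GetConditions(string region);
}

// The implementing CLR type is what WCF exposes at one or more endpoints.
public class TrafficReportService : ITrafficReport
{
    public string GetConditions(string region) => $"Traffic in {region}: light";
}
```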
A few sample scenarios include:
A secure service to process business transactions.
A service that supplies current data to others, such as a traffic report or other monitoring service.
A chat service that allows two people to communicate or exchange data in real time.
A dashboard application that polls one or more services for data and presents it in a logical presentation.
Exposing a workflow implemented using Windows Workflow Foundation as a WCF service.
A Silverlight application to poll a service for the latest data feeds.
Why on Earth Should We Use WCF?
In a CodeProject article (thanks to Mehta Priya) I found the following scenarios to illustrate the concept. Let us consider two scenarios:
The first client uses a Java app to interact with our service. For interoperability, this client wants messages in XML format and the protocol to be HTTP.
The second client uses .NET, so for better performance this client wants messages in binary format and the protocol to be TCP.
Without WCF Services
Now, for the stated scenarios, if we don't use WCF, what will happen? The original article illustrates each case with a diagram: Scenario 1 (the Java client over HTTP/XML) and Scenario 2 (the .NET client over TCP/binary).
These are two different technologies with completely different programming models, so developers have to learn multiple technologies.
So, to unify and bring all these technologies under one roof, Microsoft came up with a new programming model called WCF.
How Does WCF Make Things Easy?
You implement a service once, and you can configure as many endpoints as required to support all the clients' needs.
To support the above two client requirements:
- we would configure two endpoints
- we can specify the protocols and message formats we want to use in each endpoint's configuration (see the sketch below)
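Here is a minimal self-hosting sketch of that idea, reusing the hypothetical ITrafficReport contract from the earlier snippet: one service type, two endpoints, one per client:

```csharp
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(TrafficReportService));

        // Endpoint 1: XML over HTTP for the interoperable (Java) client.
        host.AddServiceEndpoint(typeof(ITrafficReport),
            new BasicHttpBinding(), "http://localhost:8080/traffic");

        // Endpoint 2: binary over TCP for the .NET client.
        host.AddServiceEndpoint(typeof(ITrafficReport),
            new NetTcpBinding(), "net.tcp://localhost:8081/traffic");

        host.Open();
        Console.WriteLine("Service running with two endpoints. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```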
References:
WCF: What, Why and When https://vishalnayan.wordpress.com/2010/12/31/wcf-what-why-when/
Why we use WCF Service? http://www.codeproject.com/Tips/815742/Why-We-Use-WCF-Service-and-Sample-of-WCF-Service
What Is Windows Communication Foundation https://msdn.microsoft.com/en-us/library/ms731082(v=vs.110).aspx
Windows Communication Foundation Basics http://www.codeproject.com/Articles/255114/Windows-Communication-Foundation-Basics
There's little to add to the responses so far, especially the one from "siz".
One thing to add is that WCF is the current way to do web services on the .NET platform. It's not the "new" way, it's the current way. ASMX web services are the old and just barely maintained way. One Microsoft employee has publicly stated that only critical security fixes will be made to the ASMX platform, so if you intend for your services to be useful more than a year from now, don't use ASMX.
In addition to the typical "web service" use cases, WCF handles atypical cases, like binary communication over named pipes, message queues, etc. To a very large extent, the service you write to support something simple like SOAP over SSL can also support these other protocols, with no changes to the code.
To answer the "real world" bit, I'm just finishing up a dispatch system in which a Visual Basic 6.0/Access alarm receiver, a WPF/SQL Server ERP system, and an iPhone application all share information to schedule and execute jobs.
Essentially, the use case is where you want two separate applications to talk to each other somehow and their locations are unknown (could be the same machine but different application domains, the same network, or the other side of the internet).
You can easily embed it into a Windows Forms application. That was a nice thing to discover. It is so much easier than .NET Remoting too.
There are a number of reasons why it is advantageous over classic ASP.NET web services (.asmx).
A couple of these off the top of my head are:
The ability to have multiple bindings for the same service call means the message doesn't have to serialise into XML and back if you simply want to communicate inside a web farm.
The way contracts are defined is much more forgiving when it comes to multiple versions of the same contract.
I am looking at using MSMQ as a solution for asynchronous execution in my upcoming project. I want to know the differences between using WCF and frameworks like MassTransit, or even a hand-written MSMQ client, to place tasks on and read them off MSMQ.
Basically the application will be several websites (internal through the LAN or external through the Internet) reading/writing data through a service layer (be it WCF or a normal web service). This service layer will then do one or more of the following: 1. write data to the database; 2. trigger a background process by placing a message in the queue; 3. obviously, it can also retrieve data from the database. A little agent (a Windows service) on the other side of the queue will monitor the queue and execute based on the task command.
This architecture will be quite easy to scale (add more queues and agents) and easy to implement compared to RPC, distributed execution, or whatever. The agent processing doesn't need to be real time, and the agent and service layer are separate applications, except that they share common domain objects, repositories, etc.
What do you think? Architecture suggestions for the above requirements are welcomed. Thank you!
WCF adds an abstraction over MSMQ. In fact, once you define compatible contracts (operations must be OneWay), you can swap out MSMQ in the config, transparently. (For instance, you could switch to a normal HTTP web service binding or a NetTcp binding.)
You should evaluate the other WCF benefits, like security and so on, to see how those fit in with your needs. Again, they should be reasonably transparent of the fact you're using MSMQ underneath. For instance, adding SOAP security and so on should "just work", independent of using MSMQ.
(Although, IIRC, you still need to log in to the desktop on each machine that uses MSMQ, with the service account that will use MSMQ, to generate the certificate in the machine's local profile. And then it doesn't work very well from IIS 6, since user profiles aren't loaded. A real pain in general, but nothing to do with WCF specifically.)
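To make the one-way/MSMQ combination concrete, here is a rough hosting sketch (the queue path, contract, and type names are hypothetical, and the private transactional queue must already exist):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IJobSubmission
{
    // One-way operations are what make the contract compatible with queued bindings.
    [OperationContract(IsOneWay = true)]
    void SubmitJob(string jobPayload);
}

public class JobSubmissionService : IJobSubmission
{
    public void SubmitJob(string jobPayload) => Console.WriteLine($"Processing: {jobPayload}");
}

class Host
{
    static void Main()
    {
        var host = new ServiceHost(typeof(JobSubmissionService));
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);

        host.AddServiceEndpoint(typeof(IJobSubmission), binding,
            "net.msmq://localhost/private/jobs");

        host.Open();
        Console.WriteLine("Listening on the 'jobs' queue. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```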
Apart from that:
Have you looked at SQL Server Service Broker? After using MSMQ + WCF and SSSB, I think that SSSB is vastly easier to configure and manage. SSSB works with T-SQL commands over any SQL client (I use it from Mono, on Linux, with transactions). It'll also give you transactional send/receive, even remotely (I think MSMQ 4 now allows this). It really takes a lot of the pain away from message queuing, and if you're using SQL Server already...
SSSB is often overlooked since the SQL Management Studio doesn't have GUI designers for it all, but it isn't hard and is a great option. The one downside is that if you want local send capability (i.e., queue message when network is down), you'll need to run a local SQL Express instance.
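For a flavor of the "T-SQL commands over any SQL client" point, here is a rough sketch of sending a message from C#; all the Service Broker object names are hypothetical, and the one-time DDL is only shown in the comments:

```csharp
using System.Data.SqlClient;

// One-time setup (run once against the database), roughly:
//   CREATE MESSAGE TYPE TaskMessage VALIDATION = WELL_FORMED_XML;
//   CREATE CONTRACT TaskContract (TaskMessage SENT BY INITIATOR);
//   CREATE QUEUE TaskQueue;      CREATE QUEUE ReplyQueue;
//   CREATE SERVICE TaskService      ON QUEUE TaskQueue  (TaskContract);
//   CREATE SERVICE InitiatorService ON QUEUE ReplyQueue (TaskContract);
public static class ServiceBrokerSender
{
    public static void SendTask(string connectionString, string xmlBody)
    {
        const string sql = @"
            DECLARE @handle UNIQUEIDENTIFIER;
            BEGIN DIALOG CONVERSATION @handle
                FROM SERVICE [InitiatorService]
                TO SERVICE 'TaskService'
                ON CONTRACT [TaskContract]
                WITH ENCRYPTION = OFF;
            SEND ON CONVERSATION @handle MESSAGE TYPE [TaskMessage] (@body);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@body", xmlBody);
            connection.Open();
            command.ExecuteNonQuery(); // wrap in a SqlTransaction for transactional sends
        }
    }
}
```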
Your architecture seems sound and reasonable. However, you should consider using the WCF NetMsmq transport over hand-coded MSMQ classes; WCF wraps this common functionality in a nice programming model. Also, I believe there are some improvements in the protocol used by WCF compared to basic System.Messaging.
Have a look at the value-add over plain MSMQ:
http://readthedocs.org/docs/masstransit/en/latest/overview/valueadd.html
In summary, with MassTransit you get a lot of messaging concepts clearly presented in the API, to an extent you wouldn't get if you hand-coded it or used WCF.
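For a feel of how those concepts surface in code, here is a minimal MassTransit consumer sketch (the message and consumer types are made up; it uses the in-memory transport since MSMQ support depends on which MassTransit version you pick):

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

public class ProcessTask
{
    public int TaskId { get; set; }
}

// Consumers are plain classes; retries, faults, etc. are layered on by the framework.
public class ProcessTaskConsumer : IConsumer<ProcessTask>
{
    public Task Consume(ConsumeContext<ProcessTask> context)
    {
        Console.WriteLine($"Processing task {context.Message.TaskId}");
        return Task.CompletedTask;
    }
}

class Program
{
    static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingInMemory(cfg =>
        {
            cfg.ReceiveEndpoint("task-queue", e => e.Consumer<ProcessTaskConsumer>());
        });

        await bus.StartAsync();
        await bus.Publish(new ProcessTask { TaskId = 42 });

        await Task.Delay(500); // give the consumer a moment in this toy example
        await bus.StopAsync();
    }
}
```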