what is/are the right WCF messaging function to use in my project? - wcf

I am a novice in WCF and I have a project that needs to be migrated to WCF-based communication, with both client/server and server-to-server architecture.
My question is: what is the right messaging function for this project to ensure security of data across the network, a reliable connection, and fast exchange of data?
I have found that WCF has numerous messaging functions.
Below is the architecture of my project:
Note: The clients should be simultaneously updated by both the data processing and feed source servers. The clients also send simultaneous requests to the servers while feeds are still being supplied by the feed source server.
I would appreciate any suggestions or comments.

My first question is why are you putting the Connection Manager (CM) component in-between your clients and the services which they want to use? What is the job it does which means it needs to be right in the middle of everything?
This ultimately means that your CM component will have to handle potentially high volumes of bi-directional traffic across potentially different transport bindings, and it introduces a single point of failure.
What if client A wants only to receive messages from the Feed Source (FS) component? Why should client A have to deal with an intermediary when it just wants to send a subscription notification to receive updates from the FS?
Equally, what if client B wants to send a message to the Data Processing (DP) component? Surely it should just be able to fire off a message to DP?
I think the majority of what you want to do with this architecture can be achieved with one-way messaging, in which case you should use netMsmqBinding (assuming you are in a pure WCF environment).
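If you go down the one-way route, here is a minimal sketch of what a queued, one-way WCF service could look like. It is only illustrative: the contract name, queue name, and payload type are mine, not from your architecture diagram, and it assumes a pre-created private, transactional MSMQ queue.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IFeedNotification
{
    // IsOneWay = true: the sender does not wait for a reply, which is what
    // queued delivery over netMsmqBinding requires.
    [OperationContract(IsOneWay = true)]
    void PublishUpdate(string feedItem);
}

public class FeedNotificationService : IFeedNotification
{
    public void PublishUpdate(string feedItem)
    {
        // In the real system this is where clients would be updated.
        Console.WriteLine("Received: " + feedItem);
    }
}

class Host
{
    static void Main()
    {
        // Assumes a private, transactional MSMQ queue named "feedupdates" already exists.
        var address = new Uri("net.msmq://localhost/private/feedupdates");
        using (var host = new ServiceHost(typeof(FeedNotificationService)))
        {
            host.AddServiceEndpoint(typeof(IFeedNotification),
                new NetMsmqBinding(NetMsmqSecurityMode.Transport), address);
            host.Open();
            Console.WriteLine("Listening on the queue; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

Because the operation is one-way, the sender returns as soon as the message is written to the queue; the service picks it up whenever it is running.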

Related

Angular 5 and Message Bus

I have a set of RESTful services that my Angular 5 client uses to perform CRUD and business operations for the application. These are a set of microservices and they use pub/sub message queues to communicate between them, e.g. when a user is created the user service publishes a UserCreated event to the message queue and subscribers can listen for this event and act upon it as required.
Now, this is all good, but I was thinking: wouldn't it be better if the Angular 5 application itself published the event to the message queue, rather than making HTTP POST/PUT/DELETE requests, and only made GET requests against the API?
So repeating the example above the Angular 5 client would publish a CreateUserEvent to the message bus (in my case cloud pub/sub), I could then have services subscribe to these events and act upon them. My RESTful services would then only expose GET /users and GET /user/:id for example.
I know that this is doable and I guess what I am describing is CQRS, but I am keen to understand if publishing events to a message bus from the UI is good practice?
The concept of a message bus is very different from that of microservices. The answer to your question probably lies in how you look at these two from an architectural perspective.
A message bus (whether backend-specific or frontend-specific) is designed to serve communication between entities within the confined boundary of one environment, i.e. the backend or the frontend.
Microservices architecture, on the other hand, is designed so that two different environments, whether backend-frontend or backend-backend, can communicate effectively.
So there is a clear separation of motivation behind the two concepts. From your viewpoint, you may use a hybrid approach which might work, and it may also lead to interesting findings related to performance, architectural design, or overheads.
Publishing directly from the client is possible, but the caveat is that the client then needs to hold the proper credentials to publish. For this reason, it may be preferable to have the service do the publishing in response to requests sent from the clients.
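To make that concrete, here is a rough sketch of the "service publishes on behalf of the client" shape. It is written with ASP.NET Web API purely for illustration (your stack may differ), and IEventPublisher, UserCreatedEvent, and CreateUserRequest are hypothetical names standing in for your Cloud Pub/Sub client code.

```csharp
using System.Net;
using System.Web.Http;

// The client only issues a normal POST; the service holds the publishing
// credentials and emits the event after validating the request.
public class UsersController : ApiController
{
    private readonly IEventPublisher _publisher;

    public UsersController(IEventPublisher publisher)
    {
        _publisher = publisher;
    }

    [HttpPost]
    public IHttpActionResult Post(CreateUserRequest request)
    {
        // Validate/authorise here, then publish on the client's behalf.
        _publisher.Publish(new UserCreatedEvent { Email = request.Email });
        return StatusCode(HttpStatusCode.Accepted);
    }
}

// Hypothetical abstractions; in practice these wrap your pub/sub client.
public interface IEventPublisher { void Publish(object @event); }
public class CreateUserRequest { public string Email { get; set; } }
public class UserCreatedEvent { public string Email { get; set; } }
```

The Angular client keeps making ordinary HTTP calls, while the credentials and the event schema stay on the server side.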

Microservices: Why Use RabbitMQ?

I haven't found an existing post asking this but apologize if I missed it.
I'm trying to get my head round microservices and have come across articles where RabbitMQ is used. I'm confused why RabbitMQ is needed. Is the intention that the services will use a web api to communicate with the outside world and RabbitMQ to communicate with each other?
In Microservices architecture you have two ways to communicate between the microservices:
Synchronous - each service calls the other microservice directly, which creates a dependency between the services
Asynchronous - you have a central hub (or message queue) where you place all requests between the microservices; the corresponding service takes a request, processes it, and returns the result to the caller. This is what RabbitMQ (or any other message queue - MSMQ and Apache Kafka are good alternatives) is used for. In this case, all microservices know only about the existence of the hub (see the sketch below).
microservices.io has some very nice articles about using microservices
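As a rough illustration of the asynchronous style, here is a minimal RabbitMQ sketch in C# (assuming the RabbitMQ.Client package and a broker on localhost; the queue name and message are made up). In reality the producer and the consumer would live in different services.

```csharp
using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Program
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "orders", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);

            // Producer side: fire-and-forget publish; the caller does not block
            // waiting for the consumer.
            var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: null, body: body);

            // Consumer side (normally a different service): handle messages as they arrive.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
                Console.WriteLine("Got: " + Encoding.UTF8.GetString(ea.Body.ToArray()));
            channel.BasicConsume(queue: "orders", autoAck: true, consumer: consumer);

            Console.ReadLine();
        }
    }
}
```

The producer never learns whether, when, or by whom the message is handled; that is the decoupling the next answer describes.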
A message queue provides an asynchronous communications protocol - you have the option to send a message from one service to another without having to know whether the other service is able to handle it immediately. Messages can wait until the responsible service is ready. A service publishing a message does not need to know anything about the inner workings of the services that will process that message. This way of handling messages decouples the producer from the consumer.
A message queue will keep the processes in your application separated and independent of each other; handling messages this way makes a system that is easier to maintain and easier to scale.
Simply put, two obvious cases can be used as examples of when message queues really shine:
For long-running processes and background jobs
As the middleman in between microservices
For long-running processes and background jobs:
When requests take a significant amount of time, that is the perfect scenario to incorporate a message queue.
Imagine a web service that handles multiple requests per second and cannot under any circumstances lose one, where the requests are handled by time-consuming processes but the system cannot afford to be bogged down. Some real-life examples could include (a worker sketch follows the list):
Image scaling
Sending large/many emails (like newsletters)
Search engine indexing
File scanning
Video encoding
Delivering notifications
PDF processing
Calculations
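For the long-running-job case, the important detail is usually to acknowledge a message only after the slow work has finished, so a crashed worker does not lose it. A hedged sketch, under the same assumptions as above (RabbitMQ.Client, a local broker, a made-up "pdf-jobs" queue):

```csharp
using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class PdfWorker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("pdf-jobs", durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);
            // Give each worker only one unacknowledged message at a time.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var job = Encoding.UTF8.GetString(ea.Body.ToArray());
                GeneratePdf(job); // the slow part: seconds or minutes
                // Ack only after the work is done; if the worker dies first,
                // the broker re-delivers the message to another worker.
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue: "pdf-jobs", autoAck: false, consumer: consumer);
            Console.ReadLine();
        }
    }

    static void GeneratePdf(string job) => Console.WriteLine("Generating PDF for " + job);
}
```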
The middleman in between microservices:
For communication and integration within and between applications, i.e. as the middleman between microservices, a message queue is also useful. Think of a system that needs to notify another part of the system to start working on a task, or that has to absorb a lot of requests coming in at the same time, as in the following scenarios:
Order handling (Order placed, update order status, send an order, payment, etc.)
Food delivery service (Place an order, prepare an order, deliver food)
Any web service that needs to handle multiple requests
Here is a story explaining how Parkster (a digital parking service) is breaking its system down into multiple microservices by using RabbitMQ.
This guide follows a scenario where a web application allows users to upload information to a web site. The site will handle this information, generate a PDF, and email it back to the user. Handling the information, generating the PDF and sending the email will in this example take several seconds, and that is one of the reasons why a message queue is used.
Here is a story about how and why CloudAMQP used message queues and RabbitMQ between microservices.
Here is a story about the usage of RabbitMQ in an event-based microservices architecture to support 100 million users a month.
And finally a link to Kontena, about why they chose RabbitMQ for their microservice architecture: "Because we needed a stable, manageable and highly-available solution for messaging.".
Please note that I work for the company behind CloudAMQP (hosting provider of RabbitMQ).
The same question could be asked of REST: why is REST necessary for microservices? The microservice concept is not something new under the moon. Distributed workflows have long been used in backend engineering for asynchronous request processing; a microservice is the same kind of component in a separate JVM, and it matches the S (single responsibility) in SOLID. What makes it a micro SERVICE is that it is balanced. And that is all! In particular, it can be a REST service built on Spring Cloud, registered with Eureka, with a proxy gateway and load balancing via Zuul and Ribbon - but that is not the whole world of microservices.

By the way, asynchronous distributed processing is one of the tasks microservices are used for. Long ago, services (components) in separate JVMs were integrated over messaging, and that pattern is known as ESB; microservices are the same kind of participants in that pattern. Because of the fashion for Spring Cloud REST, it can seem like REST is the only way to do microservices. Nope! A message-based asynchronous microservice architecture is supported by Vert.x, for example: https://dzone.com/articles/asynchronous-microservices-with-vertx. Why not use RabbitMQ as the message channel? In that case, load balancing can be provided by building a RabbitMQ cluster. For example: https://codeburst.io/using-rabbitmq-for-microservices-communication-on-docker-a43840401819. So the world is much wider.

Load balancing a room-based pub/sub application on Azure

I've got a working Silverlight/WCF application that I need to start thinking about scaling. An obvious target for scaling, of course, is Azure.
The key architectural feature of the application is that 2-10 Silverlight clients will join a given "room" (using a duplex Net.TCP connection), and any of those clients can then send a message (for instance, a chat message), which then needs to be pushed in real-time to every other client connected to the same room, using the underlying duplex WCF connection.
Right now, the way the WCF service works is basically to keep in-memory a list of sessions and the rooms that they're associated with, so that when a message from one session comes in, it can automatically send the message to every other session in the room.
This works fine for a single WCF server instance, but it gets complicated if you need to scale it so that multiple WCF instances are in play. If you use network-layer load balancing, of course, you would typically find that only some of the members of your room are on the same server you're on, which means that when it comes time to push out messages to all those members, only some of them would actually get notified.
Apart from Azure, I had been thinking that I would handle it via some sort of application-layer load balancing. For instance, the web server that each client downloads the Silverlight application from might do a primitive round-robin sort of load-balancing, i.e., "OK, everyone in room x, you use WCF instance 1. Everyone in room y, you use WCF instance 2." That sort of thing.
So I have two questions:
(1) Is there any other, better way to architect this, so as to be able to use network-layer load balancing rather than needing to make the application aware of the underlying infrastructure?
(2) If I have to do the application-layer load balancing, what's the best way to handle this in Azure? Do I have to use IaaS (full VMs), or is there a way to do this using PaaS (worker roles)? My understanding is that it's not possible to independently address worker roles, which would make a roles-based approach difficult, if not impossible.
SignalR, powered by the Azure Service Bus, may work for you.
http://vasters.com/clemensv/2012/02/13/SignalR+Powered+By+Service+Bus.aspx
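For what it's worth, here is a hedged sketch of what that could look like with SignalR 2 and the Azure Service Bus backplane (the Microsoft.AspNet.SignalR.ServiceBus package); the hub and method names are illustrative, and the connection string is a placeholder.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // The Service Bus backplane relays messages between web role instances,
        // so a client connected to instance A still receives messages
        // published on instance B.
        string connectionString = "<your Service Bus connection string>";
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "chat");
        app.MapSignalR();
    }
}

public class RoomHub : Hub
{
    public Task JoinRoom(string room)
    {
        // Rooms map naturally onto SignalR groups.
        return Groups.Add(Context.ConnectionId, room);
    }

    public void SendToRoom(string room, string message)
    {
        // Fans out to every connection in the room, on whichever instance it lives.
        Clients.Group(room).receiveMessage(message);
    }
}
```

This keeps the load balancing at the network layer: clients can land on any instance, and the backplane takes care of cross-instance fan-out, so the application no longer has to track which server holds which room.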

MSMQ between WCF services in a load balanced environment

I'm thinking of adding a queue function to a product based on a bunch of WCF services. I've read a bit about MSMQ; at first I thought that was what I needed, but I'm not sure, and I am considering just putting the queue in a database table. I wonder if someone here has some feedback on which way to go.
Basically, I'm planning to have a facade WCF service called over HTTP. The facade service should only write all incoming messages to a queue, to give a fast response to the calling system. The messages in the queue should then be processed by another component, either a WCF service or a Windows service, depending on my choice of queue.
The product is running in a load balanced environment with 2 to n web servers.
The options I'm considering and the questions I got are:
To let the facade WCF service write to an MSMQ queue and then have another WCF service read from this queue to do the processing of the messages. What I don't feel confident about with this alternative, from what I've read, is how it will work in a load balanced environment.
1A. Where should the MSMQ queue(s) be placed? One on each web server? One on a separate server? Multiple queues on a separate server? (Not considering the need for redundancy, and accepting that data in rare cases could be lost and re-sent.)
1B. How is the design affected if I want the system to be redundant? I'd like to be able to lose a server holding the MSMQ queue (it never comes back online) without losing the data in that queue. From what I've read about MSMQ, that leaves me with the only option of placing the MSMQ queue on a Windows cluster. Is that correct? (I'd like to avoid using a Windows cluster for this.)
The second design alternative is to let the facade WCF service write the queue to a database, and then have two or more Windows services do the processing of the queue. I don't have any questions on this alternative. If you wonder why I don't pick this one, as it seems simpler, it is because I'd like to build this without introducing any Windows services into the solution, because I believe MSMQ has functionality I don't want to code myself, and because I'm also curious about using MSMQ, as I've never used it before.
Best Regards
Håkan
OK, so you're not using WCF with MSMQ integration, you're using WCF to create MSMQ messages as an end-product. That simplifies things to "how do I load balance MSMQ?"
The arrangement you use is based on what works best for you.
You could have multiple webservers sending messages to a remote queue on a central machine.
Instead, you could have each web server putting messages in a local queue, with a central machine polling the queues for new arrivals.
You don't need to cluster MSMQ to make it resilient. You can instead make your code resilient so that it copes with lost messages using dead letter queues, transactional queues, journaling, and so on. Hardware clustering is the easy option :-)
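For option 1, the actual send is small either way. A hedged System.Messaging sketch (the queue paths are illustrative; a transactional queue is used so a message survives a crash between accepting the HTTP call and processing it):

```csharp
using System.Messaging;

class FacadeQueueWriter
{
    // Arrangement 1 would target a remote central queue, addressed like:
    //   FormatName:DIRECT=OS:queueserver\private$\incoming
    // Arrangement 2 (shown here) writes to a local queue on each web server,
    // which a central machine then polls.
    private const string LocalQueuePath = @".\private$\incoming";

    public void Enqueue(string payload)
    {
        if (!MessageQueue.Exists(LocalQueuePath))
            MessageQueue.Create(LocalQueuePath, transactional: true);

        using (var queue = new MessageQueue(LocalQueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            // The transactional send guarantees the message is durably stored
            // before the facade returns its fast response.
            tx.Begin();
            queue.Send(payload, tx);
            tx.Commit();
        }
    }
}
```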
Load-balancing MSMQ - a brief discussion
Oil and water - MSMQ transactional messages and load balancing
After reading some more on the subject I have decided not to use MSMQ. It seems I really have no reason to go down this road. I need this to be non-transactional, and as I understand it none of the journaling or dead letter techniques will help me with my redundancy requirement.
All my components will be online most of the time (maybe a couple of hours per year when they have access problems).
MSMQ would only add complexity to the existing solution: another technique, and maybe another server, to keep track of.
To get full redundancy and prevent data loss in MSMQ I would need a Windows cluster, or I would have to implement send/receive to multiple identical queues. I don't want to do either of those.
All this led me to front my receiving application with a WCF facade that accepts HTTP calls and writes to a database queue. This database is already protected from data loss. The queue will be polled by multiple active instances of a Windows service containing all the heavy business logic. With low process priority, these services could be hosted on the already existing nodes used by the load balanced web application. If I get time to use MSMQ, or if I need it for another reason in my application, I might change my decision.
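A rough sketch of the kind of polling loop I have in mind (the dbo.MessageQueue table and its Id/Body/Status columns are just illustrative); READPAST lets several service instances poll the same table without blocking one another:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

class QueuePoller
{
    private const string ConnectionString = "<your connection string>";

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            bool didWork = false;
            using (var conn = new SqlConnection(ConnectionString))
            {
                conn.Open();
                // Claim one pending row and return it in a single statement.
                const string sql = @"
                    UPDATE TOP (1) dbo.MessageQueue WITH (ROWLOCK, READPAST)
                    SET Status = 'Processing'
                    OUTPUT inserted.Id, inserted.Body
                    WHERE Status = 'Pending';";
                using (var cmd = new SqlCommand(sql, conn))
                using (var reader = cmd.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        Process(reader.GetInt32(0), reader.GetString(1));
                        didWork = true;
                    }
                }
            }
            if (!didWork)
                Thread.Sleep(TimeSpan.FromSeconds(1)); // nothing pending, back off
        }
    }

    private void Process(int id, string body)
    {
        // The heavy business logic goes here; afterwards mark the row Done.
        Console.WriteLine($"Processing message {id}: {body}");
    }
}
```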

API Model for Server Push Technologies (COMET)

I'm willing to add support for server-side events to CppCMS. I understand the technical part of what to do at the communication level: the client sends a long-polling XmlHTTPRequest and waits for a response; the server accepts the connection and does not respond until a server-side event occurs, then it sends the response to the client. The client repeats the procedure.
However, this is too low-level for most web developers. There are many questions: how do I manage events, how do I manage connections, and so on.
I thought about two possible models:
There are some named events defined on the server side, for example "New message in chat room no. 134". When a request is accepted, the server-side application checks the messages in the room (for example, in the DB) and, if there are no new messages for the client, it subscribes to the event and waits on it. When some other client posts data to the server, it notifies all applications of the "New message in chat room no. 134" event; they wake up and send the new messages to their clients, and so on.
This model still looks quite "low level", but it hides all the notification methods.
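A minimal sketch of this first model (shown in C# only for brevity; the CppCMS version would follow the same shape, and the class and method names are made up). Each named event has a waiter that long-polling requests await until some other request fires it:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class EventHub
{
    private readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _events =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    // Called by the long-polling handler: wait until the named event fires or times out.
    public async Task<string> WaitAsync(string eventName, TimeSpan timeout)
    {
        var tcs = _events.GetOrAdd(eventName,
            _ => new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously));
        var completed = await Task.WhenAny(tcs.Task, Task.Delay(timeout));
        return completed == tcs.Task ? tcs.Task.Result : null; // null => client should re-poll
    }

    // Called when another client posts data: wake up everyone waiting on the event.
    public void Fire(string eventName, string payload)
    {
        if (_events.TryRemove(eventName, out var tcs))
            tcs.TrySetResult(payload);
    }
}
```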
Another option is to define named queues: each client creates such a queue upon connection to the server and waits for new messages. When some client posts a new message to "Chat room no. 134", on the server side it is broadcast to all queues subscribed to "Chat room no. 134", and the message is delivered to the client.
However, this raises many questions:
How do I manage queues at the session level, or at the level of a single page?
How do I delete queues and create timeouts on them?
What happens if more than one "window" subscribes to the same queue?
Create a persistent object on the server side that glues server-side events to user-side events. It may communicate over distinct XHR requests that are redirected to it. The client (JavaScript) registers events and waits for them with XHR, and the server side dispatches event notifications until the page is rebuilt.
So, I would like to know: what are the most popular and recommended API models behind server-side push technologies?
Thanks
Edit: Added third option
You should check out XMPP PubSub, which defines a generic publish/subscribe protocol over XMPP. There's also an XMPP extension called BOSH (lower-level protocol details are documented separately in XEP-0124) that defines a mechanism that allows HTTP clients to bind to XMPP servers using long-polling (i.e., comet). Combining these two specifications gives you a robust event subscription model for web-apps using comet. Even if you don't end up using XMPP/BOSH, the specs contain some valuable insight into how this sort of system can be built.
If you do end up using XMPP and BOSH here are some tools you may find useful:
StropheJS: A library for writing client-side XMPP clients that speak BOSH.
Idavoll: A generic publish-subscribe service component for XMPP servers.
Punjab: A BOSH connection manager that acts as a sort of "translating proxy" between BOSH HTTP clients and your XMPP server.
Admittedly this is a very heavy-weight solution, and it may not be appropriate for your particular application, but a lot of thought was put into these standards so they may be helpful.
Try Bayeux; it's very much like your first model. The client subscribes to the channel "chatroom/new-message/134". If there is a new message, the server broadcasts it to the subscribers.
You can use a wildcard channel name to subscribe to multiple rooms, e.g. "chatroom/new-message/*" (trailing wildcards only).
There's no general solution that fits all applications. If you want to learn about some general patterns, have a look at Event-Driven Architectures.
There are some slides online from a presentation I attended once (it's quite a high-level view of the topic).