WebApi in .NET Core using RabbitMQ - asp.net-core

I am very new to the RabbitMQ world and microservices architecture. I've watched some YouTube tutorials on how to use RabbitMQ, but there are some questions I would like to ask.
1.) If the client posts data to my API controller and my controller publishes the data to a queue, what is the proper way to respond to the POST request while the data is still being processed by RabbitMQ? In the tutorial, it just returns "Ok" (HTTP 200) even though processing hasn't completed yet.
2.) Can a consumer subscribe to more than one queue? If yes, is there a configuration sample for Startup.cs?
3.) Is there any sample of using RabbitMQ in .NET Core for "real world" cases? Please let me know.
Thanks

It's perfectly okay to return 200. That's just one of the many trade-offs of a microservices architecture. From a performance perspective, it's efficient to return 200 early and propagate the changes asynchronously through the rest of the distributed system. On the other hand, it adds another kind of complexity that you need to embrace: eventual consistency. That concept describes exactly what you asked about. Say your client receives a 200 but then immediately calls another microservice; it may not see the changes introduced by the previous request, because there is a chance those changes haven't propagated yet. You need to decide whether that is acceptable in your project or not. If not, maybe you should redesign how you split your business domain into microservices, grouping entities that are transactionally close to each other in order to mitigate such problems. If you really can't tolerate eventual consistency, maybe you should give up on microservices for this particular project.
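Not part of the original answer, but to make the flow concrete, here is a minimal sketch of the "publish and return early" pattern using the RabbitMQ.Client package (6.x API). The queue name, the OrderDto type, and the injected connection registration are illustrative assumptions:

```csharp
using System.Text;
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;
using RabbitMQ.Client;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IConnection _connection;   // registered as a singleton elsewhere (e.g. in Startup.cs)

    public OrdersController(IConnection connection) => _connection = connection;

    [HttpPost]
    public IActionResult Post(OrderDto order)
    {
        using var channel = _connection.CreateModel();
        channel.QueueDeclare("orders", durable: true, exclusive: false, autoDelete: false, arguments: null);

        var props = channel.CreateBasicProperties();
        props.Persistent = true;
        var body = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(order));

        channel.BasicPublish(exchange: "", routingKey: "orders", basicProperties: props, body: body);

        // The work is only queued at this point; a consumer will process it eventually.
        return Accepted();   // or Ok(), as in the tutorial you watched
    }
}

public record OrderDto(int ProductId, int Quantity);
```

Returning 202 Accepted instead of 200 is a common way to signal "received but not yet processed", but as the answer above says, plain 200 is fine too.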
Yes, it can. You could, for example, create an implementation of IHostedService for each queue you listen to and run them alongside your ASP.NET Core app by registering them in Startup.cs; a minimal sketch is shown below.
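This is a hedged sketch rather than the answerer's exact setup; it uses a single BackgroundService (which implements IHostedService) subscribing to two queues over one channel, again with the RabbitMQ.Client 6.x API. The queue names and the "localhost" broker address are assumptions, and you could just as well split this into one hosted service per queue as suggested above:

```csharp
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class RabbitConsumersService : BackgroundService
{
    private IConnection _connection;
    private IModel _channel;

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };   // assumed broker address
        _connection = factory.CreateConnection();
        _channel = _connection.CreateModel();

        Subscribe("orders",   msg => { /* handle an order message */ });
        Subscribe("invoices", msg => { /* handle an invoice message */ });

        return Task.CompletedTask;   // the consumers keep receiving in the background
    }

    private void Subscribe(string queue, Action<string> handle)
    {
        _channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false, arguments: null);
        var consumer = new EventingBasicConsumer(_channel);
        consumer.Received += (_, ea) =>
        {
            handle(Encoding.UTF8.GetString(ea.Body.ToArray()));
            _channel.BasicAck(ea.DeliveryTag, multiple: false);
        };
        _channel.BasicConsume(queue: queue, autoAck: false, consumer: consumer);
    }

    public override void Dispose()
    {
        _channel?.Dispose();
        _connection?.Dispose();
        base.Dispose();
    }
}

// Startup.cs -> ConfigureServices:
// services.AddHostedService<RabbitConsumersService>();
```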
You'll find this in the repository linked below. They use RabbitMQ. Although there's a bit of abstraction, which can make it harder to grasp, it's a great implementation, with the bonus of being documented in a free ebook.
https://github.com/dotnet-architecture/eShopOnContainers/ - I can't stress enough how much this repository helped me understand microservices. There's also a free ebook from Microsoft docs about this repo: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/ . They tackle concepts such as eventual consistency and asynchronous communication. It's exactly what you're looking for.

Related

Micro-service architecture in .NET Core: pattern or library for services to call each other

I am implementing a micro-service architecture for the first time.
Some of my services (.NET Core Web APIs) need to communicate with each other through HTTP requests. For that purpose, I am injecting a wrapper around HttpClient.
But I suspect that I am reinventing the wheel. Among micro-service practitioners, is there a pattern or even a third-party library to solve this problem?
In a micro-service architecture, the most important thing is a clear separation of concerns and application boundaries. Imagine a simple setup with Product and Price microservices.
An important concept is that each service is the master of its own data and owns its own database. In this example:
- a client of the 'Product' service makes an HTTP call to the Product API
- the Product API makes a call to the Price API to get prices for the products
- the Product API therefore depends on the Price API to create a response
These are the synchronous parts of the process, generally achieved through HTTP calls across boundaries. You'll also have asynchronous parts of your solution; in this example:
- the Price API publishes an event to a bus whenever a price is changed
- the Product API publishes an event whenever a product is created
There may be one or more subscribers to these events, that will respond and probably call an API to retrieve the changed data.
The critical parts of this are clearly defining your API and message contracts, understanding if things will be async or sync, having the right level of telemetry across the entire architecture to track and understand distributed system behaviour, and keeping everything as independently buildable/testable/deployable components.
First and foremost, if you're not using containers and orchestration, start (both are natively supported in Visual Studio, assuming you actually have Docker, etc. installed). Among the many benefits, you can reference your services by hostname without having to worry about ports or different locations for different environments.
As far as actual communication goes, there's not really a magic solution here. HttpClient is what you use, of course, and generally, yes, you want a wrapper around it to abstract away the low-level HTTP details so the rest of your code can simply call methods on that wrapper.
If you aren't using IHttpClientFactory, start. If you already have a wrapper class, you're halfway there, and with that, not only do you get efficient management of HttpMessageHandlers so you don't exhaust your server's connection pool, but you can also use the Polly integration to handle transient HTTP errors and even do retry policies, circuit breakers, etc. for your microservice connections.
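As an illustration only (not from the original answer), here is roughly what that registration might look like; the PriceApiClient class, the http://price-api address, and the specific retry/circuit-breaker values are assumptions, and the Polly extensions come from the Microsoft.Extensions.Http.Polly package:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHttpClient<PriceApiClient>(client =>
            {
                client.BaseAddress = new Uri("http://price-api");   // hostname resolved by your orchestrator
            })
            // retry three times with a small back-off on 5xx, 408 and network failures
            .AddTransientHttpErrorPolicy(p =>
                p.WaitAndRetryAsync(3, attempt => TimeSpan.FromMilliseconds(200 * attempt)))
            // stop hammering the service after five consecutive failures
            .AddTransientHttpErrorPolicy(p =>
                p.CircuitBreakerAsync(5, TimeSpan.FromSeconds(30)));
    }
}

// The wrapper class simply receives the pre-configured HttpClient:
public class PriceApiClient
{
    private readonly HttpClient _http;
    public PriceApiClient(HttpClient http) => _http = http;

    public Task<string> GetPriceJsonAsync(int productId) =>
        _http.GetStringAsync($"/api/prices/{productId}");
}
```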
Finally, there is the Refit library, which can make things a tad more straightforward. I find it more useful for huge third-party APIs like Facebook, Google, etc., though. Since microservices should by design be simple, you're probably not saving much code over just having your own wrapper class. Regardless, the way it works is that you define an interface that represents the API, and then Refit uses that to actually make the appropriate requests. It's kind of like getting a wrapper class for free, but you still need to create the interface.
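For the sake of illustration, a Refit interface might look like the following sketch; the route and the PriceDto type are hypothetical:

```csharp
using System.Threading.Tasks;
using Refit;

// The interface describes the remote API; Refit provides the implementation.
public interface IPriceApi
{
    [Get("/api/prices/{productId}")]
    Task<PriceDto> GetPrice(int productId);
}

public record PriceDto(int ProductId, decimal Amount);

// Usage:
//   var api = RestService.For<IPriceApi>("http://price-api");
//   var price = await api.GetPrice(42);
```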

Handling user request in Microservice Architecture powered by Messaging inter-communication (f.e. RabbitMQ)

I'm just starting with microservice architecture and investigating how to build it on top of a messaging bus.
There is one concern which bothers me right now - how do I handle a simple query-like request from a user, or the case where one microservice needs some data from another microservice to serve a response? (e.g. getOrderList or getUserNameById)
I know there is an RPC pattern in RabbitMQ, but everybody strongly recommends avoiding it (as it introduces temporal coupling) and using async communication instead.
Yes, you have to use async communication in order to make sure that services are temporally decoupled. Here is a good series of articles that explains the reasoning behind that design decision in depth.
Also, consider reading about the CQRS/ES approach to designing microservices; it was an eye-opener for me when I first discovered it.
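Not from the original answer, but one common way to answer queries like getUserNameById without RPC is to keep a local, eventually consistent read model that is updated from the other service's events. A rough sketch with the RabbitMQ.Client 6.x API; the exchange name, event type, and in-memory dictionary store are assumptions:

```csharp
using System.Collections.Concurrent;
using System.Text;
using System.Text.Json;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// The Order service keeps its own copy of user names, updated from "user-events",
// so getUserNameById is answered locally instead of calling the User service.
public class UserNameReadModel
{
    private readonly ConcurrentDictionary<int, string> _names = new();

    public string GetUserNameById(int userId) =>
        _names.TryGetValue(userId, out var name) ? name : null;

    public void StartConsuming(IModel channel)
    {
        channel.ExchangeDeclare("user-events", ExchangeType.Fanout);
        var queue = channel.QueueDeclare().QueueName;        // private, server-named queue
        channel.QueueBind(queue, "user-events", routingKey: "");

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            var evt = JsonSerializer.Deserialize<UserChangedEvent>(Encoding.UTF8.GetString(ea.Body.ToArray()));
            _names[evt.UserId] = evt.UserName;               // eventually consistent local copy
        };
        channel.BasicConsume(queue, autoAck: true, consumer: consumer);
    }
}

public record UserChangedEvent(int UserId, string UserName);
```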

Real-time application newbie - Node.JS + Redis or RabbitMQ -> client/server how?

I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many blog posts, notes and essays out there that people have been kind enough to share. Yet, a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end-user -> Client to/from Server in both of those? Would that be simple Javascript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and the Redis solution require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. But RabbitMQ is a little different because the MQ Broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, then RabbitMQ will actually persist the data for you but you can't get the data out except as part of the normal message queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
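This thread is about Node.js/LAMP, but to keep the code on this page in one language, here is a small C# sketch of the exchange/queue/routing_key/binding_key model described above, using the RabbitMQ.Client 6.x package; the "chat" exchange and "room.42.*" binding key are made-up examples:

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };   // assumed broker
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declare a topic exchange; messages are only ever published to exchanges.
channel.ExchangeDeclare(exchange: "chat", type: ExchangeType.Topic);

// Consumers only ever receive from queues. Bind a private queue to the exchange with a
// binding_key pattern; the exchange matches each message's routing_key against it.
var queueName = channel.QueueDeclare().QueueName;                  // server-named queue
channel.QueueBind(queue: queueName, exchange: "chat", routingKey: "room.42.*");

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
    Console.WriteLine($"{ea.RoutingKey}: {Encoding.UTF8.GetString(ea.Body.ToArray())}");
channel.BasicConsume(queue: queueName, autoAck: true, consumer: consumer);

// Publish to the exchange; this routing_key matches the "room.42.*" binding above.
channel.BasicPublish(exchange: "chat", routingKey: "room.42.message",
                     basicProperties: null, body: Encoding.UTF8.GetBytes("hello"));

Console.ReadLine();   // keep the process alive so the consumer can receive the message
```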
If you are building your client in the browser and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js, which is a pure JavaScript implementation of AMQP (the AMQ Protocol), the standard protocol used to communicate with a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later with something (AMQP) that has a long-term future in your toolbox.
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
This is usually called RAD (rapid application design/development), and it is what I would recommend right now. It lets you build a proof of concept that you can build on later to get what you want.
As for how to talk to the clients from the server, and vice versa, have you read up on WebSockets at all?
Given the choice between LAMP and event-based programming, for what you're suggesting I would tell you to go with the event-based approach, so Node.js. But that's just one man's opinion.
Well,
LAMP - Apache creates a new process for every request. RabbitMQ can be useful, with many features.
Node.js - uses a single process to handle all requests asynchronously with the help of an event loop, so there is no extra process-creation overhead as with Apache.
For an asynchronous chat application,
socket.io + Node.js + Redis pub-sub is the best stack.
I have already implemented real-time notifications using the above stack.

a completely decoupled OO system?

To make an OO system as decoupled as possible, I'm thinking of the following approach:
1) we run an RMI/directory-like service where objects can register and discover each other. They talk to this service through an interface
2) we run a messaging service to which objects can publish messages and register subscription callbacks. Again, this happens through interfaces
3) when object A wants to invoke a method on object B, it discovers the target object's unique identity through #1 above, and publishes a message on the messaging service for object B
4) the messaging service invokes B's callback to give it the message
5) B processes the request and sends the response for A via the messaging service
6) A's callback is called and it gets the response.
I feel this system is as decoupled as practically possible, but it has the following problems:
1) communication is typically asynchronous
2) hence it's not real-time
3) the system as a whole is less efficient.
Are there any other practical problems where this design obviously won't be applicable? What are your thoughts on this design in general?
Books
Enterprise Integration Patterns
It appears he's talking about using Message-Oriented Middleware.
Here are some things to consider
Security
What will prevent a rogue service from registering as a key component in your system? You will need a way to validate and verify that services are who they say they are. This can be done through a PKI system. There are scenarios where you might not need to do this, for example if your system is hosted entirely on your intranet. If that is the case, social engineering and rogue employees will be your biggest threats.
Contract
What kind of contract will your clients have with the services? Will messages all be serialized as XML and sent as a TextMessage? If you use a pure byte message you'll have to be careful about byte order if your services are to run on multiple platforms.
Synchronization
Most developers are not able to comprehend and utilize asynchronous messages correctly. Where possible it might be in the best interest of your design to create a way to invoke "synchronous" messages. I've done this in the past by creating a sendMessageAndWait() method with a timeout and a return object. Within the method you can create a temporary topic id to receive the response, register a listener for it, then use locks to wait for a message to be returned on your temporary topic.
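The answer above describes a JMS-style temporary topic with locks; purely as an illustration of the same idea in C#, here is a hedged sketch using RabbitMQ.Client (6.x), a temporary reply queue, a correlation id, and a TaskCompletionSource with a timeout. The queue names are assumptions:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class SyncOverAsync
{
    // Sends a request and waits (with a timeout) until the correlated reply arrives.
    public static async Task<string> SendMessageAndWaitAsync(IModel channel, string requestQueue,
                                                             string request, TimeSpan timeout)
    {
        // temporary, server-named reply queue -- the analogue of the "temporary topic" above
        var replyQueue = channel.QueueDeclare().QueueName;
        var correlationId = Guid.NewGuid().ToString();
        var tcs = new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            if (ea.BasicProperties.CorrelationId == correlationId)
                tcs.TrySetResult(Encoding.UTF8.GetString(ea.Body.ToArray()));
        };
        channel.BasicConsume(queue: replyQueue, autoAck: true, consumer: consumer);

        var props = channel.CreateBasicProperties();
        props.CorrelationId = correlationId;
        props.ReplyTo = replyQueue;
        channel.BasicPublish(exchange: "", routingKey: requestQueue,
                             basicProperties: props, body: Encoding.UTF8.GetBytes(request));

        // the timeout mentioned in the answer above
        var completed = await Task.WhenAny(tcs.Task, Task.Delay(timeout));
        if (completed != tcs.Task)
            throw new TimeoutException($"No reply within {timeout}.");
        return await tcs.Task;
    }
}
```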
Unsolicited Messages
What happens if you want to allow your service(s) to send unsolicited messages to your clients? Say a critical event happens in Service A and it must notify your clients, or possibly a watchdog service. Allow your design to register a common communication channel that services can use to communicate with clients without the clients initiating the conversation.
Failover
What happens if a critical service processing your credit cards goes down? You'll need to implement failover and a watchdog service to ensure that your key infrastructure is always up and running. You could register a list of services within your registry; the registry could then give out the primary service, falling back to a secondary service if the primary stops communicating. Or, if your Message-Oriented Middleware can handle round-robin messaging, you might be able to register all the services on the same topic. Think about creating a way to know when a service has died. Since most messages are asynchronous, it will be difficult to determine when a service has gone offline. This can be done with a heartbeat and a watchdog.
I've created this type of system a few times in my past for large systems that needed to communicate. If you and other developers understand the pros and cons of such a system it can be very powerful and flexible.
The biggest piece of advice I can give is to build a toolkit for your other developers so they don't have to think about how to register a service, or implement failover, or respond to messages from a client. These are the sorts of things that will kill your system and have others say it is too complicated. Making it painless for them will allow your system to work the way you need it with flexibility and decoupling while not burdening your developers with understanding enterprise design patterns.
Anti-Patterns
This is not an Ivory Tower Architect/Architecture. It would be if he said, "This is how it will be done; now go do it and don't bother me about it, because I know I'm right." If you really wanted to reference an anti-pattern, it could be Kitchen Sink, maybe. Nah, now that I think about it, it isn't Kitchen Sink either.
If you can find one, please post it as a comment.
Coupling is simply a balance between efficiency and re-usability. If you wish the modules of your system to be as reusable as possible then that will undoubtedly come at a cost.
Personally I think it best to define some key assumptions which may tighten coupling, but bring increased efficiency.
There are design patterns which never see the light of day just because the benefit they provide is not worth the cost in complexity.
What's the simplest thing that could possibly work? Do modularize into reasonably sized routines, but avoid interfaces, services, messages, and all of this unless you are going to have multiple implementations or multiple hardware resources across which to divide a job.
Make it simple, then refactor those parts that turned out to matter.

Concurrent WCF calls via shared channel

I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost, since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
"While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete."
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF channel factory, but still create a channel for each service call - this will ensure concurrency, is not very expensive in comparison to creating the proxy from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
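To make that concrete, here is a hedged sketch (not from the answer itself) of a cached ChannelFactory with a per-call channel that is opened explicitly; the IAppService contract and the "AppServiceEndpoint" configuration name are assumptions:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IAppService
{
    [OperationContract]
    string DoWork(string input);
}

public class AppTierClient
{
    // Cache the factory (the expensive part); "AppServiceEndpoint" is a client
    // endpoint name assumed to exist in your configuration.
    private static readonly ChannelFactory<IAppService> Factory =
        new ChannelFactory<IAppService>("AppServiceEndpoint");

    public TResult Call<TResult>(Func<IAppService, TResult> operation)
    {
        // Create a cheap channel per call and open it explicitly; relying on the
        // implicit Open is what serializes concurrent requests on a shared channel.
        var channel = Factory.CreateChannel();
        var clientChannel = (IClientChannel)channel;
        clientChannel.Open();
        try
        {
            var result = operation(channel);
            clientChannel.Close();
            return result;
        }
        catch
        {
            clientChannel.Abort();
            throw;
        }
    }
}

// Usage:
//   var client = new AppTierClient();
//   var reply = client.Call(svc => svc.DoWork("ping"));
```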
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx