How to handle a secured API in service-to-service communication

I have a working monolith application (deployed in a container), to which I want to add a notifications feature as a separate microservice.
I'm planning for the monolith to emit events to a message bus (RabbitMQ), where they will be received by the new service, which will send the notification to the user. In order to compose a notification, it will need other information about the user from the monolith, so it will call the monolith's REST API to obtain it.
The problem is that access to the monolith's API requires authentication in the form of a token. I was thinking of:
using the secret from the monolith to issue a never-expiring token - I don't think this is a great idea from a security perspective, and I also know that the keys sometimes rotate, in which case the token would become invalid eventually anyway
using the message bus to retrieve the information - this does not seem like a good idea either, as the asynchrony would make it very complicated
providing all the info the notification service needs in the event - this would couple them more tightly, and moreover I also plan to send notifications based on the state of the monolith, not triggered by an event
removing the authentication from the monolith and implementing it differently (not sure how yet)
My question is, what are some of the good ways this kind of problem can be solved, and also, having just started learning about microservices, is what I am trying to do right in the first place?

When dealing with internal security you should always consider the deployment and how the APIs are exposed to the outside world; an API gateway can be used to make internal APIs simply unreachable from outside. In that case, a fixed token might be good enough to ensure that the client is authorized.
In general, though, I would suggest looking into OAuth2 or a JWT-based solution, as it helps to validate the identity of the calling system as well as its access grants.
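As a rough illustration of the token idea (a minimal sketch, not necessarily the exact design intended here), short-lived service tokens can be issued and validated with the PyJWT library; the claim names, scope and shared secret below are illustrative assumptions:

# A minimal sketch of short-lived service-to-service tokens using PyJWT.
# The claim names ("sub", "scope") and the shared secret are illustrative.
import time
import jwt  # pip install PyJWT

SHARED_SECRET = "load-this-from-your-vault-or-config"

def issue_service_token(service_name: str) -> str:
    # A short expiry means a leaked token is only useful for a few minutes.
    now = int(time.time())
    payload = {
        "sub": service_name,
        "scope": "users:read",
        "iat": now,
        "exp": now + 300,  # 5 minutes
    }
    return jwt.encode(payload, SHARED_SECRET, algorithm="HS256")

def validate_service_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure.
    return jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])

Because the tokens expire quickly, key rotation only requires that both sides agree on the current secret, which avoids the never-expiring-token problem described in the question.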
As for your architecture doubts, you need to consider the following scenarios when building out the solution:
The remote call can fail at any time, for unknown reasons, so you shouldn't acknowledge the notification event until you're certain that the notification has been processed successfully (see the consumer sketch after these points).
As you've mentioned RabbitMQ, you should aim to keep the notification queue as small as possible; to that effect, a cache that contains the user details might help speed things along (and reduce the chance of failure when the external system is unavailable).
If your application sends a lot of notifications to potentially millions of different users, you could consider a read-only database replica of the users that is accessible to the notification service, and read directly from the database cluster in batches. This reduces the load on the monolith and shifts it to the database layer.
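As a rough sketch of the first point, assuming the pika client and a queue named "notifications" (send_notification is a hypothetical function standing in for your processing, including the user lookup):

# Acknowledge an event only after the notification has been processed.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq-host"))
channel = connection.channel()
channel.queue_declare(queue="notifications", durable=True)

def on_event(ch, method, properties, body):
    try:
        send_notification(body)  # hypothetical: your processing, incl. the user lookup
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # Requeue so the event is retried instead of being lost.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.basic_consume(queue="notifications", on_message_callback=on_event, auto_ack=False)
channel.start_consuming()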

Related

Angular 5 and Message Bus

I have a set of RESTful services that my Angular 5 client uses to perform CRUD and business operations for the application. These are a set of microservices, and they use pub/sub message queues to communicate with each other; e.g. when a user is created, the user service publishes a UserCreated event to the message queue and subscribers can listen for this event and act upon it as required.
Now, this is all good, but I was thinking: wouldn't it be better if the Angular 5 application itself published the event to the message queue rather than making HTTP POST/PUT/DELETE calls, and only made GET requests against the API?
So, repeating the example above, the Angular 5 client would publish a CreateUserEvent to the message bus (in my case Cloud Pub/Sub); I could then have services subscribe to these events and act upon them. My RESTful services would then only expose GET /users and GET /user/:id, for example.
I know that this is doable, and I guess what I am describing is CQRS, but I am keen to understand whether publishing events to a message bus from the UI is good practice.
The concept of a message bus is very different from that of microservices. The answer to your question probably lies in how you look at these two from an architectural perspective.
A message bus (whether backend-specific or frontend-specific) is designed to serve communication between entities within the confined boundary of one environment, i.e. backend or frontend.
A microservices architecture, on the other hand, is designed so that two different environments, whether backend-frontend or backend-backend, can communicate effectively.
So there is a clear separation of motivation behind the two concepts. From your viewpoint, a hybrid approach might work, and it may also yield interesting findings about performance, architectural design or overhead.
Publishing directly from the client is possible, but the caveat is that it means that the client needs to have the proper credentials to publish. For this reason, it may be preferable to have the service do the publishing in response to requests sent from the clients.
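For illustration, a minimal sketch of that recommendation, assuming a Python backend with Flask and the google-cloud-pubsub client; the project, topic and route names are made up:

import json
from flask import Flask, request, jsonify
from google.cloud import pubsub_v1

app = Flask(__name__)
publisher = pubsub_v1.PublisherClient()
# Hypothetical project and topic names.
topic_path = publisher.topic_path("my-project", "user-events")

@app.route("/users", methods=["POST"])
def create_user():
    # The backend holds the Pub/Sub credentials; the browser never sees them.
    event = {"type": "CreateUserEvent", "payload": request.get_json()}
    publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
    return jsonify({"status": "accepted"}), 202

The client still makes a plain HTTP POST; only the service publishes to the bus, so credentials and validation stay server-side.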

How to connect separate microservice applications?

I am building a huge application using a microservices architecture. The application will consist of multiple backend microservices (deployed on multiple cloud instances), some of which I would like to connect using REST APIs in order to pass data between them.
The application will also expose a public API for third parties, but the above-mentioned endpoints should be restricted ONLY to other microservices within the same application, creating some kind of private network.
So, my question is:
How to achieve that restricted api access to other microservices within the same application?
If there are better ways to connect microservices than using http transport layer, please mention them.
Please keep the answers server/language agnostic if possible.
Thanks.
Yeah, easy. Each client of a microservice has an API key. Microservices only accept requests from clients with a valid API key.
Also, it's good to know that REST is simply an architectural style that allows communication between bounded contexts.
It doesn't have to be over HTTP. The requirement is that it has a uniform interface (this is why HTTP is used with its PUT, POST, GET, DELETE... methods) and that it is stateless (each request carries all the state needed to process it).
So if all your microservices run on the same box, all you need to do is something like this:
class SomeClass implements RestfulMethods {
    public function get($params)    { /* return a resource */ }
    public function post($params)   { /* add a resource */ }
    public function put($params)    { /* update a resource */ }
    public function delete($params) { /* delete a resource */ }
}
Microservices then communicate by interacting with the RestfulMethods implementations of other services.
But if your microservices are on different machines, it's probably best to use HTTP as the transport mechanism.
One way is to use HTTPS for internal MS communication. Lock down access (using a trust store) to only your services. You can share a certificate among the services for backend communication, preferably a wildcard certificate. Then it should work as long as your services can be addressed under the same domain, like *.yourcompany.com.
Once you have it all in place, it should work fine. HTTPS sessions do imply some overhead, but that's primarily in the handshake. With keep-alive on your sessions, there shouldn't be much overhead from the encrypted channels.
Of course, you can simply add some credentials to your HTTP headers as well; that would be less secure.
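As a rough sketch of the trust-store idea above, assuming the Python requests library; the URL and certificate paths are illustrative:

import requests

response = requests.get(
    "https://orders.internal.yourcompany.com/api/orders/42",
    verify="/etc/ssl/internal-ca.pem",                     # trust only your internal CA
    cert=("/etc/ssl/client.crt", "/etc/ssl/client.key"),   # present a client cert (mutual TLS)
    timeout=5,
)
response.raise_for_status()
print(response.json())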
A REST API is not the only way to do it. Another idea I have seen is to use a service registry like Eureka (Netflix), Zookeeper (Apache) and others.
Here is an example:
https://github.com/tiarebalbi/qcon2015-sao-paolo-microservices-workshop
...the above mentioned endpoints should be restricted ONLY to other microservices within the same application...
What you are talking about in a broad sense is authorisation.
Authorisation is the granting or denying of "powers" or "abilities" within your application to authenticated users.
Therefore the job of any authorisation mechanism is to validate the "claim" implicit in any inbound API request - that the user is allowed to do the thing encoded in the request.
As an example, imagine I turned up at your API with a PUT request for Widget 1234:
PUT /widgetservice/widget/1234 HTTP/1.1
This could be interpreted as me (Bob Smith, a known user) making a claim that I am allowed to make changes to a widget in your system with id 1234.
Whatever you do to validate this claim, I hope you can see this needs to be done at the application level, rather than at the API level. In fact, authorisation is an application-level concern, rather than an API-level concern (unlike authentication, which is very much an API level concern).
To demonstrate, in our example above, it's theoretically possible I'm allowed to create a new widget, but not to update an existing widget:
POST /widgetservice/widget/1234 HTTP/1.1
Or even that I'm allowed to update only widget 1234, and requests to change other widgets should not be allowed:
PUT /widgetservice/widget/5678 HTTP/1.1
How to achieve that restricted api access to other microservices within the same application?
So this becomes a question about how can you build authorisation into your application so that you can validate individual requests coming from known users (in your case your other services in your ecosystem are just another kind of known user).
Well (apologies, but I'm going to be prescriptive here), you could use a claims-based authorisation service, which stores valid claims based on user identity or membership of roles.
It depends largely on how you are handling authentication, and whether or not you are supporting roles as part of that process. You could store claims against individual users but this becomes arduous as the number of users increases. OAuth, despite being pretty heavy to implement, is a leading platform for this.
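For illustration only, a minimal sketch of a claim check in front of a handler, assuming Flask and PyJWT; the header format, secret and claim names are assumptions, not a prescribed design:

import functools
import jwt  # PyJWT
from flask import request, abort

SECRET = "shared-secret"  # illustrative; load from configuration in practice

def require_scope(scope):
    """Only run the wrapped handler if the caller's token carries `scope`."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            auth = request.headers.get("Authorization", "")
            token = auth[len("Bearer "):] if auth.startswith("Bearer ") else ""
            try:
                claims = jwt.decode(token, SECRET, algorithms=["HS256"])
            except jwt.InvalidTokenError:
                abort(401)  # not authenticated
            if scope not in claims.get("scope", "").split():
                abort(403)  # authenticated, but not authorised for this action
            return handler(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical usage: @app.route("/widgetservice/widget/<wid>", methods=["PUT"])
#                     @require_scope("widgets:update")
#                     def update_widget(wid): ...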
I am building huge application using microservices architecture
The only thing I will say here is read this first.
The easiest way is to only enable access from the IP address that your microservices are running on.
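A minimal sketch of that idea, assuming a Flask service; the addresses are illustrative:

from flask import Flask, request, abort

app = Flask(__name__)
ALLOWED_IPS = {"10.0.1.5", "10.0.1.6"}  # addresses of your other microservices

@app.before_request
def restrict_to_internal_callers():
    # Note: behind a reverse proxy, remote_addr is the proxy's address,
    # so you would need to handle X-Forwarded-For carefully there.
    if request.remote_addr not in ALLOWED_IPS:
        abort(403)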
I know I'm super late to this question :)) but for anyone who comes across this thread, Kafka is a great option for operations like this.
Based on Kafka's own introduction:
Kafka is generally used for two broad classes of applications:
Building real-time streaming data pipelines that reliably get data between systems or applications
Building real-time streaming applications that transform or react to the streams of data
Side note: Kafka was created by LinkedIn and is used by many huge companies, so it's kind of battle-tested.
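For a feel of the produce/consume pattern, a minimal sketch assuming the kafka-python client and a local broker; the topic and group names are made up:

from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("user-events", b'{"type": "UserCreated", "id": 42}')
producer.flush()

consumer = KafkaConsumer("user-events",
                         bootstrap_servers="localhost:9092",
                         group_id="notification-service")
for message in consumer:
    print(message.value)  # react to the stream, e.g. send a notification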
You can use RabbitMQ: publish your requests to a queue and then consume the tasks.
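A minimal sketch of that, assuming the pika client; the queue name and payload are illustrative:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq-host"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)
channel.basic_publish(exchange="", routing_key="tasks",
                      body=b'{"task": "send-welcome-email", "user_id": 42}')
connection.close()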

Event notification on new e-mail in IBM Domino

Is it possible to subscribe to mail events on an IBM Domino server?
I need a service similar to the one provided by Microsoft Exchange Event Notification, where you can subscribe to events and get notified when there are changes - e.g. the arrival of a new e-mail. I need the solution to be server side, since I can't rely on users having their client running.
Unfortunately, as per my comment above, there is no pre-packaged equivalent to the push, pull and streaming subscription services that EWS supports. A Notes client can get notifications via the Notes RPC protocol, and there's also obviously some technology in IBM's Notes Traveler mobile product, but nothing that I'm aware of as a pre-packaged web service or even as a notifications API. You would have to build it. There are a variety of ways you could go about it.
For push or streaming subscriptions, one way would be a Notes C API plugin using the Extension Manager, running on the server and monitoring the mailboxes. You might be able to use a DSAPI plugin into Domino's HTTP stack to manage the incoming connections and feed the data out to subscribers, but honestly I have no idea whether Domino's HTTP stack can handle the persistent connections that are implied in the subscription model. Alternatively, the Extension Manager plugin could quickly send the data over to code written in any other language you want, running on any web stack. Of course, you'll have to deal with security through all the linked-together parts.
For pull subscriptions, I guess it's really more of a polling architecture, with state saved somewhere so that only changes since the last call will be delivered. You have any number of options for that. You could use Domino's built-in HTTP server, obviously, so you could write your own Domino-hosted web service for this. You could also use the Domino Data Service, which is a REST API, to do this -- with all the necessary state information being stored on the client side. (On a quick look, I don't see a good option for getting all new docs since a specified date-time via the Domino Data Service, but it might be possible.)
I do worry a bit about scalability of any custom solution for this. My understanding is that Microsoft has quite a bit of caching and optimization in their services in order to address scale. Obviously, you can build whatever you need for that into your own web service, but it will likely add a lot of effort.

NServiceBus Sagas and REST API Integration best-practices

What is the most sensible approach to integrate/interact NServiceBus Sagas with REST APIs?
The scenario is as follows,
We have a load balanced REST API. Depending on the load we can add more nodes.
REST API is a wrapper around a DomainServices API. This means the API can be consumed directly.
We would like to use Sagas for workflow and implement NServiceBus Distributor to scale-out.
The question is: if we use the REST API from Sagas, the actual processing happens in the API farm, which in a way defeats the purpose of implementing the Distributor pattern.
On the other hand, using the DomainServices API directly from Sagas allows processing locally within the worker nodes. With this approach we will have to maintain API assemblies in multiple locations, but the throughput could be higher.
I am trying to understand the best approach. Personally, I'd prefer to consume the API (if readily available), but this could introduce chattiness to the system and could take longer to complete as compared to in-process calls.
A typical sequence could be similar to publishing an online advertisement:
Advertiser submits a new advertisement request via a web application.
Web application invokes the relevant API endpoint and sends a command message.
Command message initiates a new publish-advertisement Saga instance.
Saga sends a command to validate caller permissions (in-process/out-of-process API call).
Saga sends a command to validate the advertisement data (in-process/out-of-process API call).
Saga sends a command to the fraud service (third-party service).
Once the content and fraud verifications are successful, Saga sends a command to the billing system.
Saga invokes an API call to save the ad details (in-process/out-of-process API call).
And this goes on until the advertisement expires; there are a number of retry and failure-condition paths.
After a number of design iterations we came up with the following guidelines:
Treat REST API layer as the integration platform.
Assume API endpoints are capable of abstracting fairly complex micro-workflows. Micro-workflows are operations that execute in a single burst (not interruptible) and complete within a short time span (<1 second).
Assume API farm is capable of serving many concurrent requests and can be easily scaled-out.
Favor synchronous invocations over asynchronous message based invocations when the target operation is fairly straightforward.
When asynchronous processing is required, use a single message handler and invoke the API from the handler. This delegates the work to the API farm and also eliminates the need for a Distributor and extra hardware resources (see the sketch after these guidelines).
Avoid Saga’s unless if the business work-flow contains multiple transactions, compensation logic and resumes. Tests reveals Sagas do not perform well under load.
Avoid consuming DomainServices directly from a message handler. This will do the work locally and also introduces a deployment hassle by distributing business logic.
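To illustrate the "single message handler that delegates to the API farm" guideline in a language-agnostic way, here is a Python sketch using pika and requests (not NServiceBus); the queue name, URL and payload shape are made up:

import json
import pika
import requests

connection = pika.BlockingConnection(pika.ConnectionParameters("broker-host"))
channel = connection.channel()
channel.queue_declare(queue="publish-advertisement", durable=True)

def handle(ch, method, properties, body):
    ad = json.loads(body)
    # Delegate the heavy lifting to the load-balanced REST API farm.
    response = requests.post("https://api.example.com/advertisements", json=ad, timeout=10)
    if response.ok:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)

channel.basic_consume(queue="publish-advertisement", on_message_callback=handle, auto_ack=False)
channel.start_consuming()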
Happy to hear your thoughts.
You are right on with identifying that you will need Sagas to manage workflow. I'm willing to bet that your Domain hooks up to a common database. If that is true then it will be faster to use your Domain directly and remove the serialization/network overhead. You will also lose the ability to easily manage the transactions at the database level.
Assuming you are directly calling your Domain, the performance becomes a question of how the Domain performs. You may take steps to optimize the database, drive down distributed transaction costs, shard the data, etc. You may end up using the Distributor to have multiple Saga processing nodes, but it sounds like you have some more testing to do once a design is chosen.
Generically speaking, we use REST APIs to model the commands as resources (via POST) to allow interaction with NSB from clients who don't have direct access to messaging. This is a potential solution to get things onto NSB from your web app.

How can a WCF request be correlated with multiple Workflow instances?

The scenario is as follows:
I have multiple clients that can register themselves on a workflow server, using WCF requests, to receive some kind of notifications. The notification information will be received from an external system using another Receive activity. The workflow should then take the notification information and call back all registered clients using a Send activity and callback correlation (the clients expose callback interfaces implemented on their side, and the endpoint addresses are passed initially with the registration requests). A "long-running workflow service" approach is used with persistent storage.
Now, I'm looking for a way to correlate the incoming notification information received from the external system with the workflow instances persisted previously when the registration requests arrived, so that all clients will be notified using the endpoints that were passed with the registration requests. Is WF 4.0 capable of resuming and executing multiple workflow instances when the notification information is received, without my having to store the endpoints manually and go through them? If yes, how can I do that?
Also, if my approach is not correct, then please advise me on the best practice for building such a system using WCF services.
Your help is highly appreciated.
When you use request correlation with workflow services, the correlation key must always match a single workflow instance; you can't have multiple workflow instances react to a single message. So you either need to multicast the message using all the different correlation keys, or resume your workflow instances in some other way. That other way could be to store the request somewhere, like a SQL table, and have the workflows periodically check that location to see if they need to notify the client.