How do you name API paths of the same microservice but with different consumers?

Context
Let's imagine a simple microservices architecture (e.g. 2-3 microservices). The microservices are domain-based, an API gateway is in place, and everything is as it should be. At the same time, the microservices' APIs are consumed by public mobile applications, an admin UI, and other services for S2S communication, so we have three possible API consumers. Depending on the consumer, the response DTOs are different even though the business process might be the same (e.g. the response for a GET /users endpoint has different DTOs for a consumer application and the admin UI, but technically the data is taken from the same DB).
Question
How do you segment APIs in that case? Do you use namespaces like external, internal, etc.?
Also, feel free to share your experience of how you segment APIs.
Thanks in advance!

From my point of view, the APIs should be different depending on the type of consumer that is going to use them.
For example, in your use case, the API intended to provide simple user information shouldn't be the same as the one used by an administrator. You should define two different APIs in this case, with different paths like internal/users and external/users as you said, and internally these two endpoints can share the same logic.
This separation is good not only for returning different DTOs from each endpoint, but also for defining different security (authentication/authorization) mechanisms for each API, because I suppose those requirements will differ between an admin API and a general user one.
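A minimal sketch of that separation, assuming an Express app; the route names, DTO shapes, and middleware are illustrative assumptions, not something prescribed by the question:

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Hypothetical shared lookup: both endpoints read from the same store.
async function findUsers(): Promise<Array<{ id: string; name: string; email: string }>> {
  return [{ id: "1", name: "Alice", email: "alice@example.com" }];
}

// Placeholder auth middleware; real versions would verify tokens and roles.
const requireUser = (_req: Request, _res: Response, next: NextFunction) => next();
const requireAdmin = (_req: Request, _res: Response, next: NextFunction) => next();

// Public-facing endpoint: trimmed-down DTO for the consumer application.
app.get("/external/users", requireUser, async (_req, res) => {
  const users = await findUsers();
  res.json(users.map(({ id, name }) => ({ id, name })));
});

// Internal endpoint: richer DTO and stricter auth for the admin UI.
app.get("/internal/users", requireAdmin, async (_req, res) => {
  res.json(await findUsers()); // full records, including email
});
```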

It depends a bit on the philosophy you want to adopt.
The one suggested by @JArgente is good, in that you'd get good separation, and the role of each is (or at least should be) very clear.
The other approach is layering, which (for the OO programmers out there) is a bit like developing overloads for a method. It assumes that the data required by the derived APIs is provided by the base API. So:
Develop a base API that provides all the data this API family needs to provide. This API might be the one that internal users use (e.g. Admin User), and it could require authentication.
Develop a public-facing API that consumes the base API; this is the one your external consumers would use.
Each API has a separate API Spec; depending on how you do this you can leverage inheritance at the Spec level.
Each API also has an actual endpoint which triggers some sort of processing - e.g. logic within the API Gateway itself, or logic handled within a downstream component like a microservice.
The public-facing one can be anonymous, as long as something (e.g. the API Gateway) can make an authenticated call against the base API, using some kind of 'service account'.
The advantage here is that you still get good separation between different APIs and their consumers, but you also get the advantages of inheritance, so that code duplication is reduced (testing effort isn't so diffuse, etc.).
This approach also allows you to run the endpoints on the same API Gateway, or deployed on separate ones (internal vs external).
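A rough sketch of that layering idea, assuming Node/Express with global fetch (Node 18+); the base-API URL, the service-account token mechanism, and the DTO shapes are illustrative assumptions:

```typescript
import express from "express";

const app = express();
const BASE_API_URL = process.env.BASE_API_URL ?? "http://base-api.internal";
const SERVICE_TOKEN = process.env.SERVICE_TOKEN ?? ""; // the 'service account' credential

// Anonymous public endpoint whose data is derived from the base API.
app.get("/public/users", async (_req, res) => {
  // The gateway/public API makes an authenticated call against the base API.
  const upstream = await fetch(`${BASE_API_URL}/users`, {
    headers: { Authorization: `Bearer ${SERVICE_TOKEN}` },
  });
  const users: Array<{ id: string; name: string; email: string }> = await upstream.json();
  // The derived API exposes only the subset its consumers need.
  res.json(users.map(({ id, name }) => ({ id, name })));
});
```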

Related

ABAC with Monorepo Microservices: What is the best approach?

At my work, I have a task to research and find solutions to implement ABAC authorization in our microservices, which are organized in a monorepo. We have some products, and we use the concept of realms to organize different clients' data in the same database. Our requirements are roughly:
A user who is a manager of a company can only see data from their own company and their own employees.
The same company can have N places, each of which can have a manager. The manager of each place can only see the data from that place.
First I thought of building some code to be used in every router of every API to verify authorization and allow or deny the request. Something like this:
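(The snippet referred to above isn't included in the question; below is a minimal sketch of what such per-route authorization code might look like, assuming Express and a hypothetical user object carrying company and place ids.)

```typescript
import { Request, Response, NextFunction } from "express";

// Hypothetical shape of the authenticated caller.
interface AuthUser {
  companyId: string;
  managedPlaceIds: string[]; // places this manager is responsible for
}

// Deny the request unless the resource belongs to one of the caller's places.
export function authorizePlace(req: Request, res: Response, next: NextFunction) {
  const user = (req as Request & { user?: AuthUser }).user;
  const placeId = req.params.placeId;
  if (!user || !user.managedPlaceIds.includes(placeId)) {
    return res.status(403).json({ error: "forbidden" });
  }
  next();
}
```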
The other thing I thought of was to create an API instead of a lib.
So, based on this question, I discovered that ABAC can be externalized from the apps (APIs), which makes a lot of sense to me; see the image below.
But then I have some questions.
Is it bad to do what I thought of in the first image or in the second?
How will the PDP know what the user wants to do? Based on the route being called? But with this approach, single responsibility is hurt, as the PDP needs to internalize (step 2) what the other apps do, right?
The PIP needs to call the database so the PDP can validate the authorization. This can be slow, since the same query will be done twice: once to check the policy and once inside the service with the business logic.
The modern way of doing this is by decoupling your policy and code - i.e. having a separate microservice for Authorization - here's a part of a talk I gave at OWASP DevSlop about it.
You'd want your code in the middleware to be as simple as possible - basically just querying the Authorization microservice. That service basically becomes your PDP (in XACML terms). This is true for both monoliths and microservices (the assumption is you'll end up having more microservices next to your monolith anyhow).
To implement the Authorization microservice / PDP you can use something like OPA (OpenPolicyAgent.org) and then use OPAL as a PAP and manager for PIPs. (Full disclosure: I'm a contributor to both OPA and OPAL.)
The query to the PDP should include what the user is doing (but not what the rules are). You can do this based on the route (common when doing service-mesh), but often it's better to define a resource/action layout which becomes part of the query and is independent of the application route. Take a look at the Permit.io Policy Editor, which does exactly that kind of mapping. (Permit.io also uses both OPA and OPAL internally; full disclosure: I'm one of the founders of Permit.io.)
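For illustration, a middleware-level query against a PDP might look something like this; OPA's Data API (`POST /v1/data/<path>` with an `input` document) is real, but the `authz/allow` policy path, the input's resource/action layout, and the header name are assumptions for the sketch:

```typescript
import { Request, Response, NextFunction } from "express";

const PDP_URL = process.env.PDP_URL ?? "http://localhost:8181";

// Delegate the decision to the external PDP instead of embedding rules here.
export async function authorize(req: Request, res: Response, next: NextFunction) {
  const decision = await fetch(`${PDP_URL}/v1/data/authz/allow`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      input: {
        user: req.headers["x-user-id"],                  // who is asking
        action: "read",                                   // what they want to do
        resource: { type: "order", id: req.params.id },   // on what
      },
    }),
  });
  const { result } = await decision.json();
  if (result !== true) return res.status(403).json({ error: "forbidden" });
  next();
}
```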
Very good point. You don't want your PDP constantly querying other sources for each incoming decision request (though it's okay to do so for a few edge cases). What you want is to load data gradually in the background in an asynchronous fashion, ideally event-driven (i.e. have events propagate in real time from the data sources into the PDP). This is exactly what you can do with OPAL.

SPA with Backend API and new B2B API - how to deploy

I have recently delivered a SPA (Vue.js) web application with a Java API backend. Everything currently sits in AWS, with the frontend in CloudFront and the backend in ECS connecting to an RDS instance.
As part of the next phase of delivery, we are creating a B2B API. My question is one of architectural design and deployment strategy: is it commonplace to just extend the existing API with B2B functionality, or should I keep the two separate with an API gateway in front? We envisage that B2B use will eventually outgrow the SPA use case, so the initial deployment configuration needs to offer the most flexibility to grow and expand.
Is there some sort of best practice here? I imagine that a lot of code would be similar between the two backends as well.
Thanks,
Terry
First off - deciding on service boundaries is one of the most difficult problems in service-oriented architecture design, and the answer strongly depends on your exact domain requirements.
Usually I would split service implementations by domain/function as well as by organizational concerns (e.g. separate teams developing them), and not by their target audience. This usually avoids awkward situations where team responsibility is unclear. If the project grows very large, there may also be a need for multiple layers of services and shared libraries - and at that point you would likely run into necessary refactorings/restructurings.
So if there is a very large overlap in function between your B2B and regular APIs, you may not want to split the implementation.
However, you also have to consider how service access is provided: an API Gateway could help with providing different endpoints for the different audiences, different charging models, different auth options, etc. Depending on your exact requirements, an API Gateway may not be enough, and you may also need another thin service layer implementation that uses common domain services.
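A minimal sketch of that thin service layer, assuming TypeScript/Express; the shared OrderService, both route shapes, and the API-key header are invented for illustration:

```typescript
import express from "express";

// Hypothetical domain service shared by both audiences.
class OrderService {
  async listOrders(customerId: string) {
    return [{ id: "o-1", customerId, total: 42 }];
  }
}

const orders = new OrderService();

// SPA-facing API: session-based auth, DTOs shaped for the web UI.
const spaApi = express();
spaApi.get("/api/orders", async (_req, res) => {
  res.json(await orders.listOrders("user-from-session")); // placeholder identity
});

// B2B-facing API: could sit behind a separate gateway route, use API keys,
// and be versioned/charged independently while reusing the same domain logic.
const b2bApi = express();
b2bApi.get("/b2b/v1/orders", async (req, res) => {
  const clientId = String(req.headers["x-api-client-id"] ?? "");
  res.json(await orders.listOrders(clientId));
});
```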

Mule API Led Connectivity Design Approaches for Experience API

As part of our journey towards API-led Connectivity, we have to group our resources (i.e. API endpoints) into multiple Mule applications for the experience APIs.
In order to have meaningful names for the Mule applications while maintaining maximum re-usability, rather than associating consumer names with the application names (which would tightly couple the experience APIs to the current application landscape), we propose that the Mule application names reflect the essence of the business.
The list of options is as follows. Which one do you think is more ideal? What approach have you used in your organization?
Based on Channel/Consumer
A dedicated experience API for a consumer such as Web, CRM, Mobile, etc.
URI examples:
www.example.com/example-web-application/v1/
www.example.com/example-crm-application/v1/
www.example.com/example-mobile-application/v1/
Pros: applying channel-specific policies is easier, management becomes easier, smaller outage window
Cons: reusability is reduced, and the chance of duplicating objects across APIs increases
Based on Business Domain
The company data model is used, e.g. Customer, Product, Payment, etc.
URI examples:
www.example.com/example-customers-application/v1/
www.example.com/example-products-application/v1/
www.example.com/example-payments-application/v1/
Pros: promotes reusability, channel agnostic, the same API can be used across different consumers
Cons: management might get complex, larger outage window, multiple consumers might be impacted
Based on Customer Journey
This approach is tied to the customer's lifecycle with the organization, e.g. Prospective Customer --> Lead --> Engage --> Payments --> Customer Retention.
URI examples:
www.example.com/example-prospect-application/v1/
www.example.com/example-lead-application/v1/
www.example.com/example-engage-application/v1/
Pros: channel agnostic, the same API can be used across different consumers
Cons: can get increasingly big, and further breakdown might still be required
Thanks.
As far as I understand your question, you would like to know what URIs to use for the endpoints of the experience APIs, right?
Based on a recent blog entry from MuleSoft (July 12, 2017):
Experience APIs are:
"Experience APIs are the means by which data can be reconfigured so that it is most easily consumed by its intended audience, all from a common data source, rather than setting up separate point-to-point integrations for each channel. An Experience API is usually created with API-first design principles where the API is designed for the specific user experience in mind."
Based on the examples from MuleSoft and my understanding, experience APIs are created for one given "experience": web, virtual reality, mobile, etc.
You are trying to create an API for one particular experience, to make consuming the API easy for that specific client.
According to my understanding, the main goal at this level is not re-usability. You focus on re-usability at the System API and Process API levels, but the Experience APIs are supposed to make the lives of the developers of the different clients easier by providing exactly the interface and data they need, so they don't have to communicate directly with the system and process APIs; instead they get a tailor-made API suiting exactly their special needs.
Since the experience API is tailor-made for the specific experience / channel / client application, I think representing this in the URI is a good idea.
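(As a loose sketch of that idea: two experience endpoints over the same process API, each reshaping the response for its channel. The process-API URL and field names are assumptions.)

```typescript
import express from "express";

const app = express();
const PROCESS_API = process.env.PROCESS_API ?? "http://process-api.internal";

// Mobile experience: compact payload, tailored to the mobile client.
app.get("/example-mobile-application/v1/customers/:id", async (req, res) => {
  const customer = await (await fetch(`${PROCESS_API}/customers/${req.params.id}`)).json();
  res.json({ id: customer.id, displayName: customer.fullName });
});

// Web experience: same data source, richer payload for the web UI.
app.get("/example-web-application/v1/customers/:id", async (req, res) => {
  const customer = await (await fetch(`${PROCESS_API}/customers/${req.params.id}`)).json();
  res.json({ id: customer.id, fullName: customer.fullName, segments: customer.segments });
});
```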

What is the difference between a System API and a Process API

Can anyone kindly differentiate between a System API and a Process API?
Please answer in generic terms, as I am unable to find an explanation on the internet.
A system API abstracts an existing system. It talks to the system in the language of that system (e.g. SOAP, direct Java calls, SAP calls, etc.). To the outside world it offers a clean API (usually REST with HTTP and JSON). When you do a good job implementing your system API, you can exchange your existing system with a different/new one without changing the API of your system API to the outside world: just implement a new system API with different adapter logic.
A process api should talk REST on "both ends". It calls one or several system apis to do its job. The process api orchestrates different jobs.
If you need more information, do a search for "API-led connectivity".
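(A toy sketch of that adapter idea: the outside world sees clean REST while the system API speaks the legacy system's protocol behind it. The legacy client interface and its field names are stand-in assumptions.)

```typescript
import express from "express";

// Stand-in for a SOAP/SAP/legacy client; swapping systems means swapping
// this adapter, not the REST contract exposed to the outside world.
interface LegacyCustomerAdapter {
  fetchCustomer(id: string): Promise<{ KUNNR: string; NAME1: string }>;
}

function makeSystemApi(adapter: LegacyCustomerAdapter) {
  const app = express();
  app.get("/customers/:id", async (req, res) => {
    const raw = await adapter.fetchCustomer(req.params.id);
    // Translate legacy field names into a clean, stable JSON contract.
    res.json({ id: raw.KUNNR, name: raw.NAME1 });
  });
  return app;
}
```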
A System API is a layer you build on top of a system, which handles all system-specific connection quirks and settings. It then exposes these resources and their logic in a standard format (usually REST, but you're free to choose something else like SOAP) to the rest of your APIs. As Roger Butenuth states:
"When you do a good job implementing your system api, you can exchange
your existing system with a different/new one without changing the api
of your system api to the outside world: Just implement a new system
api with different adapter logic."
A process API is where you keep your logic and orchestration; it does not 'talk' to end systems directly but instead connects to system APIs to get its data.
A process API should ideally talk REST on both sides and can aggregate data from multiple systems.
An example of a complex process API would be an "items you've ordered" API which takes a user id as its input, then talks to the system API of a CRM system to get the id used by the "order history" system API.
However, that API might only return a list of orders without any article information besides an article id. So our process API then enriches this list with article information fetched from the "article information" system API using the ids from the list.
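(Sketched below with assumed endpoints; the three system-API URLs and response shapes are made up to mirror the example.)

```typescript
// Hypothetical process API orchestrating three system APIs.
async function getItemsOrdered(userId: string) {
  // 1. CRM system API: map our user id to the order-history system's id.
  const crm = await (await fetch(`http://crm-system-api/customers/${userId}`)).json();

  // 2. Order-history system API: list orders (article ids only).
  const orders: Array<{ orderId: string; articleId: string }> =
    await (await fetch(`http://order-history-system-api/orders?customer=${crm.historyId}`)).json();

  // 3. Article-information system API: enrich each order with article details.
  return Promise.all(
    orders.map(async (o) => {
      const article = await (await fetch(`http://article-system-api/articles/${o.articleId}`)).json();
      return { ...o, articleName: article.name, price: article.price };
    })
  );
}
```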
I know it's out of the scope of the question, but for the sake of completeness I'll briefly explain the third variant as well:
An Experience API can be seen as a doorway into your API network; every (type of) client has different information needs and can communicate over different protocols.
It is the Experience API's responsibility to provide ALL the information required by a client in a format they support.
This takes the responsibility away from a client to know where the information needs to be fetched from.
(Customer info from CRM, Order info from proprietary sys one, Article info from article DB)
This design concept has the added bonus that if, for example, the mobile app your company is making gets some new functionality that requires extra data, you can update the "mobile app experience API" while leaving your "super-expensive IBM experience API" unchanged. This cuts down on development costs, as you don't need to implement changes in your other API consumers, which would be the case if you had only one API.
I think the main difference is where you implement business processes and rules/logic.
System APIs, within the scope of your design, are atomic APIs which are used to construct higher-level APIs (experience APIs). The process API is the orchestration layer, where you can use MuleSoft flows to implement business processes or logic.
System APIs do the heavy lifting of CRUD operations.
Process APIs focus on business logic.
System APIs sit underneath all IT designs: the core systems of record that are often not readily accessible because of their complexity and connectivity concerns. System APIs provide a means of hiding that complexity from the client while exposing data, and they insulate downstream consumers from interface changes or rationalization of those underlying systems.
Process APIs encapsulate the underlying business processes that interact with source and target systems or channels through a set of system APIs. For example, in a purchase order process, there is some logic that is common across products, geographies, and retail channels which can and should be distilled into a single service that can then be called.
You will get some more clarity from this article: https://dzone.com/articles/api-the-backbone-of-the-software-industry-know-how
System APIs and Process APIs are both part of API-led connectivity.
A System API is like a wrapper service around either a main database or a SaaS platform.
A Process API involves application logic, e.g. to validate search or query parameters.

Cumulocity extend API

We're working with Cumulocity and we'd like to offer services to our customers that are not currently possible to implement with Cumulocity. As an example, we'd like to be able to retrieve a list of devices located within x kilometers of a given point.
Currently there are two limitations that prevent us from doing so:
the impossibility of extending the Cumulocity API with custom routes/parameters
the impossibility of implementing custom functions for specific API GET calls
I can think of a workaround to achieve this, like a POST request of an event that would be processed by an Esper rule, generating another event/measurement that could then be accessed by a GET. But I think we can agree this is not a suitable mechanism.
Please note that the use case I described above is just an example. Our needs are not limited to this, and we need a standardized way to expand our services without requiring updates on the Cumulocity side.
There are two topics here, I believe:
Geo-querying: Some geographical querying and aggregation use cases can be handled through CEL. A general geo-querying API is on the Cumulocity roadmap. Note: This use case is not only related to extending the API, as such queries go right down into the database.
Extending the API: that is actually possible. Cumulocity has a microservices API through which you can expose other APIs under the URL /services/.... This is, for example, how connectivity platforms are interfaced. The API is not on the website because it's not GA yet, but you can certainly discuss it with your Cumulocity contact or open a ticket. By the way, this also includes adding permissions for the new microservices, so that you can do proper A&A.
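(Not Cumulocity-specific, but as a sketch of the geo-query example above: a custom microservice endpoint could filter devices by great-circle distance from a point after fetching the inventory from whatever device API the platform exposes. The Device shape and function names are assumptions; the haversine formula itself is standard.)

```typescript
// Great-circle distance (haversine), used to filter devices within x km.
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); // Earth radius ~6371 km
}

interface Device { id: string; lat: number; lon: number }

// Hypothetical filter a custom GET endpoint could apply to the device list.
function devicesWithin(devices: Device[], lat: number, lon: number, km: number): Device[] {
  return devices.filter((d) => distanceKm(lat, lon, d.lat, d.lon) <= km);
}
```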