I have delivered a SPA (Vue.js) web application with a Java API backend. Everything currently sits in AWS, with the frontend served from CloudFront and the backend in ECS connecting to an RDS instance.
As part of the next phase of delivery, we are creating a B2B API. My question is one of architectural design and deployment strategy: is it commonplace to just extend the existing API with B2B functionality, or should I keep the two separate with an API gateway in front? We envisage that the B2B use will eventually outgrow the SPA use case, so the initial deployment configuration needs the most flexibility to grow and expand.
Is there some sort of best practice here? I imagine that a lot of code would be similar between the two backends as well.
Thanks,
Terry
First off: deciding on service boundaries is one of the most difficult problems in service-oriented architecture design, and the answer strongly depends on your exact domain requirements.
Usually I would split service implementations by domain/function as well as by organizational concerns (e.g. separate teams developing them), and not by their target audience. This avoids awkward situations where team responsibility is unclear. If the project grows very large, there may also be a need for multiple layers of services and shared libraries, and at that point you would likely run into necessary refactorings/restructurings.
So if there is a very large overlap in function between your B2B and the regular API, you may not want to split the implementation.
However, you also have to consider how access to the service is provided. An API gateway could help with providing different endpoints for the different audiences, different charging models, different auth options, etc. Depending on your exact requirements, an API gateway may not be enough, and you may also need a thin service-layer implementation that uses common domain services.
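To make the "thin service layer" idea concrete, here is a minimal sketch, assuming a Spring-style Java stack (the question only says "Java API backend", so the framework and all class, path, and DTO names are illustrative): both audiences share one domain service, while each facade owns its own path, contract, and DTO.

```java
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

record Order(String id, String status, String internalNotes) {}

// DTO shaped for the SPA, which may want UI-oriented detail.
record OrderViewDto(String id, String status, String internalNotes) {
    static OrderViewDto from(Order o) {
        return new OrderViewDto(o.id(), o.status(), o.internalNotes());
    }
}

// DTO shaped for B2B partners: a deliberately small, stable contract.
record OrderPartnerDto(String id, String status) {
    static OrderPartnerDto from(Order o) {
        return new OrderPartnerDto(o.id(), o.status());
    }
}

// The common domain service both audiences share (business logic stubbed).
@Service
class OrderDomainService {
    Order findOrder(String id) {
        return new Order(id, "SHIPPED", "picked by warehouse 3");
    }
}

// Thin facade for the existing SPA.
@RestController
@RequestMapping("/api/orders")
class SpaOrderController {
    private final OrderDomainService domain;
    SpaOrderController(OrderDomainService domain) { this.domain = domain; }

    @GetMapping("/{id}")
    OrderViewDto get(@PathVariable String id) {
        return OrderViewDto.from(domain.findOrder(id));
    }
}

// Thin facade for B2B partners, versioned from day one.
@RestController
@RequestMapping("/b2b/v1/orders")
class B2bOrderController {
    private final OrderDomainService domain;
    B2bOrderController(OrderDomainService domain) { this.domain = domain; }

    @GetMapping("/{id}")
    OrderPartnerDto get(@PathVariable String id) {
        return OrderPartnerDto.from(domain.findOrder(id));
    }
}
```

If the B2B side later outgrows the SPA, the B2B facade can be lifted out into its own deployment while the domain service moves behind it, without the SPA contract ever changing.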
Context
Let's imagine a simple microservices architecture (e.g. 2-3 microservices). The microservices are domain-based, an API gateway is in place, and everything is as it should be. At the same time, the microservices' APIs are consumed by public mobile applications, an admin UI, and other services for S2S communication; hence, we have three possible API consumers. Depending on the consumer, the response DTOs differ, but the business process might be the same (e.g. the response for the GET /users endpoint has different DTOs for a consumer application and the admin UI, but technically the data is taken from the same DB).
Question
How do you segment APIs in that case? Do you use namespaces like external, internal, etc.?
Also, feel free to share your experience of how you segment APIs.
Thanks in advance!
From my point of view, the APIs should be different depending on the type of consumer that is going to use them.
For example, in your use case, the API intended to provide simple user information should not be the same one used by an administrator. You should define two different APIs in this case, with different paths like internal/users and external/users as you said, and internally these two endpoints can use the same logic.
This separation is good not only for returning different DTOs from each endpoint but also for defining different security (authentication/authorization) mechanisms for each API, because I suppose these requirements will differ between an admin API and a general user one.
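A minimal sketch of that idea, assuming Spring and Spring Security (neither is named in the question, and all names are illustrative): both paths reuse the same lookup logic, but each gets its own DTO and its own authorization rule.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// One underlying user record, two response shapes.
record User(String id, String name, String email, boolean banned) {}
record PublicUserDto(String id, String name) {}
record AdminUserDto(String id, String name, String email, boolean banned) {}

@RestController
class UserController {

    // Shared lookup logic; both endpoints read the same data (stubbed here).
    private User load(String id) {
        return new User(id, "Ada", "ada@example.com", false);
    }

    @GetMapping("/external/users/{id}")
    PublicUserDto externalGet(@PathVariable String id) {
        User u = load(id);
        return new PublicUserDto(u.id(), u.name());      // minimal public view
    }

    @GetMapping("/internal/users/{id}")
    AdminUserDto internalGet(@PathVariable String id) {
        User u = load(id);
        return new AdminUserDto(u.id(), u.name(), u.email(), u.banned()); // full admin view
    }
}

// Different authentication/authorization rules per path prefix
// (assumes Spring Security 6-style configuration).
@Configuration
class ApiSecurityConfig {
    @Bean
    SecurityFilterChain apiChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                .requestMatchers("/internal/**").hasRole("ADMIN")
                .requestMatchers("/external/**").authenticated()
                .anyRequest().denyAll());
        return http.build();
    }
}
```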
It depends a bit on the philosophy you want to adopt.
The one suggested by #JArgente is good, in that you'd get good separation, and the role of each is (or at least should be) very clear.
The other approach is layering, which (for the OO programmers out there) is a bit like developing overloads for a method. It assumes that the data required by the derived APIs is provided by the base API. So:
Develop a base API that provides all the data this API family needs to provide. This API might be the one that internal users use (e.g. Admin User), and it could require authentication.
Develop a second API that consumes the base API; this one would be your public-facing one.
Each API has a separate API spec; depending on how you do this, you can leverage inheritance at the spec level.
Each API also has an actual endpoint which triggers some sort of processing - e.g. logic within the API Gateway itself, or logic handled within a downstream component like a microservice.
The public-facing one can be anonymous, as long as something (e.g. the API Gateway) can make an authenticated call against the base API, using some kind of 'service account'.
The advantage here is that you still get good separation between the different APIs and their consumers, but you also get the advantages of inheritance, so that code duplication is reduced (testing effort isn't so diffuse, etc.).
This approach also allows you to run the endpoints on the same API gateway, or deploy them on separate ones (internal vs. external).
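As a hedged illustration of the layering approach, the sketch below uses Java's built-in java.net.http client; the URL, token handling, and class names are assumptions, not part of the original answer. The public endpoint accepts anonymous callers, and the authenticated hop to the base API happens server-side using the 'service account' credential.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical public-facing handler that fronts an authenticated base API.
class PublicUsersEndpoint {

    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseApiUrl;   // e.g. "https://internal.example.com/base/users" (assumed)
    private final String serviceToken; // credential for the gateway's 'service account'

    PublicUsersEndpoint(String baseApiUrl, String serviceToken) {
        this.baseApiUrl = baseApiUrl;
        this.serviceToken = serviceToken;
    }

    // Anonymous callers hit this; the authenticated call happens here.
    String getUser(String id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseApiUrl + "/" + id))
                .header("Authorization", "Bearer " + serviceToken)
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // A real implementation would strip or reshape fields here before
        // returning the payload to the public caller.
        return response.body();
    }
}
```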
We developed a monolithic API to be used as a SaaS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following:
Can microservices be reused across different projects?
If we do split, do we create one microservice that everybody uses, or do we create an instance per custom build?
E.g.:
project A uses "User" and "Project", so we deploy 2 microservices
project B uses "User", "Project" and "Store", so we deploy 3 microservices
total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance per microservice that everybody uses, and specify the project source?
We are using C# with GraphQL.
We also thought about creating a NuGet package for each component, so each package will contain:
Exposed GraphQL Queries / Mutations
Its own DB
Its own logic
E.g.:
- Api A installs the "User" & "Project" packages
- 3 DBs are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- Api B installs the "User", "Project" & "Store" packages
- 4 DBs are instantiated: "Api.B", "Api.B.User", "Api.B.Project" & "Api.B.Store"
But does it make sense to do that?
In my mind it could be very similar to Hangfire: https://www.hangfire.io/
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team of 2-4.
We are very open-minded, so any suggestion is good to hear.
Thank you!
First of all, I would like to say that there is no single right way here; I am providing my point of view based on the way we have done things, hoping it will guide you in finding the solution best suited to your requirements.
So, to restate your dilemma: you have a base vanilla product, an API SaaS, and there are customized deployments for some customers as well. But as you build custom deployments for each customer, you are noticing a common pattern, wherein a lot of functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement right, I would say microservices will provide definite benefits in your case in terms of scaling and customer-specific customization, which can be managed by independent teams.
But a lot of this depends on how your business logic is structured and how big your customizations are. Some questions that should drive your solution are:
Can you store customer-specific data in a central data store, or must it live at the customer's end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they affect workflows?
How much cross-communication do you expect across services like User, Store, and Project? Is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now, moving on to some important things you might want to consider after answering the above questions.
Consideration 1. If the data stores can live in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But from the data provided I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with one microservice per customer which talks only to that customer's database.
Consideration 2. As the norm goes, customization should be separated from the service itself, and every service should take configuration as input, which drives the service's behavior. Again, depending on how big your customization is, there can be a limit to this configuration; in those cases, I would recommend creating a new service with the customizations built in.
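A small sketch of what such configuration-driven behavior could look like, written in Java for illustration (the question uses C#, so treat this as an analogy; the record fields and feature names are invented):

```java
import java.util.Map;
import java.util.Set;

// Hypothetical per-customer configuration driving service behaviour,
// instead of forking the service code for every custom build.
record CustomerConfig(Set<String> enabledFeatures,
                      Map<String, String> workflowOverrides) {}

class ProjectService {

    private final CustomerConfig config;

    ProjectService(CustomerConfig config) {
        this.config = config;
    }

    // Vanilla behaviour unless the customer's config overrides it.
    String approvalWorkflowFor(String projectType) {
        return config.workflowOverrides().getOrDefault(projectType, "default-approval");
    }

    // Feature toggles decide whether the optional Store integration is active.
    boolean storeEnabled() {
        return config.enabledFeatures().contains("store");
    }
}
```

The same artifact is then deployed for every customer; only the configuration it loads at startup differs. When a customization no longer fits in configuration, that is the signal to split out a specialized service, as described above.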
Consideration 3. This is a major factor in deciding how many microservices you may have. The boundary of each service should be defined by the domain; for example, a User service, a Store service, and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer's deployment is just a specialized instance of these services.
OK, now that this is done, let's go over your primary questions...
Can microservices be reused across different projects?
-- Yes, you can, but again it depends on how you have designed the business workflow and configuration injection.
If we do split, do we create a microservice that everybody uses or do we create an instance per custom build?
-- An instance per custom build would be the ideal scenario, enabling separation of concerns across projects, as we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be cases where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
-- Communication across microservices is an important factor and is more often than not unavoidable. So, considering you will require some form of cross-microservice communication, you can look at an enterprise service bus or direct API calls, based on your requirements. But it is a well-understood problem, in my opinion.
Or do we stick with one instance per microservice that everybody uses, and specify the project source?
-- I would not recommend this, as the scheme described in your question for injecting databases per module doesn't sound like a good idea to me. It would lead to a highly coupled system design, and it might also mean that if one service fails, all your customer sites go down. You surely wouldn't want that!
As part of our journey towards API-led Connectivity, we have to group our resources (i.e. API endpoints) into multiple Mule applications for the experience APIs.
In order to have meaningful names for the Mule applications while maintaining maximum reusability, rather than associating consumer names with the application names (which makes the experience API tightly coupled to the current application landscape), we propose that the Mule application names reflect the essence of the business.
The options are as follows. Which one do you think is best? What approach have you used in your organization?
Based on Channel/Consumer
A dedicated experience API per consumer, such as web, CRM, mobile, etc.
URI examples:
www.example.com/example-**web**-application/v1/
www.example.com/example-**crm**-application/v1/
www.example.com/example-**mobile**-application/v1/
Pros: applying channel-specific policies is easier, management becomes easier, smaller outage window
Cons: reusability is reduced, and the chance of duplicating objects across APIs increases
Based on Business Domain
The company data model is used, e.g. Customer, Product, Payment, etc.
URI examples:
www.example.com/example-**customers**-application/v1/
www.example.com/example-**products**-application/v1/
www.example.com/example-**payments**-application/v1/
Pros: promotes reusability, channel agnostic, the same API can be used across different consumers
Cons: management might get complex, larger outage window, multiple consumers might be impacted
Based on Customer Journey
This approach is tied to the customer's lifecycle with the organization, e.g. Prospective Customer --> Lead --> Engage --> Payments --> Customer Retention
URI examples:
www.example.com/example-**prospect**-application/v1/
www.example.com/example-**lead**-application/v1/
www.example.com/example-**engage**-application/v1/
Pros: channel agnostic, the same API can be used across different consumers
Cons: can get increasingly big, and further breakdown might still be required
Thanks.
As far as I understand your question, you would like to know which URIs to use for the endpoints of the experience APIs, right?
According to a blog entry from MuleSoft (July 12, 2017):

"Experience APIs are the means by which data can be reconfigured so that it is most easily consumed by its intended audience, all from a common data source, rather than setting up separate point-to-point integrations for each channel. An Experience API is usually created with API-first design principles where the API is designed for the specific user experience in mind."
Based on the examples from MuleSoft and my understanding, experience APIs are created for one given "experience": web, virtual reality, mobile, etc.
You are trying to create an API for a given special experience to make the consumption of the API easy for this specific client.
In my understanding, the main goal at this level is not reusability; you focus on reusability at the System API and Process API levels. The Experience APIs are supposed to make life easier for the developers of the different clients by providing exactly the interface and data they need. The clients don't have to communicate directly with the system and process APIs; instead they get a tailor-made API suiting exactly their special needs.
Since the experience API is tailor-made for a specific experience/channel/client application, I think representing this in the URI is a good idea.
Could anyone kindly differentiate between a System API and a Process API?
Please answer in generic terms, as I am unable to find this on the internet.
A system API abstracts an existing system. It talks to the system in the language of the system (e.g. SOAP, direct Java calls, SAP calls, etc.). To the outside world it offers a clean API (usually REST with HTTP and JSON). When you do a good job implementing your system API, you can exchange your existing system with a different/new one without changing the API of your system API to the outside world: just implement a new system API with different adapter logic.
A process API should talk REST on "both ends". It calls one or several system APIs to do its job; the process API orchestrates the different jobs.
If you need more information, do a search for "API-led connectivity".
A System API is a layer you build on top of a system, which handles all system-specific connection quirks and settings. It then exposes the system's resources and logic in a standard format (usually REST, but you're free to choose something else like SOAP) to the rest of your APIs. As Roger Butenuth states:
"When you do a good job implementing your system api, you can exchange
your existing system with a different/new one without changing the api
of your system api to the outside world: Just implement a new system
api with different adapter logic."
A process API is where you keep your logic and orchestration; it does not 'talk' to end systems directly but instead connects to system APIs to get its data.
A process API should ideally talk only REST on both sides, and it can aggregate data from multiple systems.
An example of a complex process API would be an "items you've ordered" API, which takes a user ID as its input and then talks to the system API of a CRM system to get the ID used by the "order history system API".
However, that API might only return a list of orders without any article information besides an article ID. So our process API then enriches this list with article information fetched from the "article information system API", using the IDs from the list.
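Here is a rough sketch of that orchestration in Java; the three system-API interfaces are stand-ins for whatever clients you would actually write or generate, and every name is illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Stand-ins for the three system-API clients described above.
interface CrmSystemApi     { String orderHistoryIdFor(String userId); }
interface OrderSystemApi   { List<String> orderedArticleIds(String historyId); }
interface ArticleSystemApi { Map<String, String> articleNames(List<String> ids); }

// The process API holds only orchestration: translate the user ID,
// fetch the orders, then enrich them with article information.
class OrderedItemsProcessApi {

    private final CrmSystemApi crm;
    private final OrderSystemApi orders;
    private final ArticleSystemApi articles;

    OrderedItemsProcessApi(CrmSystemApi crm, OrderSystemApi orders, ArticleSystemApi articles) {
        this.crm = crm;
        this.orders = orders;
        this.articles = articles;
    }

    List<String> itemsOrderedBy(String userId) {
        String historyId = crm.orderHistoryIdFor(userId);
        List<String> articleIds = orders.orderedArticleIds(historyId);
        Map<String, String> names = articles.articleNames(articleIds);
        return articleIds.stream()
                .map(id -> id + ": " + names.getOrDefault(id, "unknown article"))
                .collect(Collectors.toList());
    }
}
```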
I know it's out of the scope of the question, but for the sake of completeness I'll briefly explain the third variant as well:
An Experience API can be seen as a doorway into your API network; every (type of) client has different information needs and may communicate over different protocols.
It is the Experience API's responsibility to provide ALL the information required by a client in a format they support.
This takes away the client's responsibility for knowing where the information needs to be fetched from (customer info from the CRM, order info from proprietary system one, article info from the article DB).
A bonus of this design concept: if, for example, the mobile app your company is making gets some new functionality which requires extra data, you can update the "mobile app experience API" while leaving your "super-expensive IBM experience API" unchanged. This cuts down on development costs, as you don't need to implement changes for your other API consumers, which would be the case had you had only one API.
I think the main difference is where you implement the business processes and rules/logic.
System APIs, within the scope of your design, are atomic APIs which are used to construct higher-level APIs (experience APIs). Process APIs are the orchestration layer, where you can use MuleSoft flows to implement business processes or logic.
System APIs do the heavy lifting of CRUD operations.
Process APIs focus on business logic
System APIs: underlying all IT architectures are core systems of record that are often not readily accessible because of their complexity and connectivity concerns. System APIs provide a means of hiding that complexity from the user while exposing data and providing downstream insulation from any interface changes or rationalization of those systems.
Process APIs encapsulate the underlying business processes that interact with source and target systems or channels through a set of system APIs. For instance, in a purchase-order process, there is some logic that is common across products, geographies and retail channels that can and should be distilled into a single service which can then be called.
And you will get some more clarity from this article https://dzone.com/articles/api-the-backbone-of-the-software-industry-know-how
System APIs and Process APIs are both part of API-led connectivity.
A System API is like a wrapper service over a main database or SaaS platform.
A Process API involves application logic, such as validating search or query parameters.
We're just in the design process at the minute, but I would like some advice from people who have deployed a similar solution and can share the experiences they have had.
I would suggest splitting it into two separate APIs, one public and one private. You have a lot more flexibility to make changes when your users are all internal vs. external. In addition, internal users typically need/expect more change in the system. The security considerations are also very different for internal vs. external APIs.
You can mitigate DRY issues by having the internal API call the external API where appropriate.
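For illustration, here is a minimal in-process sketch of that mitigation in Java (all types are invented; in practice the internal API might call the external API over HTTP instead):

```java
// Illustrative types only; the real contract would be your external API's DTOs.
record PublicOrder(String id, String status) {}
record InternalOrder(PublicOrder base, String warehouseNotes) {}

class ExternalOrderApi {
    // Stable public behaviour (stubbed).
    PublicOrder getOrder(String id) {
        return new PublicOrder(id, "SHIPPED");
    }
}

class InternalOrderApi {

    private final ExternalOrderApi external = new ExternalOrderApi();

    InternalOrder getOrder(String id) {
        // Reuse the public behaviour instead of duplicating it...
        PublicOrder base = external.getOrder(id);
        // ...then enrich with detail only internal users may see.
        return new InternalOrder(base, "picked by warehouse 3");
    }
}
```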