The Bezos API Mandate speaks volumes about how externalized APIs must be designed.
However, it is unclear from the points listed in the mandate how databases for microservices are maintained.
Do teams (services) use a shared schema and manage data handling/processing with their own separate microservice (a DAO service)?
Do teams (services) have their own isolated schemas and database engines?
Thank you!
Please go through the twelve factors (the twelve-factor app methodology) for microservices.
The answer to your question, in simple words, is: every microservice has its own isolated database (maybe a dedicated table, or in NoSQL a separate bucket for that microservice). Most importantly, only that microservice can interact with its database; all other services must go through that service (e.g. via REST/HTTP or a message bus).
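To make that concrete, here is a minimal sketch (Python; the service names, ports, schema, and use of Flask/requests/sqlite3 are all invented for illustration, not taken from the mandate or the twelve-factor docs) of two services where the orders service never opens the users database; it asks the users service over HTTP instead:

    # users_service.py - hypothetical "users" service that owns users.db.
    # Only this process ever opens users.db; everyone else must use its HTTP API.
    import sqlite3
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    @app.get("/users/<int:user_id>")
    def get_user(user_id):
        con = sqlite3.connect("users.db")  # private to this service
        row = con.execute("SELECT id, name FROM users WHERE id = ?",
                          (user_id,)).fetchone()
        con.close()
        if row is None:
            abort(404)
        return jsonify({"id": row[0], "name": row[1]})

    if __name__ == "__main__":
        app.run(port=5001)

    # orders_service.py - hypothetical "orders" service with its own orders.db.
    # It never queries users.db directly; it calls the users service's API.
    import requests

    def enrich_order(order):
        resp = requests.get(f"http://localhost:5001/users/{order['user_id']}")
        resp.raise_for_status()
        order["user"] = resp.json()  # user data obtained via the owning service
        return order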
Read this link, which gives a detailed explanation:
https://12factor.net/backing-services
See the URL below:
https://www.nginx.com/blog/microservices-reference-architecture-nginx-twelve-factor-app/
Context
Let's imagine a simple microservices architecture (e.g. 2-3 microservices). Microservices are domain-based, an API gateway is in place, and everything is as it should be. At the same time, the microservices' APIs are consumed by public mobile applications, an admin UI, and other services for S2S communication; hence, we have three possible API consumers. Depending on the consumer, the response DTOs are different, but the business process might be the same (e.g. the response for the GET /users endpoint has different DTOs for a consumer application and the admin UI, but technically the data is taken from the same DB).
Question
How do you segment APIs in that case? Do you use namespaces like external, internal, etc.?
Also, feel free to share your experience of how you segment APIs.
Thanks in advance!
From my point of view, the APIs should be different depending on the type of consumer that is going to use them.
For example, in your use case, the API intended to provide simple user information could not be the same one used by an administrator. You should define two different APIs in this case, with different paths like internal/users and external/users as you said, and internally these two endpoints can share the same logic.
This separation is good not only for returning different DTOs from each endpoint, but also for defining different security (authentication/authorization) mechanisms for each API, because I suppose these requirements will differ between an admin API and a general user one.
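For illustration, here is a minimal sketch (Python for brevity; the field names and the load_user helper are hypothetical) of two endpoints that share the same logic and data source but map to different DTOs:

    from dataclasses import dataclass, asdict

    @dataclass
    class PublicUserDto:      # reduced view for external consumers
        id: int
        display_name: str

    @dataclass
    class AdminUserDto:       # full view for the admin UI
        id: int
        display_name: str
        email: str
        last_login: str

    def load_user(user_id):   # shared logic: the same DB row feeds both APIs
        return {"id": user_id, "display_name": "jdoe",
                "email": "jdoe@example.com", "last_login": "2020-01-01"}

    def get_user_external(user_id):   # e.g. behind GET /external/users/{id}
        u = load_user(user_id)
        return asdict(PublicUserDto(u["id"], u["display_name"]))

    def get_user_internal(user_id):   # e.g. behind GET /internal/users/{id}
        return asdict(AdminUserDto(**load_user(user_id)))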
It depends a bit on the philosophy you want to adopt.
The one suggested by @JArgente is good, in that you'd get good separation, and the role of each is (or at least should be) very clear.
The other approach is layering, which (for the OO programmers out there) is a bit like developing overloads for a method. It assumes that the data required by the derived APIs is provided by the base API. So:
Develop a base API that provides all the data this API family needs to provide. This API might be the one that internal users use (e.g. Admin User), and it could require authentication.
Develop a second API that consumes the base API. This one would be your public-facing one.
Each API has a separate API spec; depending on how you do this, you can leverage inheritance at the spec level.
Each API also has an actual endpoint which triggers some sort of processing - e.g. logic within the API Gateway itself, or logic handled within a downstream component like a microservice.
The public-facing one can be anonymous, as long as something (e.g. the API Gateway) can make an authenticated call against the base API, using some kind of 'service account'.
The advantage here is that you still get good separation between the different APIs and their consumers, but you also get the advantages of inheritance, so that code duplication is reduced (testing effort isn't so diffuse, etc.).
This approach also allows you to run the endpoints on the same API Gateway, or deploy them on separate ones (internal vs external).
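A rough sketch of the layering idea (Python; the endpoint path and the service-account scheme are invented for illustration):

    import requests

    BASE_API = "https://internal.example.com/api/users"  # hypothetical base API
    SERVICE_TOKEN = "service-account-token"              # placeholder credential

    def public_get_user(user_id):
        # The public layer calls the authenticated base API with a service
        # account, then exposes only the subset an anonymous caller may see.
        resp = requests.get(f"{BASE_API}/{user_id}",
                            headers={"Authorization": f"Bearer {SERVICE_TOKEN}"})
        resp.raise_for_status()
        full = resp.json()
        return {"id": full["id"], "display_name": full["display_name"]}

The same pass-through could equally be implemented as logic within the API Gateway itself rather than as code in a downstream component.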
I have recently delivered a SPA (Vue.js) web application with a Java API backend. Everything currently sits in AWS, with the frontend in CloudFront and the backend in ECS connecting to an RDS instance.
As part of the next phase of delivery, we are creating a B2B API. My question is one of architectural design and deployment strategy: is it commonplace to just extend the existing API with B2B functionality, or should I keep the two separate with an API gateway in front? We envisage that B2B use will eventually outgrow the SPA use case, so the initial deployment configuration needs the flexibility to grow and expand.
Is there some sort of best practice here? I imagine that a lot of code would be similar between the two backends as well.
Thanks,
Terry
First off - deciding on service boundaries is one of the most difficult problems in service-oriented architecture design, and the answer strongly depends on your exact domain requirements.
Usually I would split service implementations by domain/function as well as by organizational concerns (e.g. separate teams developing them), and not by their target audience. This avoids awkward situations where team responsibility is unclear. If the project grows very large, there may also be a need for multiple layers of services and shared libraries - and at that point you would likely run into necessary refactorings/restructurings.
So if there is a very large overlap in functionality between your B2B and regular APIs, you may not want to split the implementation.
However, you also have to consider how service access is provided: an API gateway could help with providing different endpoints for the different audiences, different charging models, different auth options, etc. Depending on your exact requirements, an API gateway may not be enough, and you may also need another thin service layer that uses common domain services.
We developed a monolithic API to be used as SaaS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following:
Can microservices be reused across different projects?
If we do split, do we create one microservice that everybody uses, or do we create an instance per custom build?
E.g.:
Project A uses "User" and "Project", so we deploy 2 microservices
Project B uses "User", "Project" and "Store", so we deploy 3 microservices
Total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance of each microservice that everybody uses, and we specify the project source?
We are using C# with GraphQL.
We also thought about creating a NuGet package for each component, so each package would contain:
Exposed GraphQL queries/mutations
Its own DB
Its own logic
E.g.:
- API A installs the "User" & "Project" packages
- 3 DBs are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- API B installs the "User", "Project" & "Store" packages
- 4 DBs are instantiated: "Api.B", "Api.B.User", "Api.B.Project" & "Api.B.Store"
But does it make sense to do that?
In my mind it would be very similar to Hangfire https://www.hangfire.io/
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team of 2-4.
We are very open-minded, so any suggestion is good to hear.
Thank you!
First of all, I would like to say that there is no single right way here; I am providing my point of view based on how we have done things before, hoping it will guide you toward the solution best suited to your requirements.
So, to restate your dilemma: you have a base vanilla product, an API SaaS, and there are customized deployments for some customers as well. Because you have to build a custom deployment for each customer, you are noticing a common pattern, wherein a lot of the functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement right, I would say microservices will provide definite benefits in your case, in terms of scaling and customer-specific customization managed by independent teams.
But a lot of this depends on how your business logic is structured and how big and vast your customization is. Some of the questions that should drive your solution are:
Can you store customer-specific data in a central data store, or at the customer's end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they change workflows?
How much cross-communication do you expect across the various services like User, Store, and Project, and is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now, moving on to some of the important things you might want to consider after answering the above questions.
Consideration 1. If the data stores can live in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But from the data provided I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with one microservice, or one microservice per customer, that talks only to that customer's database (more in fig. 1).
Consideration 2. Customization, as far as the norm goes, should be separated from the service itself: every service should take a configuration input that drives its behavior (see the sketch after these considerations). Again, depending on how big your customization is, there can be a limit to this configuration, and in those cases I would recommend creating a new service with the customizations built in.
Consideration 3. This is a major factor in deciding the number of microservices you may have: the boundary of each service should be defined by the domain, for example a User service, a Store service, and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer's deployment is just a specialized instance of these services.
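To illustrate Consideration 2, here is a minimal sketch (Python; the config file layout and keys are invented for illustration) of a service that loads a per-customer configuration at startup and lets that configuration, rather than customer-specific code, drive its behavior:

    import json

    def load_customer_config(path):
        # e.g. config/customer_a.json: {"currency": "EUR", "enable_store": true}
        with open(path) as f:
            return json.load(f)

    class ProjectService:
        def __init__(self, config):
            self.config = config

        def enabled_features(self):
            # Behavior branches on configuration, not on customer-specific code.
            features = ["user", "project"]
            if self.config.get("enable_store", False):
                features.append("store")
            return features

    svc = ProjectService(load_customer_config("config/customer_a.json"))
    print(svc.enabled_features())  # e.g. ['user', 'project', 'store']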
OK, now that this is done, let's go over your primary questions...
Can microservices be reused across different projects?
-- Yes you can, but again it depends on how you have designed the business workflow and the configuration injection.
If we do split, do we create a microservice that everybody uses, or do we create an instance per custom build?
-- An instance per custom build would be the ideal scenario, enabling separation of concerns across projects, as we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be cases where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
-- Communication across microservices is an important factor that is more often than not unavoidable. So, considering you will require some form of cross-microservice communication, you can look at an enterprise bus or API-based communication, depending on your requirements. But in my opinion it is a known, manageable problem.
Or do we stick with one instance per microservice that everybody uses, and we specify the project source?
-- I would not recommend this; the module-for-database-injection example stated in your question doesn't sound like a good idea to me. It would produce a highly coupled system design, and it might also mean that if one service fails, all your customer sites go down. You surely wouldn't want that!
Now, as they say, a picture is worth a thousand words...
I am just in the design process at the minute, but I would like some advice from people who have deployed a similar solution and can share the experiences they have had.
I would suggest splitting it into two separate APIs, one public and one private. You have a lot more flexibility to make changes when your users are all internal vs. external. In addition, internal users typically need/expect more change in the system. The security considerations are also very different for internal vs. external APIs.
You can mitigate DRY issues by having the internal API call the external API where appropriate.
Do any APIs/Libraries/tools exist that act as adapters/provider interfaces for accessing different cloud storage services through a common interface? Something similar to ODBC or OLE-DB, except for cloud storage instead of databases.
Such that, if I wrote a front end for taking notes, utilized such an API, and let the user provide configuration for which cloud storage provider they have an account with, the API library would handle translating my cloud.Save() call into the commands specific to whichever provider was being utilized. This would allow my front-end app to be cloud-storage-provider agnostic.
So maybe I write a Chrome extension or a portable thumb-drive app for storing notes, or encrypting and storing passwords, or some such, and you tell it which cloud storage provider you have an account with, and it uses that for syncing. This way your use of the tool doesn't tie you to a specific cloud provider. As long as you back up your data, you could migrate to another provider and just reconfigure the app should you become unhappy with that provider or they go bankrupt.
WebDAV, for example, is one potential candidate, since some storage services offer it, but it is not quite what I have in mind, because it depends on the storage providers to offer it as an option. I also don't know enough about WebDAV to know whether it really would serve in the capacity I'm imagining. But feel free to post it as an option with pros/cons for comment/discussion.
I imagine instead a middle layer external to each cloud provider. Since each provider offers a different web service for interacting with files, the middle layer would have an adapter for each backend, but on the front end it would expose a common, provider-agnostic API.
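Something like this minimal sketch (Python; the class and method names are hypothetical, and real provider SDK calls would go where the comments are):

    from abc import ABC, abstractmethod

    class CloudStorage(ABC):
        """The common, provider-agnostic front-end interface."""
        @abstractmethod
        def save(self, name, data): ...
        @abstractmethod
        def load(self, name): ...

    class DropboxAdapter(CloudStorage):
        def save(self, name, data):
            pass  # translate to Dropbox's upload call here
        def load(self, name):
            pass  # translate to Dropbox's download call here

    class S3Adapter(CloudStorage):
        def save(self, name, data):
            pass  # translate to S3 PutObject here
        def load(self, name):
            pass  # translate to S3 GetObject here

    def make_storage(user_config):
        # User configuration picks the backend; the app only sees CloudStorage.
        adapters = {"dropbox": DropboxAdapter, "s3": S3Adapter}
        return adapters[user_config["provider"]]()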
Does anything of this type exist?
Even just an open-source GUI that lets you store files with any provider would imply that the beginnings of such a middle layer exist in its source code. I would think someone has already made a tool that helps you unify all the free GB you can get from various services, sort of a JBOD layer for the cloud (although that is not the goal of this post; the point is that such a tool accessing many different services would imply it has the beginnings of a middle layer for standardizing access to them).
My main interest, though, is in abstractions for personal cloud storage services that are appropriate for applications used by individuals, to put control of storage in the hands of the individual so they have the freedom to move between personal cloud storage services. What I've found so far seems more oriented toward CDNs, websites, or services.
Please make separate posts per suggestion so that votes and comments/discussion can take place specific to each suggestion.
Kloudless provides a common API to several different cloud storage APIs (Dropbox, Box, GDrive, OneDrive, etc.). Kloudless also provides SDKs in popular languages and UI widgets to handle authentication and other user interactions.
You can find more information and sign up here: https://kloudless.com/
Disclosure: I work at Kloudless.
Apache jclouds presents cloud-agnostic abstractions, with stable implementations of ComputeService and BlobStore.
Visit https://jclouds.apache.org/
Apache jclouds® is an open source multi-cloud toolkit for the Java platform that gives you the freedom to create applications that are portable across clouds while giving you full control to use cloud-specific features.
Apache Libcloud: "a unified interface to the cloud"
http://libcloud.apache.org/
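For example, a minimal upload through Libcloud's storage API might look roughly like this (the bucket name and credentials are placeholders; check the Libcloud docs for your provider's exact driver options):

    # Sketch: one file upload through Libcloud's provider-agnostic storage API.
    from libcloud.storage.types import Provider
    from libcloud.storage.providers import get_driver

    driver_cls = get_driver(Provider.S3)             # swap in another Provider
    driver = driver_cls('ACCESS_KEY', 'SECRET_KEY')  # placeholder credentials

    container = driver.get_container(container_name='my-notes-bucket')
    driver.upload_object('/tmp/notes.txt', container=container,
                         object_name='notes.txt')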
A couple of months ago I did a survey of personal cloud storage aggregator services and applications, and one seems relevant to your question.
Otixo is a service that connects multiple cloud storage services and includes a WebDAV service for accessing its own service.
Cloud storage providers each have different specifics, which makes it hard to use exactly one interface for all (or even some) of them. The CloudBlackbox package of our SecureBlackbox product offers a unified interface for major storage providers (S3, Azure, Google Drive, SkyDrive/OneDrive, Dropbox) with a focus on data security, but due to the mentioned specifics we have individual classes (descendants of one superclass) to serve each provider. SecureBlackbox is available for use from .NET, Java, and C++ on Windows, and Delphi.
Check out Boto, a highly regarded Python library which provides an abstraction layer atop Amazon's S3 and Google Cloud Storage.
https://github.com/boto/boto
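A minimal sketch with the classic boto 2 API (the bucket name is a placeholder; credentials are read from ~/.boto or environment variables):

    # Sketch: saving and reading a note via boto 2's S3 interface.
    import boto
    from boto.s3.key import Key

    conn = boto.connect_s3()                     # or boto.connect_gs() for Google
    bucket = conn.get_bucket('my-notes-bucket')  # existing bucket, placeholder name

    key = Key(bucket)
    key.key = 'notes.txt'
    key.set_contents_from_string('remember the milk')  # upload
    print(key.get_contents_as_string())                # download it back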
-StorageMadeEasy (SME)
-Otixo (but they no longer offer a free tier, as of Feb 2013)
-Joukuu
-Gladinet
-Egistec CloudHub
...
All of the above allow you to connect several cloud storage services, but they do not actually combine them.
If you want to combine several personal cloud storage services, you need to build that yourself, which is what I have been doing for the past few months.
So far I have combined several clouds (Dropbox, Box, Google Drive, SkyDrive) using their Android APIs/SDKs, and I handle the data splitting/merging/compression/encryption inside my Android application (not a good choice, just for the sake of a prototype).
In the future I may add more providers that have an API, such as Amazon S3 and SugarSync, but right now there is a lack of manpower.
If you just want to connect multiple clouds on Android (not combine them), then you can try ES File Explorer or ASTRO File Manager, among several other applications.
I think WebDAV is the ultimate protocol:
webdav->dropdav->dropbox
webdav->box.net
webdav->DAV-pocket->google drive
webdav->Otixo(free for 14 days)->SugarSync